pulumi/sdk/nodejs/Makefile

PROJECT_NAME := Pulumi Node.JS SDK
NODE_MODULE_NAME := @pulumi/pulumi
VERSION := $(shell cd ../../ && pulumictl get version --language javascript)
LANGUAGE_HOST := github.com/pulumi/pulumi/sdk/v3/nodejs/cmd/pulumi-language-nodejs
PROJECT_ROOT := $(realpath ../..)
PROJECT_PKGS := $(shell go list ./cmd...)
TESTPARALLELISM := 10
TEST_FAST_TIMEOUT := 2m
# Motivation: running `make TEST_ALL_DEPS= test_all` permits running
# `test_all` without the dependencies.
TEST_ALL_DEPS = build
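# Shared build/test machinery (RUN_TESTSUITE, GO_TEST, GO_TEST_FAST, PULUMI_BIN,
# and friends) is presumably provided by ../../build/common.mk, included below.
# A typical local workflow, as a sketch: `make build && make install && make test_fast`.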
include ../../build/common.mk
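
# Prefer locally installed node_modules binaries (tsc, eslint, istanbul, _mocha)
# by prepending the `yarn bin` directory to PATH when yarn is available.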
export PATH:=$(shell yarn bin 2>/dev/null):$(PATH)

lint::
	./node_modules/.bin/eslint -c .eslintrc.js --ext .ts .
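
# build_package compiles the TypeScript sources into bin/, copies test fixtures,
# README/LICENSE/package.json, and the protobuf definitions alongside them, and
# stamps ${VERSION} into bin/package.json and bin/version.js (reversion.js
# appears to rewrite the version fields in place).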
build_package::
	./node_modules/.bin/tsc
	cp tests/runtime/jsClosureCases_8.js bin/tests/runtime
	cp tests/runtime/jsClosureCases_10_4.js bin/tests/runtime
	cp -R tests/automation/data/. bin/tests/automation/data/
	cp README.md ../../LICENSE package.json ./dist/* bin/
	node ../../scripts/reversion.js bin/package.json ${VERSION}
	node ../../scripts/reversion.js bin/version.js ${VERSION}
	cp -R proto/. bin/proto/
	mkdir -p bin/tests/runtime/langhost/cases/
	find tests/runtime/langhost/cases/* -type d -exec cp -R {} bin/tests/runtime/langhost/cases/ \;
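
# build_plugin builds the Node.js language host (pulumi-language-nodejs),
# embedding the SDK version into the binary via -ldflags.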
build_plugin::
	go install -ldflags "-X github.com/pulumi/pulumi/sdk/v3/go/common/version.Version=${VERSION}" ${LANGUAGE_HOST}

build:: build_package build_plugin
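
# install_package copies the dynamic-provider and policy-analyzer shim scripts
# from dist/ into $(PULUMI_BIN) (presumably the local Pulumi install directory
# managed by build/common.mk).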
install_package:: build
	cp dist/pulumi-resource-pulumi-nodejs "$(PULUMI_BIN)"
	cp dist/pulumi-analyzer-policy "$(PULUMI_BIN)"

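# install_plugin installs the language host into $(PULUMI_BIN) instead of the
# default GOBIN.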
install_plugin:: build
	GOBIN=$(PULUMI_BIN) go install -ldflags "-X github.com/pulumi/pulumi/sdk/v3/go/common/version.Version=${VERSION}" ${LANGUAGE_HOST}

install:: install_package install_plugin
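
# istanbul_tests runs the compiled mocha suites under istanbul for coverage;
# Automation API specs are excluded here and exercised separately by auto_tests.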
istanbul_tests:: $(TEST_ALL_DEPS)
	$(RUN_TESTSUITE) istanbul ./node_modules/.bin/istanbul test --print none _mocha -- --timeout 120000 --exclude 'bin/tests/automation/**/*.spec.js' 'bin/tests/**/*.spec.js'
	./node_modules/.bin/istanbul report text-summary
	./node_modules/.bin/istanbul report text
	$(RUN_TESTSUITE) istanbul-with-mocks ./node_modules/.bin/istanbul test --print none _mocha -- 'bin/tests_with_mocks/**/*.spec.js'

auto_tests:: $(TEST_ALL_DEPS)
	$(RUN_TESTSUITE) auto-nodejs ./node_modules/.bin/istanbul test --print none _mocha -- --timeout 120000 'bin/tests/automation/**/*.spec.js'
	./node_modules/.bin/istanbul report text-summary
	./node_modules/.bin/istanbul report text

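# sxs_tests type-checks the SDK side by side against TypeScript 3.6 and the
# latest TypeScript release, presumably to catch compiler-compatibility regressions.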
sxs_tests:: $(TEST_ALL_DEPS)
	pushd tests/sxs_ts_3.6 && yarn ; tsc ; popd
	pushd tests/sxs_ts_latest && yarn ; tsc ; popd

test_fast:: sxs_tests istanbul_tests
	$(GO_TEST_FAST) ${PROJECT_PKGS}

test_all:: sxs_tests istanbul_tests auto_tests
	$(GO_TEST) ${PROJECT_PKGS}

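# dist installs the language host and copies the provider/analyzer shim scripts
# into the GOPATH bin directory.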
dist:: build
	go install -ldflags "-X github.com/pulumi/pulumi/sdk/v3/go/common/version.Version=${VERSION}" ${LANGUAGE_HOST}
	cp dist/pulumi-resource-pulumi-nodejs "$$(go env GOPATH)"/bin/
	cp dist/pulumi-analyzer-policy "$$(go env GOPATH)"/bin/

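# brew is the target the Homebrew formula invokes. It derives the version with
# scripts/get-version rather than pulumictl, since pulumictl is not expected to
# exist on end-user machines; otherwise an empty version would be embedded and
# `pulumi version` would report 0.0.0 (see pulumi/pulumi#6565).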
brew:: BREW_VERSION := $(shell ../../scripts/get-version HEAD)
brew::
	go install -ldflags "-X github.com/pulumi/pulumi/sdk/v3/go/common/version.Version=${BREW_VERSION}" ${LANGUAGE_HOST}
	cp dist/pulumi-resource-pulumi-nodejs "$$(go env GOPATH)"/bin/
	cp dist/pulumi-analyzer-policy "$$(go env GOPATH)"/bin/

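# publish builds the npm package and runs the repository's publish_npm.sh script.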
publish:: build_package
	bash -c ../../scripts/publish_npm.sh