* Initialize the diagnostics logger with opts.Debug when doing
a Deploy, as we already do for Plan.
* Don't spew leaked promises if there were Log.errors.
* Serialize logging RPC calls so that they can't appear out of order.
* Print stack traces in more places and, in particular, remember
the original context for any errors that may occur asynchronously,
like resource registration and calls to mapValue.
* Include origin stack traces generally in more error messages.
* Add some more mapValue test cases.
* Only have mapValue propagate undefined values during dry-runs.
This change upgrades gRPC to 1.6.0 to pick up a few bug fixes.
We also use the full address for gRPC endpoints, including the
interface name, as otherwise we pick the wrong interface on Linux.
There's a fair bit of cleanup in here, but the meat is:
* Allocate the language runtime gRPC client connection on the
goroutine that will use it; this eliminates race conditions.
* The biggie: there *appears* to be a bug in gRPC's implementation
on Linux, where it doesn't implement WaitForReady properly. The
behavior I'm observing is that RPC calls will not retry as they
are supposed to, but will instead spuriously fail during RPC
startup. To work around this, I've added manual retry logic to
the shared plugin creation function so that we won't even try
to use the client connection until it is in a well-known state
(a sketch of the idea follows this list). pulumi/pulumi-fabric#337
tracks getting to the bottom of this and, ideally, removing the
workaround.
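Below is a minimal sketch of that workaround, assuming grpc-go's
connectivity-state API (`ClientConn.GetState` / `WaitForStateChange`); the
helper name and error message are illustrative, not the exact code in the
shared plugin constructor:

```go
package plugin

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
)

// waitForReady (a hypothetical name) blocks until the client connection
// reaches the Ready state, or gives up after the timeout, so callers never
// issue RPCs against a connection still flapping through its startup states.
func waitForReady(conn *grpc.ClientConn, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		state := conn.GetState()
		if state == connectivity.Ready {
			return nil
		}
		// Block until the state changes from its current value; a false
		// return means the context expired before any change occurred.
		if !conn.WaitForStateChange(ctx, state) {
			return fmt.Errorf("plugin connection never became ready (last state: %v)", state)
		}
	}
}
```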
The other minor things are:
* Separate run.js into its own module, so it doesn't include
index.js and do a bunch of random stuff it shouldn't be doing.
* Allow run.js to be invoked without a --monitor. This makes
testing just the run part of invocation easier (including
config, which turned out to be super useful as I was debugging).
* Tidy up some messages.
* Use `global.hasOwnProperty(ident)`, rather than `global[ident] !== undefined`,
to avoid classifying references to globals as free variables. Surprise(!!),
the prior logic wouldn't work for `undefined` itself... 😒
* Expand this check to include the built-in Node.js module variables, namely
`__dirname`, `__filename`, `exports`, `module`, and `require`, so that
references to them don't get classified as serializable free variables either.
* Place catch variables in scope, so that `catch (err) { ... }` won't yield
free variables for references to `err` within `...`.
* Place recursive function definitions into the top-level `var`-like scope of
variables so that we don't consider references to them free.
* Harden all error pathways in the native C++ add-on so that we terminate
anytime an exception is in-flight, rather than limping along and making
things worse...
If a resource's planning operation is to do nothing, we can safely
assume that all of its properties are stable. This can be used during
planning to avoid cascading updates that we know will never happen.
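A hedged sketch of that rule, using illustrative types rather than the
engine's real ones: when a step's operation is a no-op, every input property
key can be reported as stable.

```go
package engine

// StepOp and OpSame are stand-ins for the engine's actual step operations.
type StepOp string

const OpSame StepOp = "same"

// stableProperties returns the property keys that planning may assume are
// stable: a do-nothing step stabilizes everything; any other step promises
// nothing here.
func stableProperties(op StepOp, inputs map[string]interface{}) []string {
	if op != OpSame {
		return nil
	}
	keys := make([]string, 0, len(inputs))
	for k := range inputs {
		keys = append(keys, k)
	}
	return keys
}
```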
This change adds the ability to perform runtime logging, including
debug logging, that wires up to the Pulumi Fabric engine in the usual
ways. Most stdout/stderr will automatically go to the right place,
but this lets us add some debug tracing in the implementation of the
runtime itself (and should come in handy in other places, like perhaps
the Pulumi Framework and even low-level end-user code).
As explained in pulumi/pulumi-fabric#293, we were a little ad hoc in
how configuration was "applied" to resource providers.
In fact, config was never communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and where, generally speaking, resource
providers never communicate directly with the language runtime, we need
to take a different approach.
As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables, filtered to
just that provider's package. This fixes pulumi/pulumi#293.
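A hedged sketch of what the engine-side call might look like; the client
interface, the package-prefix filtering, and the variable naming scheme are
assumptions for illustration, not the exact RPC schema:

```go
package engine

import (
	"context"
	"strings"
)

// resourceProvider stands in for the generated RPC client interface.
type resourceProvider interface {
	Configure(ctx context.Context, variables map[string]string) error
}

// configureProvider hands a provider only the configuration variables that
// belong to its own package, e.g. keys like "aws:config:region" for "aws".
func configureProvider(ctx context.Context, prov resourceProvider,
	pkg string, config map[string]string) error {
	vars := make(map[string]string)
	for k, v := range config {
		if strings.HasPrefix(k, pkg+":") {
			vars[k] = v
		}
	}
	return prov.Configure(ctx, vars)
}
```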
This change simplifies the provider RPC interface slightly:
1) Eliminate Get. We really don't need it anymore. There are
several possibly-interesting scenarios down the road that may
demand it, but when we get there, we can consider how best to
bring this back. Furthermore, the old-style Get remains mostly
incompatible with Terraform anyway.
2) Pass URNs, not type tokens, across the RPC boundary. This gives
the provider access to more interesting information: the type,
still, but also the name (which is no longer an object property).
A sketch of what this buys the provider follows below.
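A hedged sketch of that benefit; the URN accessor and the "::" delimiter are
illustrative stand-ins for the real resource package's URN type:

```go
package provider

import "strings"

// URN stands in for the resource package's URN type.
type URN string

// Name returns the final "::"-delimited segment; the real type exposes an
// equivalent accessor.
func (u URN) Name() string {
	parts := strings.Split(string(u), "::")
	return parts[len(parts)-1]
}

// create shows why a URN is more useful than a bare type token at the RPC
// boundary: the resource's name no longer needs to be smuggled in as an
// object property.
func create(urn URN, props map[string]interface{}) {
	name := urn.Name() // e.g. "web-server"
	_ = name
	// ... provider-specific creation logic ...
}
```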
This makes a few tweaks to get the integration tests passing:
* Add `runtime: nodejs` to the minimal example's `Lumi.yaml` file.
* Remove usage of `@lumi/lumirt { printf }` and just use `console.log`.
* Remove calls to `lumijs` in the integration test framework and
the minimal example's package.json. Instead, we just run
`yarn run build`, which itself internally just invokes `tsc`.
* Add package validation logic and eliminate the pkg/compiler/metadata
library, in favor of the simpler code in pkg/engine.
* Simplify the Node.js langhost plugin CLI, and simply take an
argument rather than a mix of required and optional --flags.
* Use a default path of "." if the program path isn't provided. This
is a legal scenario if you've passed a pwd and just want to load
the package's default module ("./index.js" or whatever main says).
* Add an executable script, lumi-langhost-nodejs, that fires up the
`bin/cmd/langhost/index.js` file to serve the Node.js language plugin.
The existing implementation of the interface (backed by the file
system) has moved into cmd/lumi. The deployment service will start to
provide its own version.
`saveEnv` had a flag that prevented an environment from being
overwritten if it already existed; it was only used by `lumi env
init`. Refactor the code so the check is done inside `lumi` instead of
against this API. We don't need this functionality for the service, and
requiring support for it at the API boundary for environments
feels like a bad idea.
We'd like to abstract out environment CRUD operations, and I'd prefer
not to bake the concept of a file-name-like thing into the
abstraction. Since we weren't really using this feature in many places,
let's just get rid of it.
The implementation of these functions will be moving out of the engine
and into `lumi` itself. It's a little easier if we move away from
spewing stuff to the diag interface, so just use glog instead (which
`lumi` already uses for logging).
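For instance, incidental tracing on these paths can go through glog at a
verbosity level rather than the diag sink (the function and message below are
illustrative):

```go
package environment

import "github.com/golang/glog"

// removeEnv is a hypothetical helper; the point is only that tracing happens
// via glog, which `lumi` already initializes, instead of the engine's diag
// interface.
func removeEnv(name string) error {
	glog.V(5).Infof("removing environment %v", name)
	// ... actual removal logic ...
	return nil
}
```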
This helps avoid running into file handle limits when creating archives that include thousands of node_modules files.
A more complete fix through all other asset-related codepaths is being tracked as part of #325.
This change ensures we close all Blobs in the asset/archive logic.
In particular, the archive.Read function returns a map of files to
Blobs, and after we are done copying the contents we must ensure
that we invoke Close; otherwise we may leak file handles, sockets,
and so on. This may or may not be the culprit behind the "too many
files open" errors we are hitting while deploying the M5 bits.
This refactors the engine so that all of its APIs are instance
methods on the type, instead of raw functions that float around and
use data from a global engine.
A mechanical change: we remove the global `E`, make anything in
pkg/engine that interacted with it an instance method, and then deal
with the fallout.
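An illustrative before/after of the shape of this change (names are
hypothetical): a package-level function that reached into the global becomes a
method on an `*Engine` receiver.

```go
package engine

// Engine carries the state that previously hung off the global E.
type Engine struct {
	// ... fields formerly reachable through the global ...
}

// Before: a free function using package-level state.
//
//     var E *Engine
//     func Deploy(env string) error { return deployWith(E, env) }
//
// After: the same operation as an instance method, with no global involved.
func (e *Engine) Deploy(env string) error {
	// ... deployment logic using e's own state ...
	return nil
}
```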
Instead of talking directly to stdout or stderr via methods on fmt,
indirect through an Engine type (presently a global, but soon to
change) to allow control of where the streams actually end up.
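A small sketch of that indirection (field names are illustrative): the engine
writes to writers it owns rather than calling fmt.Printf directly, so callers
decide where the streams end up.

```go
package engine

import (
	"fmt"
	"io"
	"os"
)

// Engine owns its output streams instead of assuming the process's stdout/stderr.
type Engine struct {
	Stdout io.Writer
	Stderr io.Writer
}

// NewDefaultEngine matches the old behavior by wiring up the real streams.
func NewDefaultEngine() *Engine {
	return &Engine{Stdout: os.Stdout, Stderr: os.Stderr}
}

// infof replaces direct fmt.Printf calls with writes to the configured stream.
func (e *Engine) infof(format string, args ...interface{}) {
	fmt.Fprintf(e.Stdout, format, args...)
}
```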
Previously, we would throw raw args arrays across the interface and the
engine would do some additional parsing. Clean this up so we don't do
that, and all the parsing stays in `lumi`.
Previous logic had assumed arrays were treated like objects
in the runtime (as in ECMAScript), but they are a unique
value kind in the current Lumi runtime, so they must be handled separately.
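A hedged illustration of the distinction, using plain Go types as stand-ins
for the runtime's value representation: arrays get their own case instead of
being folded into the object handling.

```go
package runtime

// visit walks a runtime value. Unlike ECMAScript, where an array is just an
// object with numeric keys, arrays here are a distinct kind with their own case.
func visit(v interface{}, leaf func(interface{})) {
	switch t := v.(type) {
	case []interface{}: // array kind: recurse over elements positionally
		for _, elem := range t {
			visit(elem, leaf)
		}
	case map[string]interface{}: // object kind: recurse over named properties
		for _, prop := range t {
			visit(prop, leaf)
		}
	default: // primitive kinds
		leaf(v)
	}
}
```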
Adds an `ExtraRuntimeValidation` hook to the test harness.
This runs after the test app is deployed, and can be used to test publicly
exposed endpoints on the example to validate additional runtime correctness
of the Lumi app under test.
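A hedged sketch of what such a hook might do; the signature and the endpoint
probed are illustrative, not the harness's exact API:

```go
package examples

import (
	"net/http"
	"testing"
)

// extraRuntimeValidation runs after the test app has been deployed and probes
// a publicly exposed endpoint to check runtime behavior, not just deployment
// success.
func extraRuntimeValidation(t *testing.T, baseURL string) {
	resp, err := http.Get(baseURL + "/hello") // hypothetical endpoint exposed by the example
	if err != nil {
		t.Fatalf("deployed endpoint unreachable: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Errorf("expected 200 from the deployed app, got %d", resp.StatusCode)
	}
}
```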