* Lift snapshot management out of the engine
This PR is a prerequisite for parallelism. It addresses a major problem
the engine has to deal with when performing parallel resource
construction: parallel mutation of the global snapshot. This PR adds
a `SnapshotManager` type that is responsible for maintaining and
persisting the current resource snapshot. It serializes all reads and
writes to the global snapshot and persists the snapshot to durable
storage upon every write.
As a side-effect of this, the core engine no longer needs to know about
snapshot management at all; all snapshot operations can be handled as
callbacks on deployment events. This will greatly simplify the
parallelization of the core engine.
Worth noting is that the core engine will still need to be able to read
the current snapshot, since it is interested in the dependency graphs
contained within. The full implications of that are out of scope of this
PR.
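For illustration, a minimal sketch of the shape this takes (the method
names and the mutex-based serialization here are assumptions; the real
type may synchronize via a channel instead):
```go
import "sync"

// Snapshot is a placeholder for the engine's resource snapshot type.
type Snapshot struct{ Resources []interface{} }

// StackPersister writes a snapshot to durable storage (local file or cloud backend).
type StackPersister interface {
	Save(snap *Snapshot) error
}

// SnapshotManager owns the current snapshot, serializes all reads and writes
// to it, and persists it after every mutation.
type SnapshotManager struct {
	mu        sync.Mutex
	snapshot  *Snapshot
	persister StackPersister
}

// Mutate applies a single mutation under the lock and persists the result,
// so the engine never touches the global snapshot directly.
func (sm *SnapshotManager) Mutate(mutation func(*Snapshot)) error {
	sm.mu.Lock()
	defer sm.mu.Unlock()
	mutation(sm.snapshot)
	return sm.persister.Save(sm.snapshot)
}
```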
Remove dead code; Steps no longer need a reference to the plan iterator that created them
Fixing various issues that arise when bringing up pulumi-aws
Line length broke the build
Code review: remove dead field, fix yaml name error
Rebase against master, provide implementation of StackPersister for cloud backend
Code review feedback: comments on MutationStatus, style in snapshot.go
Code review feedback: move SnapshotManager to pkg/backend, change engine to use an interface SnapshotManager
Code review feedback: use a channel for synchronization
Add a comment and a new test
* Maintain two checkpoints, an immutable base and a mutable delta, and
periodically merge the two to produce snapshots
* Add a lot of tests - covers all of the non-error paths of BeginMutation and End
* Fix a test resource provider
* Add a few tests, fix a few issues
* Rebase against master, fixed merge
We've seen failures in CI where DNS lookups fail, which causes our
operations against the service to fail, as well as other sorts of
timeouts.
Add a set of helper methods in a new httputil package that help us
retry these operations, and then update our client library to use
them when we are doing GET requests. We also provide a way for non-GET
requests to be retried, and use this when updating a lease (since it
is safe to retry multiple requests in this case).
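For illustration only, a sketch of the kind of helper this adds (the
real httputil API and retry policy may differ):
```go
import (
	"fmt"
	"net/http"
	"time"
)

// DoWithRetry retries an idempotent request (e.g. a GET, or a lease renewal
// that is safe to repeat) when the transport fails or the server returns a
// 5xx, with a simple linear backoff.
func DoWithRetry(client *http.Client, req *http.Request, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Do(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err // DNS failure, timeout, etc.
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %v", resp.Status)
		}
		time.Sleep(time.Duration(i+1) * time.Second)
	}
	return nil, lastErr
}
```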
The RPC provider interface needs a way to convey back to the engine
that a resource being read no longer exists. To do this, we'll return
the ID property that was read back. If it is empty, it means the
resource is gone. If it is non-empty, we expect it to match the input.
This change skips unknown IDs during read operations. This can happen
when a read is performed using the output property of another resource
during planning. This is intentionally supported via the ID being an
Input<ID>; all we need to do for this to work correctly is skip the
actual provider RPC, and the runtime will propagate unknown outputs as
usual.
This change lets plugin versions float in two ways:
1) If a `pulumi plugin install` detects that a newer version is already
available, there's no need to download and install the older version.
2) If the engine attempts to load a plugin at a particular version and
a newer version is available, it will be accepted without error.
As part of this, we permit $PATH to have the final say when determining
which version to accept. That is, it can always override the choice.
Note that I highly suspect, in the limit, that we'll want to stop doing
this for major version incompatibilities. For now, since we don't
envision any such version changes imminently, this will suffice.
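In other words, the acceptance rule is roughly "installed >= requested",
even across majors for now. A simplified sketch (the real implementation
compares full semver versions):
```go
// version is a simplified stand-in for a full semantic version.
type version struct{ Major, Minor, Patch int }

// acceptPlugin reports whether an installed plugin satisfies a request: any
// version at least as new as the requested one floats in, including newer
// majors (for now).
func acceptPlugin(installed, requested version) bool {
	if installed.Major != requested.Major {
		return installed.Major > requested.Major
	}
	if installed.Minor != requested.Minor {
		return installed.Minor > requested.Minor
	}
	return installed.Patch >= requested.Patch
}
```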
This change wires up the new Read RPC method in such a manner that
Pulumi programs can invoke it. This is technically not required for
refreshing state programmatically (as in pulumi/pulumi#1081), however
it's a feature we had eons ago and have wanted since (see
pulumi/pulumi#83), and will allow us to write code like
let vm = aws.ec2.Instance.get("my-vm", "i-07043cd97bd2c9cfc");
// use any property from here on out ...
The way this works is simply by bridging the Pulumi program via its
existing RPC connection to the engine, much like Invoke and
RegisterResource RPC requests already do, and then invoking the proper
resource provider in order to read the state. Note that some resources
cannot be uniquely identified by their ID alone, and so an extra
resource state bag may be provided with just those properties required.
This came almost for free (okay, not exactly) and will come in handy as
we start gaining experience with reading live state from resources.
This commit changes two things about our resource model:
* Stop performing Pulumi Engine-side diffing of resource state.
Instead, we defer to the resource plugins themselves to determine
whether a change was made and, if so, the extent of it. This
manifests as a simple change to the Diff function; it is done in
a backwards compatible way so that we continue with legacy diffing
for existing resource provider plugins.
* Add a Read RPC method for resource providers. It simply takes a
resource's ID and URN, plus an optional bag of further qualifying
state, and it returns the current property state as read back from
the actual live environment. Note that the optional bag of state
must at least include enough additional properties for resources
wherein the ID is insufficient for the provider to perform a lookup.
It may, however, include the full bag of prior state, for instance
in the case of a refresh operation.
This is part of pulumi/pulumi#1108.
* Improve the error message arising from missing required configs for
resource providers
If the resource provider that we are speaking to is new enough, it will send
across a list of keys and their descriptions alongside an error
indicating that the provider we are configuring is missing required
config. This commit packages up the list of missing keys into an error
that can be presented nicely to the user.
* Code review feedback: renaming simplification and correcting errors in comments
* Send structured errors across RPC boundaries
This brings us closer to gRPC best practices where we send structured
errors with error codes across RPC endpoints. The new "rpcerrors"
package can wrap errors from RPC endpoints, so RPC servers can attach
some additional context as to why a request failed.
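Roughly, the idea is to ride on gRPC status codes; a hedged sketch (the
actual package's names and API differ):
```go
import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// wrap attaches an error code and server-side context to an error before it
// crosses the RPC boundary.
func wrap(code codes.Code, cause error, context string) error {
	return status.Errorf(code, "%s: %v", context, cause)
}

// unwrap recovers the structured code and message on the client side.
func unwrap(err error) (codes.Code, string) {
	if st, ok := status.FromError(err); ok {
		return st.Code(), st.Message()
	}
	return codes.Unknown, err.Error()
}
```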
* Code review feedback:
1. Rename rpcerrors -> rpcerror, better package name
2. Rename RPCError -> Error, RPCErrorCause -> ErrorCause, names
suggested by gometalinter to improve their package-qualified names
3. Fix import organization in rpcerror.go
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):
* The primary change is to change the way the engine's core update
functionality works with respect to deploy.Source. This is the
way we can plug in new sources of resource information during
planning (and, soon, diffing). The way I intend to model refresh
is by having a new kind of source, deploy.RefreshSource, which
will let us do virtually everything about an update/diff the same
way with refreshes, which avoids otherwise duplicative effort.
This includes changing the planOptions (nee deployOptions) to
take a new SourceFunc callback, which is responsible for creating
a source specific to the kind of plan being requested.
Preview, Update, and Destroy now are primarily differentiated by
the kind of deploy.Source that they return, rather than sprinkling
things like `if Destroying` throughout. This tidies up some logic
and, more importantly, gives us precisely the refresh hook we need.
* Originally, we used the deploy.NullSource for Destroy operations.
This simply returns nothing, which is how Destroy works. For some
reason, we were no longer doing this, and instead had some
`if Destroying` cases sprinkled throughout the deploy.EvalSource.
I think this is a vestige of some old way we did configuration, at
least judging by a comment, which is apparently no longer relevant.
* Move diff and diff-printing logic within the engine into its own
pkg/engine/diff.go file, to prepare for upcoming work.
* I keep noticing benign diffs anytime I regenerate protobufs. I
suspect this is because we're also on different versions. I changed
generate.sh to also dump the version into grpc_version.txt. At
least we can understand where the diffs are coming from, decide
whether to take them (i.e., a newer version), and ensure that as
a team we are monotonically increasing, and not going backwards.
* I also tidied up some tiny things I noticed while in there, like
comments, incorrect types, lint suppressions, and so on.
This change uses the prior checkpoint's deployment manifest to pre-
populate all plugins required to complete the destroy operation. This
allows for subsequent attempts to load a resource's plugin to match the
already-loaded version. This approach obviously doesn't work in a
hypothetical future world where plugins for the same resource provider
are loaded side-by-side, but we already know that.
This takes the existing `apitype.Checkpoint` type and renames it to
`apitype.CheckpointV1`, locking in the shape. In addition, we introduce
an `apitype.VersionedCheckpoint` type, which holds a version number and
a JSON document representing a checkpoint at that version. Now, when
reading a checkpoint, the CLI can determine if it's in a format it
understands, and fail gracefully if it is not.
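Schematically, reading now goes through a versioned envelope along these
lines (a sketch; field names are illustrative):
```go
import (
	"encoding/json"
	"fmt"
)

// CheckpointV1 stands in for the now-locked-in V1 checkpoint shape.
type CheckpointV1 struct{}

// VersionedCheckpoint wraps a checkpoint document with its format version.
type VersionedCheckpoint struct {
	Version    int             `json:"version"`
	Checkpoint json.RawMessage `json:"checkpoint"`
}

func readCheckpoint(data []byte) (*CheckpointV1, error) {
	var versioned VersionedCheckpoint
	if err := json.Unmarshal(data, &versioned); err != nil {
		return nil, err
	}
	if versioned.Version != 1 {
		return nil, fmt.Errorf("unsupported checkpoint version %d", versioned.Version)
	}
	var chk CheckpointV1
	if err := json.Unmarshal(versioned.Checkpoint, &chk); err != nil {
		return nil, err
	}
	return &chk, nil
}
```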
While the CLI understands the older checkpoint version, it always
writes the newest version format, meaning that if you manage a
fire-and-forget stack with this version of the CLI, it will be
unreadable by previous versions.
Stacks managed by Pulumi.com are not impacted by this change.
Fixes: #887
* Improve error messages output by the CLI
This fixes a couple known issues with the way that we present errors
from the Pulumi CLI:
1. Any errors from RPC endpoints were bubbling up as they were to
the top-level, which was unfortunate because they contained
RPC-specific noise that we don't want to present to the user. This
commit unwraps errors from resource providers.
2. The "catastrophic error" message often got printed twice
3. Fatal errors are often printed twice, because our CLI top-level
prints out the fatal error that it receives before exiting. A lot of
the time this error has already been printed.
4. Errors were prefixed by PU####.
* Feedback: Omit the 'catastrophic' error message and use a less verbose error message as the final error
* Code review feedback: interpretRPCError -> resourceStateAndError
* Code review feedback: deleting some commented-out code, error capitalization
* Cleanup after rebase
When a stack has secrets, we now take the secret values and construct
a regular expression which is just an alternation of all the secret
values. Then, before pushing any string data into an Event, we run the
regular expression and replace all matches with '[secret]'.
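For illustration, the filtering amounts to something like the following
(a sketch; the real event plumbing differs):
```go
import (
	"regexp"
	"strings"
)

// newSecretFilter builds a single alternation over all secret values; callers
// only build one when the stack actually has secrets.
func newSecretFilter(secrets []string) *regexp.Regexp {
	if len(secrets) == 0 {
		return nil
	}
	quoted := make([]string, len(secrets))
	for i, s := range secrets {
		quoted[i] = regexp.QuoteMeta(s)
	}
	return regexp.MustCompile(strings.Join(quoted, "|"))
}

// filter scrubs any secret occurrences from a message before it is pushed
// into an Event.
func filter(re *regexp.Regexp, msg string) string {
	if re == nil {
		return msg
	}
	return re.ReplaceAllString(msg, "[secret]")
}
```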
Fixes #747
As it stands, we allow plugin load requests to race. Not only does this
create a situation in which we may load and then immediately throw away
a plugin (potentially leaking its process), it also creates the
possibility of races when reading from/writing to the various plugin
caches. These changes serialize all plugin loads and cache accesses by
running all accesses for a particular host in a single goroutine.
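Schematically, the pattern is a single request-serving goroutine per
host, along these lines (types are placeholders):
```go
// plugin is a placeholder for a loaded plugin handle.
type plugin struct{ name string }

// loadRequest asks the host goroutine to load (or return a cached) plugin.
type loadRequest struct {
	name   string
	result chan *plugin
}

// servePluginLoads is the only goroutine that touches the plugin cache, so
// concurrent load requests can no longer race or leak duplicate processes.
func servePluginLoads(requests <-chan loadRequest) {
	cache := map[string]*plugin{}
	for req := range requests {
		p, ok := cache[req.name]
		if !ok {
			p = &plugin{name: req.name} // stand-in for actually launching the plugin
			cache[req.name] = p
		}
		req.result <- p
	}
}
```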
Fixes #1020.
This helper method is only really used for testing, but we should not
allow it to create a Key whose namespace has a colon (as ParseKey
would not build something like this).
This API was introduced to aid the refactoring, but it isn't something
we want to support long term. Remove it and, in a few places, push
config.Key around more instead of converting to the old type
eagerly.
When serializing config.Keys, we now write them as <package>:<name>
instead of <package>:config:<name>. We continue to support reading the
older format for compatibility with older files.
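Concretely, the round-trip amounts to something like this (a simplified
sketch; the real logic lives on config.Key's marshaling code):
```go
import (
	"fmt"
	"strings"
)

// Key models a config key as a namespace/name pair.
type Key struct{ Namespace, Name string }

// String writes the new, shorter form.
func (k Key) String() string {
	return k.Namespace + ":" + k.Name
}

// parseKey accepts both the new <package>:<name> form and the legacy
// <package>:config:<name> form.
func parseKey(s string) (Key, error) {
	switch parts := strings.Split(s, ":"); len(parts) {
	case 2:
		return Key{Namespace: parts[0], Name: parts[1]}, nil
	case 3:
		if parts[1] == "config" {
			return Key{Namespace: parts[0], Name: parts[2]}, nil
		}
	}
	return Key{}, fmt.Errorf("could not parse config key %q", s)
}
```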
config.Key has become a pair of namespace and name. Because the whole
world has not changed yet, there continues to be a way to convert
between a tokens.ModuleMember and a config.Key; however, the conversion
from tokens.ModuleMember can now sometimes fail (when the module member
is not of the form `<package>:config:<name>`).
I'll be changing the structure of the representation of config.Key, so
let's write some tests first to ensure we can continue to treat
everything as JSON and YAML.
Right now, config.Key is a type alias for tokens.ModuleMember. I did a
pass over the codebase such that we use config.Key everywhere it
looked like the value did not leak to some external process (e.g., a
resource provider or a langhost).
Doing this makes it a little clearer (hopefully) where code is
depending on a module member structure (e.g. <package>:config:<value>)
instead of just an opaque type.
As it stands, we only configure those providers for which configuration
is present. This can lead to surprising failure modes if those providers
are then used to create resources. These changes ensure that all
resource providers that are not configured during plan initialization
are configured upon first load.
Fixes #758.
A hold-over from a previous experiment (LumiIDL) which we don't use
anymore. If we decide to bring that back, we can easily restore these
types, but for now, let's just remove this dead code.
Most of the errors in this package are holdovers from our previous
system, where we had our own custom compiler and evaluator, and are no
longer needed. The few we still use during plan application (via the
diagnostics system, which is another component from the old system
that we still use) have been promoted into the diag package. Doing so
allows us to not have to import "github.com/pkg/errors" as "goerr" in
some parts of the engine, a nice cleanup.
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc. These too had semi-random omissions.
This change merges all of these types into the apitype package. This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.
The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.
Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also. This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
Previously, we would prefer a plugin on the $PATH, which is more or
less always the case for people hacking on `pulumi`. Later, when we
went to check that the loaded plugin version matched the one we
requested, we would fail.
Now, if we have a version, we'll first consult the local plugin
cache. If that fails, we'll fall back to the $PATH as we used to.
When we are loading a plugin without a version, we continue to use the
one on the $PATH (without testing the cache) on the assumption it is
newer.
In addition, we've turned the "plugin versions are mismatched" case
from an error into a warning. We expect that we'll only ever see this
warning when something strange is going on (since in the normal case,
we'll have found the exact version in the cache), but having it not
hard fail does help in development cases.
Fixes #977
This change adds a basic Python langhost RPC server. It's fairly
barebones and merely acts as a jumping off point for the Pulumi engine
to spawn a Python program. The host is written in Go, in contrast to
implementing the host in Python, and more closely resembles how I
expect the Node.js language host to work once pulumi/pulumi#331 is done.
1. Various idiomatic Go and TypeScript fixes
2. Add an integration test that end-to-end roundtrips dependency
information for a simple Pulumi program
3. Add an additional assertion that tests that dependency information
comes from the language host as expected
This change makes the engine backwards compatible with older
language host binaries, by simply ignoring GetRequiredPlugins
calls when the RPC server has not yet implemented it. This
is benign, since we will eventually fault plugins in on demand,
although it does mean that commands like `pulumi plugin install`
will become no-ops (which, thankfully, is what we want).
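The tolerance amounts to roughly the following sketch (the client
interface here is a stand-in for the generated langhost client):
```go
import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// languageHost stands in for the generated language host RPC client.
type languageHost interface {
	GetRequiredPlugins(ctx context.Context) ([]string, error)
}

// requiredPlugins tolerates older hosts that predate the RPC by treating an
// Unimplemented response as "no plugins declared".
func requiredPlugins(ctx context.Context, host languageHost) ([]string, error) {
	plugins, err := host.GetRequiredPlugins(ctx)
	if err != nil && status.Code(err) == codes.Unimplemented {
		return nil, nil
	}
	return plugins, err
}
```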
This commit does two things:
1. All dependencies of a resource, both implicit and explicit, are
communicated directly to the engine when registering a resource. The
engine keeps track of these dependencies and ultimately serializes
them out to the checkpoint file upon successful deployment.
2. Once a successful deployment is done, the new `pulumi stack
graph` command reads the checkpoint file and outputs the dependency
information within it in DOT format.
Keeping track of dependency information within the checkpoint file is
desirable for a number of reasons, most notably delete-before-create,
where we want to delete resources before we have created their
replacement when performing an update.
Previously, the checkpoint manifest contained the full path to a plugin
binary, in place of its friendly name. Now that we must move to a model
where we install plugins in the PPC based on the manifest contents, we
actually need to store the name, in addition to the version (which is
already there). We still also capture the path for debugging purposes.
We have had a long-standing bug in here where we waited on a
stdout channel that never got populated when the language plugin
failed to load entirely. This would lead to hung processes. The
fix is simple: only wait for those stdout/stderr channels to drain
that have actually been wired up to enjoy the requisite signaling.
This adds support for two things:
* Installing all plugins that a project requires with a single command:
$ pulumi plugin install
* Listing the plugins that this project requires:
$ pulumi plugin ls --project
$ pulumi plugin ls -p
This brings back the Node.js language plugin's GetRequiredPlugins
function, reimplemented in Go now that the language host has been
rewritten from JavaScript. Fairly rote translation, along with
some random fixes required to get tests passing again.
This change adds a GetRequiredPlugins RPC method to the language
host, enabling us to query it for its list of plugin requirements.
This is language-specific because it requires looking at the set
of dependencies (e.g., package.json files).
It also adds a call up front during any update/preview operation
to compute the set of plugins and require that they are present.
These plugins are populated in the cache and will be used for all
subsequent plugin-related operations during the engine's activity.
We now cache the language plugins, so that we may load them
eagerly too, which we never did previously due to the fact that
we needed to pass the monitor address at load time. This was a
bit bizarre anyhow, since it's really the Run RPC function that
needs this information. So, to enable caching and eager loading
-- which we need in order to invoke GetRequiredPlugins -- the
"phone home" monitor RPC address is passed at Run time.
In a subsequent change, we will switch to faulting in the plugins
that are missing -- rather than erroring -- in addition to
supporting the `pulumi plugin install` CLI command.
This change introduces a workspace.GetPluginPath function that probes
the central workspace cache of plugins for a plugin binary that
matches the desired kind, name, and, optionally, version. It also permits
overriding this with $PATH for developer scenarios.
The analyzer, language, and resource plugin logic now uses this function
for deciding which binary path to load at runtime.
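A sketch of the lookup, with an assumed cache layout (the actual
directory structure and precedence rules are defined in the workspace
package):
```go
import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// getPluginPath prefers a matching binary on $PATH (handy for development)
// and otherwise probes the central workspace cache of installed plugins.
func getPluginPath(kind, name, version string) (string, error) {
	binary := fmt.Sprintf("pulumi-%s-%s", kind, name)
	if path, err := exec.LookPath(binary); err == nil {
		return path, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	cached := filepath.Join(home, ".pulumi", "plugins",
		fmt.Sprintf("%s-%s-%s", kind, name, version), binary)
	if _, err := os.Stat(cached); err != nil {
		return "", fmt.Errorf("no %s plugin %q at version %s found", kind, name, version)
	}
	return cached, nil
}
```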
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project". This has gotten more confusing over time, now
that we're doing real package management.
Also fixes pulumi/pulumi#426, while in here.
My previous change to stop supplying unknown properties to providers
broke `pulumi preview` in the case of unknown inputs. This change
restores the previous behavior for previews only; the new unknown-free
behavior remains for applies.
Fixes #790.
Before these changes, we were inconsistent in our treatment of unknown
property values across the resource provider RPC interface. `Check` and
`Diff` were retaining unknown properties in inputs and outputs;
`Create`, `Update`, and `Delete` were not. This interacted badly with
recent changes to `Check` to return all provider inputs--i.e. not just
defaults--from that method: if an unknown input was provided, it would
be present in the returned inputs, which would eventually confuse the
differ by giving the appearance of changes where none were present.
These changes remove unknowns from the provider interface entirely:
unknown property values are never passed to a provider, and a provider
must never return an unknown property value.
This is the primary piece of the fix for pulumi/pulumi-terraform#93.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
This merging causes similar issues to those it did in `Check`, and
differs from the approach we take to `Diff`. This can cause problems
such as an inability to remove properties.
Use the new {en,de}crypt endpoints in the Pulumi.com API to secure
secret config values. The ciphertext for a secret config value is bound
to the stack to which it applies and cannot be shared with other stacks
(e.g. by copy/pasting it around in Pulumi.yaml). All secrets will need
to be encrypted once per target stack.
This change implements resource protection, as per pulumi/pulumi#689.
The overall idea is that a resource can be marked as "protect: true",
which will prevent deletion of that resource for any reason whatsoever
(straight deletion, replacement, etc). This is expressed in the
program. To "unprotect" a resource, one must perform an update setting
"protect: false", and then afterwards, they can delete the resource.
For example:
let res = new MyResource("precious", { .. }, { protect: true });
Afterwards, the resource will display in the CLI with a lock icon, and
any attempts to remove it will fail in the usual ways (in planning or,
worst case, during an actual update).
This was done by adding a new ResourceOptions bag parameter to the
base Resource types. This is unfortunately a breaking change, but now
is the right time to take this one. We had been adding new settings
one by one -- like parent and dependsOn -- and this new approach will
set us up to add any number of additional settings down the road,
without needing to worry about breaking anything ever again.
This is related to protected stacks, as described in
pulumi/pulumi-service#399. Most likely this will serve as a foundational
building block that enables the coarser grained policy management.
We do not need all of the information in the old state for this call, as
outputs will not be read by the provider during validation or defaults
computation.
This change adds a bit more tracing context to RPC marshaling
logging so that it's easier to attribute certain marshaling calls.
Prior to this, we'd just have a flat list of "marshaled property X"
without any information about what the marshaling pertained to.
This change passes a resource's old output state, so that it contains
everything -- defaults included -- for purposes of the provider's diffing.
Not doing so can lead the provider into thinking some of the requisite
state is missing.
This will allow us to remove a lot of current boilerplate in individual tests, and move it into the test harness.
Note that this will require updating users of the integration test framework. By moving to a property bag of inputs, we should avoid needing future breaking changes to this API though.
This change adds rudimentary delete-before-create support (see
pulumi/pulumi#450). This cannot possibly be complete until we also
implement pulumi/pulumi#624, because we may try to delete a resource
while it still has dependent resources (which almost certainly will
fail). But until then, we can use this to manually unwedge ourselves
for leaf-node resources that do not support old and new resources
living side-by-side.
This change just flows the project's "main" directory all the way
through to the plugins, fixing #667. In that work item, we discussed
alternative approaches, such as rewriting the asset paths, but this
is tricky because it's very tough to do without those absolute paths
somehow ending up in the checkpoint files. Just launching the
processes with the right pwd is far easier and safer, and it turns
out that, conveniently, we set up the plugin context in exactly the
same place that we read the project information.
This change adds a `pulumi stack output` command. When passed no
arguments, it prints all stack output properties, in exactly the
same format as `pulumi stack` does (just without all the other stuff).
More importantly, if you pass a specific output property, a la
`pulumi stack output clusterARN`, just that property will be printed,
in a script-friendly manner. This will help us automate wiring
multiple layers of stacks together during deployments.
This fixes pulumi/pulumi#659.
The two-phase output properties change broke the ability to recover
from a failed replacement that yields pending deletes in the checkpoint.
The issue here is simply that we should remember pending registrations
only for logical operations that *also* have a "new" state (create or
update). This commit fixes this, and also adds a new step test with
fault injection to probe many interesting combinations of steps.
Every single resource has a type prefix of
pulumi:pulumi:Stack$
which makes URNs quite lengthy without adding any value. Since
they all have this prefix, adding it doesn't help to disambiguate.
This change skips adding the parent URN part when it is the built-in
automatic stack type name.
At some point, we fixed a bug in the way state is managed for "same"
steps, which meant that we wouldn't see newly added output properties.
This had the effect that, if you had a stack already stood up, and
updated it to have output properties, we would miss them. (Stacks
stood up from scratch would still have them.) This fixes that problem,
in addition to two other things: 1) we need to sort output property
names to ensure a deterministic ordering, and 2) we need to also
unconditionally apply the outputs RPC coming in, to ensure that the
resulting resource always has the correct outputs (so that for example
deleting prior output properties actually deletes them).
Also add some testing for this area to make sure we don't break again.
Fixes pulumi/pulumi#631.
These changes push the `config.{Map,Value}` interfaces further down into
the deployment engine so that configuration can be decrypted nearer to
its use.
This is the first part of the fix for pulumi/pulumi-ppc#112.
As documented in issue #616, the inputs/defaults/outputs model we have
today has fundamental problems. The crux of the issue is that our
current design requires that defaults present in the old state of a
resource are applied to the new inputs for that resource.
Unfortunately, it is not possible for the engine to decide which
defaults remain applicable and which do not; only the provider has that
knowledge.
These changes take a more tactical approach to resolving this issue than
that originally proposed in #616 that avoids breaking compatibility with
existing checkpoints. Rather than treating the Pulumi inputs as the
provider input properties for a resource, these inputs are first
translated by `Check`. In order to accommodate provider defaults that
were chosen for the old resource but should not change for the new,
`Check` now takes the old provider inputs as well as the new Pulumi
inputs. Rather than the Pulumi inputs and provider defaults, the
provider inputs returned by `Check` are recorded in the checkpoint file.
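A hedged sketch of the adjusted provider interface implied above: `Check`
now receives the old provider inputs alongside the new Pulumi inputs; the
Go types below are simplified placeholders.
```go
// PropertyMap is a stand-in for the engine's resource property map type.
type PropertyMap map[string]interface{}

// CheckFailure describes a single property validation failure.
type CheckFailure struct {
	Property string
	Reason   string
}

// Provider sketches the relevant portion of the resource provider interface.
// Check receives the old provider inputs (olds) so it can carry forward any
// defaults it previously chose, plus the new Pulumi inputs (news); it returns
// the full set of provider inputs to record in the checkpoint.
type Provider interface {
	Check(urn string, olds, news PropertyMap) (PropertyMap, []CheckFailure, error)
}
```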
Put simply, these changes remove defaults as a first-class concept
(except inasmuch as is required to retain the ability to read old
checkpoint files) and move the responsibility for managing and
merging defaults into the provider that supplies them.
Fixes #616.
This improves the overall cloud CLI experience workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT CLOUD URL
my-cloud-stack 2017-12-01 ... 3 https://pulumi.com/
my-faf-stack n/a 0 n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface can be implemented to substitute
a new backend. This has operations to get and list stacks,
perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
This change adds a new manifest section to the checkpoint files.
The existing time moves into it, and we add to it the version of
the Pulumi CLI that created it, along with the names, types, and
versions of all plugins used to generate the file. There is a
magic cookie that we also use during verification.
This is to help keep us sane when debugging problems "in the wild,"
and I'm sure we will add more to it over time (checksum, etc).
For example, after an up, you can now see this in `pulumi stack`:
```
Current stack is demo:
Last updated at 2017-12-01 13:48:49.815740523 -0800 PST
Pulumi version v0.8.3-79-g1ab99ad
Plugin pulumi-provider-aws [resource] version v0.8.3-22-g4363e77
Plugin pulumi-langhost-nodejs [language] version v0.8.3-79-g77bb6b6
Checkpoint file is /Users/joeduffy/dev/code/src/github.com/pulumi/pulumi-aws/.pulumi/stacks/webserver/demo.json
```
This addresses pulumi/pulumi#628.
This change introduces automatic integrity checking for snapshots.
Hopefully this will help us track down what's going on in
pulumi/pulumi#613. Eventually we probably want to make this opt-in,
or disable it entirely other than for internal Pulumi debugging, but
until we add more complete DAG verification, it's relatively cheap
and is worthwhile to leave on for now.
The prior change was incorrectly handling snapshotting of replacement
operations. Further, in hindsight, the older model of having steps
manage their interaction with the snapshot marking was clearer, so
I've essentially brought that back, merging it with the other changes.
This change simplifies the necessary RPC changes for components.
Instead of a Begin/End pair, which complicates the whole system
because now we have the opportunity of a missing End call, we will
simply let RPCs come in that append outputs to existing states.
We need to invoke the post-step event hook *after* updating the
state snapshots, so that it will write out the updated state.
We also need to re-serialize the snapshot again after we receive
updated output properties, otherwise they could be missing if this
happens to be the last resource (e.g., as in Stacks).
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
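A hedged sketch of the observer-style hooks described above; the event
names follow this description, while the Step and resource-state types
are placeholders:
```go
// Step and State are placeholders for the engine's step and resource state types.
type Step struct{ Op, URN string }

type State struct {
	URN     string
	Outputs map[string]interface{}
}

// Events is the interface the CLI implements so it can observe, rather than
// drive, the planning and deployment process.
type Events interface {
	// OnResourceStepPre fires immediately before a step operation executes.
	OnResourceStepPre(step Step)
	// OnResourceStepPost fires immediately after a step operation executes.
	OnResourceStepPost(step Step, err error)
	// OnResourceComplete fires whenever an EndRegisterResource arrives,
	// carrying any output properties registered by the component.
	OnResourceComplete(state *State)
}
```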
When merging inputs and defaults in order to construct the set of inputs
for a call to `Create`, we must recursively merge each property value:
the provided defaults may contain nested values that must be present in
the merged result.
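A sketch of the recursive merge (property values here are plain maps for
illustration; the engine uses its own property value types):
```go
// mergeDefaults overlays new inputs on top of provider defaults, recursing
// into nested objects so that defaults buried inside them are preserved.
func mergeDefaults(defaults, inputs map[string]interface{}) map[string]interface{} {
	merged := make(map[string]interface{}, len(defaults)+len(inputs))
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range inputs {
		if dv, ok := merged[k].(map[string]interface{}); ok {
			if iv, ok := v.(map[string]interface{}); ok {
				merged[k] = mergeDefaults(dv, iv)
				continue
			}
		}
		merged[k] = v
	}
	return merged
}
```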
* Don't show +s, -s, and ~s deeply. The intended format here looks
more like
+ aws:iam/instanceProfile:InstanceProfile (create)
[urn=urn:pulumi:test::aws/minimal::aws/iam/instanceProfile:InstanceProfile::ip2]
name: "ip2-079a29f428dc9987"
path: "/"
role: "ir-d0a632e3084a0252"
versus
+ aws:iam/instanceProfile:InstanceProfile (create)
+ [urn=urn:pulumi:test::aws/minimal::aws/iam/instanceProfile:InstanceProfile::ip2]
+ name: "ip2-079a29f428dc9987"
+ path: "/"
+ role: "ir-d0a632e3084a0252"
This makes it easier to see the resources modified in the output.
* Print adds/deletes during updates as
- property: "x"
+ property: "y"
rather than
~ property: "x"
~ property: "y"
the latter of which doesn't really tell you what's new/old.
* Show parent indentation on output properties, so they line up correctly.
* Only print stack outputs if not undefined.
This change switches from child lists to parent pointers, in the
way resource ancestries are represented. This cleans up a fair bit
of the old parenting logic, including all notion of ambient parent
scopes (and will notably address pulumi/pulumi#435).
This lets us show a more parent/child display in the output when
doing planning and updating. For instance, here is an update of
a lambda's text, which is logically part of a cloud timer:
* cloud:timer:Timer: (same)
[urn=urn:pulumi:malta::lm-cloud:☁️timer:Timer::lm-cts-malta-job-CleanSnapshots]
* cloud:function:Function: (same)
[urn=urn:pulumi:malta::lm-cloud:☁️function:Function::lm-cts-malta-job-CleanSnapshots]
* aws:serverless:Function: (same)
[urn=urn:pulumi:malta::lm-cloud::aws:serverless:Function::lm-cts-malta-job-CleanSnapshots]
~ aws:lambda/function:Function: (modify)
[id=lm-cts-malta-job-CleanSnapshots-fee4f3bf41280741]
[urn=urn:pulumi:malta::lm-cloud::aws:lambda/function:Function::lm-cts-malta-job-CleanSnapshots]
- code : archive(assets:2092f44) {
// etc etc etc
Note that we still get walls of text, but this will be actually
quite nice when combined with pulumi/pulumi#454.
I've also suppressed printing properties that didn't change during
updates when --detailed was not passed, and also suppressed empty
strings and zero-length arrays (since TF uses these as defaults in
many places and it just makes creation and deletion quite verbose).
Note that this is a far cry from everything we can possibly do
here as part of pulumi/pulumi#340 (and even pulumi/pulumi#417).
But it's a good start towards taming some of our output spew.
The first exception relates to how we launch plugins. Plugin paths are
calculated using a well-known set of rules; this makes `gas` suspicious
due to the need to use a variable to store the path of the plugin.
The second and third are in test code and aren't terribly concerning.
The latter exception asks `gas` to ignore the access key we hard-code
into the integration tests for our Pulumi test account.
The fourth exception allows us to use more permissive permissions for
the `.pulumi` directory than `gas` would prefer. We use `755`; `gas`
wants `700` or stricter. `755` is the default for `mkdir` and `.git` and
so seems like a reasonable choice for us.
This change adds back component output properties. Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
Adds support for top-level exports in the main script of a Pulumi program to be captured as stack-level output properties.
This creates a new `pulumi:pulumi:Stack` component as the root of the resource tree in all Pulumi programs. That resource has properties for each top-level export in the Node.js script.
Running `pulumi stack` will display the current value of these outputs.
This change fixes getProject to return the project name, as
originally intended. (One line was missing.)
It also adds an integration test for this.
Fixes pulumi/pulumi#580.
Because the Pulumi.yaml file demarcates the boundary used when
uploading a program to the Pulumi.com service at the moment, we
have trouble when a Pulumi program uses "up and over" references.
For instance, our customer wants to build a Dockerfile located
in some relative path, such as `../../elsewhere/`.
To support this, we will allow the Pulumi.yaml file to live
somewhere other than the main Pulumi entrypoint. For example,
it can live at the root of the repo, while the Pulumi program
lives in, say, `infra/`:
Pulumi.yaml:
name: as-before
main: infra/
This fixes pulumi/pulumi#575. Further work can be done here to
provide even more flexibility; see pulumi/pulumi#574.
When producing a snapshot for a plan, we have two resource DAGs. One of
these is the base DAG for the plan; the other is the current DAG for the
plan. Any resource r may be present in both DAGs. In order to produce a
snapshot, we need to merge these DAGs such that all resource
dependencies are correctly preserved. Conceptually, the merge proceeds
as follows:
- Begin with an empty merged DAG.
- For each resource r in the current DAG, insert r and its outgoing
edges into the merged DAG.
- For each resource r in the base DAG:
- If r is in the merged DAG, we are done: if the resource is in the
merged DAG, it must have been in the current DAG, which accurately
captures its current dependencies.
- If r is not in the merged DAG, insert it and its outgoing edges
into the merged DAG.
Physically, however, each DAG is represented as a list of resources
without explicit dependency edges. In place of edges, it is assumed that
the list represents a valid topological sort of its source DAG. Thus,
any resource r at index i in a list L must be assumed to be dependent on
all resources in L with index j s.t. j < i. Due to this representation,
we implement the algorithm above as follows to produce a merged list
that represents a valid topological sort of the merged DAG:
- Begin with an empty merged list.
- For each resource r in the current list, append r to the merged list.
r must be in a correct location in the merged list, as its position
relative to its assumed dependencies has not changed.
- For each resource r in the base list:
- If r is in the merged list, we are done by the logic given in the
original algorithm.
- If r is not in the merged list, append r to the merged list. r
must be in a correct location in the merged list:
- If any of r's dependencies were in the current list, they must
already be in the merged list and their relative order w.r.t.
r has not changed.
- If any of r's dependencies were not in the current list, they
must already be in the merged list, as they would have been
appended to the list before r.
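In code, the list merge is roughly the following sketch (types are
simplified, and "is in the merged list" is approximated here by URN
membership):
```go
// URN and Resource are simplified stand-ins for the engine's types.
type URN string
type Resource struct{ URN URN }

// mergeSnapshots appends the current list first (its relative order is
// already correct), then appends any base resources not already present.
func mergeSnapshots(base, current []*Resource) []*Resource {
	merged := make([]*Resource, 0, len(base)+len(current))
	seen := make(map[URN]bool)
	for _, r := range current {
		merged = append(merged, r)
		seen[r.URN] = true
	}
	for _, r := range base {
		if !seen[r.URN] {
			merged = append(merged, r)
		}
	}
	return merged
}
```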
Prior to these changes, we had been performing these operations in
reverse order: we would start by appending any resources in the old list
that were not in the new list, then append the whole of the new list.
This caused out-of-order resources when a program that produced pending
deletions failed to run to completion.
Fixes #572.
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.
The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.
Fixes #551.
Note: for the purposes of this discussion, archives will be treated as
assets, as their differences are not particularly meaningful.
Currently, the identity of an asset is derived from the hash and the
location of its contents (i.e. two assets are equal iff their contents
have the same hash and the same path/URI/inline value). This means that
changing the source of an asset will cause the engine to detect a
difference in the asset even if the source's contents are identical. At
best, this leads to inefficiencies such as unnecessary updates. This
commit changes asset identity so that it is derived solely from an
asset's hash. The source of an asset's contents is no longer part of
the asset's identity, and need only be provided if the contents
themselves may need to be available (e.g. if a hash does not yet exist
for the asset or if the asset's contents might be needed for an update).
This commit also changes the way old assets are exposed to providers.
Currently, an old asset is exposed as both its hash and its contents.
This allows providers to take a dependency on the contents of an old
asset being available, even though this is not an invariant of the
system. These changes remove the contents of old assets from their
serialized form when they are passed to providers, eliminating the
ability of a provider to take such a dependency. In combination with the
changes to asset identity, this allows a provider to detect changes to
an asset simply by comparing its old and new hashes.
This is half of the fix for [pulumi/pulumi-cloud#158]. The other half
involves changes in [pulumi/pulumi-terraform].
If a blob's reported size is incorrect, `archiveTar` may attempt to
write more bytes to an entry than it reported in that entry's header.
These changes provide a bit more context with the resulting error as
well as removing an unnecessary `LimitReader`.
Adds OpenTracing to the Pulumi engine and plugin + langhost subprocesses.
We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.
The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.
We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.
In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.
We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.
We currently support sending tracing data to a Zipkin-compatible endpoint. This has been validated with both Zipkin and Jaeger UIs.
We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface. There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
We currently have a nasty issue with archive assets wherein they read
their entire contents into memory each time they are accessed (e.g. for
hashing or translation). This interacts badly with scenarios that
place large amounts of data in an archive: aside from limiting the size
of an archive the engine can handle, it also bloats the engine's memory
requirements. This appears to have caused issues when running the PPC in
AWS: evidence suggests that the very high peak memory requirements this
approach implies caused high swap traffic that impacted the service's
availability.
In order to fix this issue, these changes move archives onto a
streaming read model. In order to read an archive, a user:
- Opens the archive with `Archive.Open`. This returns an ArchiveReader.
- Iterates over its contents using `ArchiveReader.Next`. Each returned
blob must be read in full between successive calls to
`ArchiveReader.Next`. This requirement is essentially forced upon us
by the streaming nature of TAR archives.
- Closes the ArchiveReader with `ArchiveReader.Close`.
This model does not require that the complete contents of the archive or
any of its constituent files are in memory at any given time.
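A hedged usage sketch of the streaming model described above; the
Open/Next/Close method set follows this description, while the exact
signatures are assumptions for illustration:
```go
import (
	"fmt"
	"io"
)

// ArchiveReader is an assumed interface for the streaming reader described above.
type ArchiveReader interface {
	// Next returns the next member; it returns io.EOF when the archive is exhausted.
	Next() (name string, contents io.Reader, err error)
	Close() error
}

func consumeArchive(r ArchiveReader) error {
	defer r.Close()
	for {
		name, contents, err := r.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		// Each blob must be read in full before the next call to Next,
		// because the underlying TAR stream is sequential.
		if _, err := io.Copy(io.Discard, contents); err != nil {
			return fmt.Errorf("reading %s: %w", name, err)
		}
	}
}
```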
Fixes #325.
In our existing code, we only use the input state for old and new
properties. This is incorrect and I'm astonished we've been flying
blind for so long here. Some resources require the output properties
from the prior operation in order to perform updates. Interestingly,
we did correctly use the full synthesized state during deletes.
I ran into this with the AWS Cloudfront Distribution resource,
which requires the etag from the prior operation in order to
successfully apply any subsequent operations.
We were previously calling configure on each package once for each time it was mentioned in the config. We only need to call it once, ever, as we pass the full bag of relevant config through on that one call.
It's legal and possible for undefined properties to show up in
objects, since that's an idiomatic JavaScript way of initializing
missing properties. Instead of failing for these during deployment,
we should simply skip marshaling them to Terraform and let it do
its thing as usual. This came up during our customer workload.
On Windows, we have to indirect through a batch file to launch plugins,
which means when we go to close a plugin, we only kill the cmd.exe that
is running the batch file and not the underlying node process. This
prevents `pulumi` from exiting cleanly. So on Windows, we also kill any
direct children of the plugin process.
Fixes #504
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.
When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.
We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track are "owner" and "name" which map to information
we use on pulumi.com.
When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.
The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).
Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).
In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration associated
with it.
Secure values show up as unencrypted config values inside the language
hosts and providers.
This change fixes a couple bugs with assets:
* We weren't recursing into subdirectories in the new "path as
archive" feature, which meant we missed most of the files.
* We need to make paths relative to the root of the archive
directory itself, otherwise paths end up redundantly including
the asset's root folder path.
* We need to clean the file paths before adding them to the
archive asset map, otherwise they are inconsistent between the
path, tar, tgz, and zip cases.
* Ignore directories when traversing zips, since they aren't
included in the other formats.
* Tolerate io.EOF errors when reading the ZIP contents into blobs.
* Add test cases for the four different archive kinds.
This fixes pulumi/pulumi-aws#50.
The prior code was a little too aggressive in rejecting undefined
properties, because it assumed any occurrence indicated a resource
that was unavailable due to planning. This is a by-product of our
relatively recent decision to flow undefineds freely during planning.
The problem is, it's entirely legitimate to have undefined values
deep down in JavaScript structures, entirely unrelated to resources
whose property values are unknown due to planning.
This change flows undefined more freely. There really are no
negative consequences of doing so, and avoids hitting some overly
aggressive assertion failures in some important scenarios. Ideally
we would have a way to know statically whether something is a resource
property, and tighten up the assertions just to catch possible bugs
in the system, but because this is JavaScript, and all the assertions
are happening at runtime, we simply lack the necessary metadata to do so.
This improves a few things about assets:
* Compute and store hashes as input properties, so that changes on
disk are recognized and trigger updates (pulumi/pulumi#153).
* Issue explicit and prompt diagnostics when an asset is missing or
of an unexpected kind, rather than failing late (pulumi/pulumi#156).
* Permit raw directories to be passed as archives, in addition to
archive formats like tar, zip, etc. (pulumi/pulumi#240).
* Permit not only assets as elements of an archive's member list, but
also other archives themselves (pulumi/pulumi#280).
Instead of doing the logic to see if a type has YAML tags and then
dispatching based on that to use either the direct go-yaml marshaller
or the one that works in terms of JSON tags, let's just say that we
always add YAML tags as well, and use go-yaml directly.
This change adds functions, `pulumi.getProject()` and `pulumi.getStack()`,
to fetch the names of the project and stack, respectively. These can be
handy in generating names, specializing areas of the code, etc.
This fixes pulumi/pulumi#429.
During the course of a `pulumi update`, it is possible for a resource to
become slated for deletion. In the case that this deletion is part of a
replacement, another resource with the same URN as the to-be-deleted
resource will have been created earlier. If the `update` fails after the
replacement resource is created but before the original resource has been
deleted, the snapshot must capture that the original resource still exists
and should be deleted in a future update without losing track of the order
in which the deletion must occur relative to other deletes. Currently, we
are unable to track this information because our checkpoints require
that no two resources have the same URN.
To fix this, these changes introduce to the update engine the notion of a
resource that is pending deletion and change checkpoint serialization to
use an array of resources rather than a map. The meaning of the former is
straightforward: a resource that is pending deletion should be deleted
during the next update.
This is a fairly major breaking change to our checkpoint files, as the
map of resources is no more. Happily, though, it makes our checkpoint
files a bit more "obvious" to any tooling that might want to grovel
or rewrite them.
Fixes #432, #387.
A dynamic resource is a resource whose provider is implemented alongside
the resource itself. This provider may close over and use other
resources in the implementation of its CRUD operations. The provider
itself must be stateless, as each CRUD operation for a particular
dynamic resource type may use an independent instance of the provider.
Changes to the definition of a resource's provider result in replacement
of the resource itself (rather than a simple update), as this allows the
old provider definition to delete the old resource and the new provider
definition to create an appropriate replacement.
If a plugin fails to load after we've set up the goroutines that copy
from its std{out,err} streams, then those goroutines can end up writing
to a closed event channel. This change ensures that we properly drain
those streams in this case.
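A minimal sketch of the draining idea (not the actual plugin host code): once the
load has failed, the copying goroutine stops forwarding lines and instead consumes
the remainder of the stream, so it neither writes to a closed channel nor leaves
the child process blocked on a full pipe.

    import (
        "bufio"
        "io"
    )

    func forwardOrDrain(out io.Reader, events chan<- string, failed <-chan struct{}) {
        scanner := bufio.NewScanner(out)
        for scanner.Scan() {
            select {
            case <-failed:
                // The plugin failed to load; discard whatever remains.
                _, _ = io.Copy(io.Discard, out)
                return
            case events <- scanner.Text():
            }
        }
    }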
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.
From a user's point of view, there are a few changes:
1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).
On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder).
This changes a few things about "components":
* Rename what was previously ExternalResource to CustomResource,
and all of the related fields and parameters that this implies.
This just seems like a much nicer and expected name for what
these represent. I realize I am stealing a name we had thought
about using elsewhere, but this seems like an appropriate use.
* Introduce ComponentResource, to make initializing resources
that merely aggregate other resources easier to do correctly.
* Add a withParent and parentScope concept to Resource, to make
allocating children less error-prone. Now there's no need to
explicitly adopt children as they are allocated; instead, any
children allocated as part of the withParent callback will
auto-parent to the resource provided. This is used by
ComponentResource's initialization function to make initialization
easier, including the distinction between inputs and outputs.
This change implements core support for "components" in the Pulumi
Fabric. This work is described further in pulumi/pulumi#340, where
we are still discussing some of the finer points.
In a nutshell, resources no longer imply external providers. It's
entirely possible to have a resource that logically represents
something but without having a physical manifestation that needs to
be tracked and managed by our typical CRUD operations.
For example, the aws/serverless/Function helper is one such type.
It aggregates Lambda-related resources and exposes a nice interface.
All of the Pulumi Cloud Framework resources are also examples.
To indicate that a resource does participate in the usual CRUD resource
provider lifecycle, it simply derives from ExternalResource instead of Resource.
All resources now have the ability to adopt children. This is purely
a metadata/tagging thing, and will help us roll up displays, provide
attribution to the developer, and even hide aspects of the resource
graph as appropriate (e.g., when they are implementation details).
Our use of this capability is ultra limited right now; in fact, the
only place we display children is in the CLI output. For instance:
+ aws:serverless:Function: (create)
[urn=urn:pulumi:demo::serverless::aws:serverless:Function::mylambda]
=> urn:pulumi:demo::serverless::aws:iam/role:Role::mylambda-iamrole
=> urn:pulumi:demo::serverless::aws:iam/rolePolicyAttachment:RolePolicyAttachment::mylambda-iampolicy-0
=> urn:pulumi:demo::serverless::aws:lambda/function:Function::mylambda
The bit indicating whether a resource is external or not is tracked
in the resulting checkpoint file, along with any of its children.
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
This change adds the capability for a resource provider to indicate
that, were an action carried out in response to a diff, a certain set
of properties would be "stable"; that is to say, they are guaranteed
not to change. As a result, properties may be resolved to their final
values during previewing, avoiding erroneous cascading impacts.
This avoids the ever-annoying situation I keep running into when demoing:
when adding or removing an ingress rule to a security group, we ripple
the impact through the instance, and claim it must be replaced, because
that instance depends on the security group via its name. Well, the name
is a great example of a stable property, in that it will never change, and
so this is truly unfortunate and always adds uncertainty into the demos.
Particularly since the actual update doesn't need to perform replacements.
This resolves pulumi/pulumi#330.
`deploy.Plan.Apply` was only consumed by the engine, and seemed to be in
the wrong place given the API exported by the rest of `Plan` (i.e.
`Plan.Start` + `PlanIterator`). Furthermore, we were missing a reasonable
opportunity to share code between `update` and `preview`, both of which
need to walk the plan. These changes move the plan walk into `package engine`
as `planResult.Walk` and replace the `Progress` interface with a new interface,
`StepActions`, which subsumes the functionality of the former and adds support
for implementation-specific step execution. `planResult.Walk` is then
consumed by both `Engine.Deploy` and `Engine.PrintPlan`.
This change enables us to make progress on exposing data sources
(see pulumi/pulumi-terraform#29). The idea is to have an Invoke
function that simply takes a function token and arguments, performs
the function lookup and invocation, and then returns a return value.
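As a sketch of the shape we have in mind (illustrative only, not the final RPC
signature), the whole surface area is a single call:

    // Invoke looks up the function identified by tok, calls it with args, and
    // returns its results. Both args and the return value are loosely typed
    // property bags, mirroring how resource properties flow over RPC.
    type invoker interface {
        Invoke(tok string, args map[string]interface{}) (map[string]interface{}, error)
    }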
Print "modified" rather than "modifyd". This introduces a new method,
`resource.StepOp.PastTense()`, which returns the past tense description
of the operation.
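The idea is trivial but worth a sketch (the real method hangs off resource.StepOp):
irregular verbs are special-cased rather than blindly appending a suffix.

    func pastTense(op string) string {
        switch op {
        case "modify":
            return "modified" // not "modifyd"
        default:
            return op + "d" // create -> created, delete -> deleted, update -> updated
        }
    }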
This change improves our output formatting by generally adding
fewer prefixes. As shown in pulumi/pulumi#359, we were being
excessively verbose in many places, including prefixing every
console.out with "langhost[nodejs].stdout: ", displaying full
stack traces for simple errors like missing configuration, etc.
Overall, this change includes the following:
* Don't prefix stdout and stderr output from the program, other
than the standard "info:" prefix. I experimented with various
schemes here, but they all felt gratuitous. Simply emitting
the output seems fine, especially as it's closer to what would
happen if you just ran the program under node.
* Do NOT make writes to stderr fail the plan/deploy. Previously
we assumed that any console.errors, for instance, meant that
the overall program should fail. This simply isn't how stderr
is treated generally and meant you couldn't use certain
logging techniques and libraries, among other things.
* Do make sure that stderr writes in the program end up going to
stderr in the Pulumi CLI output, however, so that redirection
works as it should. This required a new Infoerr log level.
* Make a small fix to the planning logic so we don't attempt to
print the summary if an error occurs.
* Finally, add a new error type, RunError, that when thrown and
uncaught does not result in a full stack trace being printed.
Anyone can use this, however, we currently use it for config
errors so that we can terminate with a pretty error message,
rather than the monstrosity shown in pulumi/pulumi#359.
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
Instead of binding on 0.0.0.0 (which will listen on every interface)
let's only listen on localhost. On windows, this both makes the
connection Just Work and also prevents the Windows Firewall from
blocking the listen (and displaying UI saying it has blocked an
application and asking if the user should allow it).
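A minimal sketch of the change (the helper name is made up):

    import "net"

    // listenLocal binds only to the loopback interface rather than 0.0.0.0;
    // port 0 lets the OS pick any free port, as a plugin host typically would.
    func listenLocal() (net.Listener, error) {
        return net.Listen("tcp", "127.0.0.1:0")
    }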
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism. We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized. We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it. And in any case, the --parallel=P capability will be useful.
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources. We have an extremely strong
desire to resort to using this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case. Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.
This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106. I also think there's a good chance we will flip
the default, so that serial execution is the default, so that developers
who don't benefit from the parallelism don't need to worry about dependsOn
in awkward ways. This tends to be the way most tools (like Make) operate.
This fixes pulumi/pulumi-fabric#335.
This change finishes the conversion of LUMIDL over to the new
runtime model, with the appropriate code generation changes.
It turns out the old model actually had a flaw in it anyway that we
simply didn't hit because we hadn't been stressing output properties
nearly as much as the new model does. This resulted in needing to
plumb the rejection (or allowance) of computed properties more
deeply into the resource property marshaling/unmarshaling logic.
As of these changes, I can run the GitHub provider again locally.
This change fixes pulumi/pulumi-fabric#332.
This change upgrades gRPC to 1.6.0 to pick up a few bug fixes.
We also use the full address for gRPC endpoints, including the
interface name, as otherwise we pick the wrong interface on Linux.
There's a fair bit of clean up in here, but the meat is:
* Allocate the language runtime gRPC client connection on the
goroutine that will use it; this eliminates race conditions.
* The biggie: there *appears* to be a bug in gRPC's implementation
on Linux, where it doesn't implement WaitForReady properly. The
behavior I'm observing is that RPC calls will not retry as they
are supposed to, but will instead spuriously fail during the RPC
startup. To work around this, I've added manual retry logic in
the shared plugin creation function so that we won't even try
to use the client connection until it is in a well-known state.
pulumi/pulumi-fabric#337 tracks getting to the bottom of this and,
ideally, removing the workaround.
The other minor things are:
* Separate run.js into its own module, so it doesn't include
index.js and do a bunch of random stuff it shouldn't be doing.
* Allow run.js to be invoked without a --monitor. This makes
testing just the run part of invocation easier (including
config, which turned out to be super useful as I was debugging).
* Tidy up some messages.
If a resource's planning operation is to do nothing, we can safely
assume that all of its properties are stable. This can be used during
planning to avoid cascading updates that we know will never happen.
This change adds the ability to perform runtime logging, including
debug logging, that wires up to the Pulumi Fabric engine in the usual
ways. Most stdout/stderr will automatically go to the right place,
but this lets us add some debug tracing in the implementation of the
runtime itself (and should come in handy in other places, like perhaps
the Pulumi Framework and even low-level end-user code).
As explained in pulumi/pulumi-fabric#293, we were a little ad-hoc in
how configuration was "applied" to resource providers.
In fact, config wasn't ever communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and that's that, and where generally
speaking resource providers never communicate directly with the language
runtime, we need to take a different approach.
As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables filtered to
just that provider's package. This fixes pulumi/pulumi#293.
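A rough sketch of the filtering step (the key format and helper below are
illustrative): before invoking Configure, keep only the variables whose keys
belong to the provider's package, e.g. keys beginning with "aws:" for the aws
provider.

    import "strings"

    func configForPackage(all map[string]string, pkg string) map[string]string {
        vars := make(map[string]string)
        prefix := pkg + ":"
        for k, v := range all {
            if strings.HasPrefix(k, prefix) {
                vars[k] = v // only this provider's variables are passed along
            }
        }
        return vars
    }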
This change simplifies the provider RPC interface slightly:
1) Eliminate Get. We really don't need it anymore. There are
several possibly-interesting scenarios down the road that may
demand it, but when we get there, we can consider how best to
bring this back. Furthermore, the old-style Get remains mostly
incompatible with Terraform anyway.
2) Pass URNs, not type tokens, across the RPC boundary. This gives
the provider access to more interesting information: the type,
still, but also the name (which is no longer an object property).
This helps avoid running into file handle limits when creating archives including thousands of node_modules files.
Tracking a more complete fix through all other codepaths related to assets as part of #325.
This change ensures we close all Blobs in the asset/archive logic.
In particular, the archive.Read function returns a map of files to
Blobs and after we are done copying the contents we must ensure
that we invoke Close, otherwise we may leak file handles, sockets,
and so on. This may or may not be the culprit to the "too many
files open" errors we are hitting while deploying the M5 bits.
This changes a few things in the CLI, mostly just prettying it up:
* Label all steps more clearly with the kind of step. Also
unify the way we present this during planning and deployment.
* Summarize the changes that *did not* get made just as clearly
as those that did. In other words, stuff like this:
info: 2 resources changed:
+1 resource created
-1 resource deleted
5 resources unchanged
and
info: no resources required
5 resources unchanged
* Always print output properties when they are pertinent.
This includes creates, replacements, and updates.
* Show replacement creates and deletes very distinctly. The
create parts show up minty green and the delete parts show up
rosy red. These are the "physical" steps, compared to the
"logical" step of replacement (which remains marigold).
I still don't love where we are here. The asymmetry between
planning and deployment bugs me, and could be surprising.
("Hey, my deploy doesn't look like my plan!") I don't know
what developers will want to see here and I feel like in
general we are spewing far too much into the CLI to make it
even useful for anything but diagnosing failures afterwards.
I propose that we should do a deep dive on this during the
CLI epic, pulumi/pulumi-service#2.
This resolves pulumi/pulumi-fabric#305.
At the moment, we permit resources to carry a name, which the
engine uses as part of URN creation. Unfortunately, the property
"name" has a very high chance of conflicting with meaningful
user-authored properties. And furthermore this sort of name,
although key to creating URNs which are core to how the overall
system performs deployments and manages resources, is seldom
used programmatically in Lumi programs. As a result, it's a real
nuisance that we stole the good name.
This change renames that property from "name" to "urnName". This
not only has a lower likelihood of conflicting, but it also looks
reasonable sitting alongside the "urn" property. In fact, in some
future universe, after some of the upcoming runtime changes, we
may not even need the name property on the objects whatsoever.
I had originally toyed with the idea of eliminating the Resource
versus NamedResource distinction. This is certainly simpler and
remains a possibility. I didn't do that right now, however, because
the flexibility of letting resource providers name resources however
they see fit still seems possibly useful. For example, we keep
talking about whether functions can be auto-named based on hashes.
Until we've run those conversations to ground, I'd hate to do some
work that just needs to be undone in order to enable a scenario that
has a non-trivial likelihood of us wanting to explore.
We are now freely flowing computed and output properties across the
RPC boundary with providers. As such, we need to tolerate them in
a few more places. Namely, mapping to and from regular non-resource
property values, and also when copying RPC resource state back onto
live runtime objects.
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This changes the RPC interfaces between Lumi and provider ever so
slightly, so that we can track default properties explicitly. This
is required to perform accurate diffing between inputs provided by
the developer, inputs provided by the system, and outputs. This is
particularly important for default values that may be indeterminate,
such as those we use in the bridge to auto-generate unique IDs.
Otherwise, we fail to reapply defaults correctly, and trick the
provider into thinking that properties changed when they did not.
This is a small step towards pulumi/lumi#306, in which we will defer
even more responsibility for diffing semantics to the providers.
This change serializes unknown properties anywhere in the entire
property structure, including deeply embedded inside object maps, etc.
This is now done in such a way that we can recover both the computed
nature of the serialized property, along with its expected eventual
type, on the other side of the RPC boundary.
This will let us have perfect fidelity with the new bridge's view on
computed properties, rather than special casing them on "one side".
For Update and Delete operations, we provided just the input state
for a resource. This is insufficient, because the provider may need
to depend on output state from the Create or prior Update operations.
This change merges the output atop the input during the step application.
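Conceptually, the merge looks like this (a sketch, not the real engine code):
start with the recorded inputs and let any recorded outputs win.

    func mergeState(inputs, outputs map[string]interface{}) map[string]interface{} {
        merged := make(map[string]interface{}, len(inputs)+len(outputs))
        for k, v := range inputs {
            merged[k] = v
        }
        for k, v := range outputs {
            merged[k] = v // outputs from Create/prior Updates take precedence
        }
        return merged
    }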
As part of the bridge bringup, I've discovered that the property state
returned from Creates does *not* always equal the state that is then
read from calls to Get. (I suspect this is a bug and that they should
be equivalent, but I doubt it's fruitful to try and track down all
occurrences of this; I bet it's widespread). To cope with this, we will
return state from Create and Update, instead of issuing a call to Get.
This was a design we considered to start with and frankly didn't have
a super strong reason to do it the current way, other than that it seemed
elegant to place all of the Get logic in one place.
Note that providers may choose to return nil, in which case we will read
state from the provider in the usual Get style.
This change mirrors the dynamic marshaling structure on the static
definition of the Asset and Archive types. This ensures that they
marshal correctly even when deeply embedded inside other structures.
This change brings the same typed serialization we use for RPC
to the serialization of deployments. This ensures that we get
repeatable diffs from one deployment to the next.
This reverts commit c3db70849d.
I've opted to take a new strategy to ensure the bridge properties
don't conflict (with manual renames), similar to the name property.
This change recognizes assets and archives as 1st class resource
property values. This is necessary to support them in the new bridge
work, and lays the foundation for fixing pulumi/lumi#153.
I also took the opportunity to clean up some old cruft in the
resource properties area.
This renames the basemost resource properties, id and urn, to
names that are less likely to conflict with properties that real
resources will want to use, pid and upn (provider ID and Universal
Pulumi Name, respectively).
I actually ran into this with the current bridge work. An alternative
solution would be to require derived resources to pick different names,
however this is unfortunate because usually they are more "user-facing"
than ours. Another alternative is to not hijack the object properties
at all, but that too is problematic because we use these properties
during the evaluation of plans and deployments.
This seems like a reasonable middle ground.
This adds a ReadLocations RPC function to the engine interface, alongside
the singular ReadLocation. The plural function takes a single token that
represents a module or class and we will then return all of the module
or class (static) properties that are currently known.
This adds a handy MapReplace function on pkg/resource's PropertyMap and
PropertyValue types. This is just like the existing Mappable function,
except that it permits easy replacement of elements as the map transformation
occurs. We need this to perform float64=>int transformations.
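To illustrate the kind of transformation this enables (a standalone sketch, not
the PropertyMap implementation itself): walk a JSON-like map and replace
whole-number float64 values with integers as the copy is produced.

    func replaceFloats(m map[string]interface{}) map[string]interface{} {
        out := make(map[string]interface{}, len(m))
        for k, v := range m {
            switch t := v.(type) {
            case float64:
                if t == float64(int64(t)) {
                    out[k] = int64(t) // whole number: demote to an integer
                } else {
                    out[k] = t
                }
            case map[string]interface{}:
                out[k] = replaceFloats(t) // recurse into nested maps
            default:
                out[k] = v
            }
        }
        return out
    }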
We fail very late in the process of plan application, should a duplicate
URN arise. This change fails as early in the process as possible and
ensures that it does so with good line number information.
This properly unwinds the interpreter should something happen that
results in cancellation. This occurs, for example, when the planning
engine encounters an error and decides that it doesn't need to proceed
further with evaluation before it simply goes ahead and exits.
This adds a few missing closes for the plugin host/context. This
should fix pulumi/lumi#261. Eventually when we have more robust
nightly test options, and want to spend the time, we should think
about doing more rigorous stress testing that kills processes at
inopportune times and guarantees we don't leak. I've filed
pulumi/lumi#263 to do that.
This change fixes a few things:
* Most importantly, we need to place a leading "." in the paths
to Gometalinter, otherwise some sub-linters just silently skip
the directory altogether. errcheck is one such linter, which
is a very important one!
* Use an explicit Gometalinter.json file to configure the various
settings. This flips on a few additional linters that aren't
on by default (like line length checking). Sadly, a few that
I'd like to enable take waaaay too much time, so in the future
we may consider a nightly job (this includes code similarity,
unused parameters, unused functions, and others that generally
require global analysis).
* Now that we're running more, however, linting takes a while!
The core Lumi project now takes 26 seconds to lint on my laptop.
That's not terrible, but it's long enough that we don't want to
do the silly "run them twice" thing our Makefiles were previously
doing. Instead, we shall deploy some $$($${PIPESTATUS[1]}-1))-fu
to rely on the fact that grep returns 1 on "zero lines".
* Finally, fix the many issues that this turned up.
I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
We were not propagating the error from `deployLatest` through
to the CLI error result. Despite our recent efforts to integrate
gometalinter, there were also several additional similar cases of
ignored error results reported by `errcheck`. Not yet clear why
these are not being reported via gometalinter.
Fixes #262.
After 233c5a8 landed, I noticed there are a few things to be fixed up:
* Run gometalinter in all the right places. We need to run both in
lint and lint_quiet targets. I've also cleaned up some of the logic
around what to suppress so there's less repetition.
* We currently @ meaningful commands, which is unfortunate, since it
makes debugging Makefiles tough (especially when looking at CI build
logs). Going forward, we should only use @ for meaningless commands,
like @echo.
* The AWS project wasn't actually running tslint, because it needs to
say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
The current script of `tslint lib/aws/pack/...` wasn't actually
running lint, hence we missed a lot of AWS lint issues.
* Fix up the issues that these fixes uncovered. Mostly err shadowing.
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface. In summary:
* Instead of using the NullSource for destructions -- which
doesn't hook up an interpreter and so any reads of configuration
variables will fail -- we will enlighten the EvalSource to know
how to orchestrate destruction interpretation. The primary
difference is that we don't actually run the code, but *we do*
perform all of the necessary configuration and variable init.
* Associate the active interpreter with the plugin context as
we are executing, so that the host object can actually read the
state from the heap as requested to do so by attached plugins.
* Rename anything "engine" related to use the term "host"; this
avoids introducing unnecessary new terminology.
* Add a new pkg/resource/provider/ package where we can begin
consolidating helper functionality for resource providers.
Right now, this includes a wrapper interface atop the gRPC
machinery necessary to contact the host, in addition to a
Main function that hides some boilerplate entrypoint code.
* Add a rpcutil.IsBenignCloseErr routine to let us ignore
"benign" gRPC errors that are knowingly returned at shutdown.
This commit completes pulumi/lumi#117.
This addresses CR feedback from @lukehoban; namely, that we should
be going through the Read API for location reads in the plugin host
to ensure that getters are invoked as appropriate.
I also made Location's various fields private so that we aren't
tempted to make this mistake elsewhere, effectively "forcing" us
to go through the accessor methods.
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.
The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible. We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
let group = aws.ec2.SecurityGroup.get(
"arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
let instance = new aws.ec2.Instance(..., {
securityGroups: [ group ],
});
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
This change simplifies the generated Check interface for providers.
Instead of
Check(ctx context.Context, obj *T) ([]error, error)
where T is the resource type, we have
Check(ctx context.Context, obj *T, property string) error
This is done so that we can drive the calls to Check one property
at a time, allowing us to skip any that are computed. (Otherwise,
we may fail the verification erroneously.)
This has the added advantage that the Check implementations are
simpler and can simply return a single error. Furthermore, the
generated RPC code handles wrapping the result, so we can just do
return errors.New("bad");
rather than the previous reflection-laden junk
return resource.NewFieldError(
reflect.TypeOf(obj), awsservice.AWSResource_Property,
errors.New("bad"))
This change implements showing a summary of the current environment.
All you need to do is run
$ lumi env
and the current environment's information will be printed.
This makes it convenient to grab resource information that might be
required, for instance, to correlate with logs (e.g., lambda ARNs).
Eventually, as per pulumi/lumi#184, we want to print details about
all of the resources too.
Tests all of our commonly used examples.
Also sets test parallelism to 10 by default
since we are I/O bound on API calls to
the resource providers.
Also avoids using larger EC2 examples in
our samples so that we can keep our test
costs lower :-).
On the first turn, we want to distinguish between a coroutine
running that owns its turn, and a coroutine that knows it doesn't
own the turn and is simply awaiting its turn. The old Meet logic
wasn't quite right; instead, we'll have the caller tell us this.
We now have enough output properties implementation
working to change our API gateway examples and API
wrapper to correctly wire the API routes to the ARNs of
lambdas passed in to them.
We both wire up the lambda to the route, but also create
a permission specific to each route to assign to the
corresponding lambda - providing least privilege needed
for the API definition.
Also adds `string#toUpperCase` and fixes NewUniqueHex
to match how we are using it.
This change overwrites output property slots in runtime objects
after performing a CRUD operation, in addition to null or missing
slots, fixing #251. The problem is that we sometimes have output
property values pre-populated in an object, and sometimes don't,
depending on various things (both are legal). We should handle both.
The recent change to run the interpreter and planner on separate goroutines
created the need to perform rendezvous-style synchronization between them.
Although the case of an invoked function properly tore down the synchronization
by communicating the error, we seldom directly invoke functions for JavaScript
programs because of the way module entrypoint code ends up in initializers.
This requires that we propagate errors correctly out of module and class
initializers, in the standard way, so that the unwind makes its way to the top.
This fixes pulumi/lumi#246.
The primary purpose of this change is to mark only immediate outputs
on a resource object as "output" and categorize the rest as computed.
It also contains a few minor things:
* Rebase atop the latest in master.
* Always marshal unknowns as their default value.
* Permit a computed value as the existing ID property, in addition to null.
* Tidy up some asserts.
This change updates the ID/output propagation logic to properly handle
the case of replacements, in addition to accurately conveying the fact
that an update may change the values of output properties (but not the ID).
Also fixes a formatting issue with the replacement diffing displays.
This change introduces an OpSame planning step. The reason we need
this is so that we can apply the necessary output properties, including
the ID, even as we are simply walking the plan (i.e., when we aren't
actually performing a deployment). This ensures that the object state
evolves as required to let reads of output properties propagate in the
ways necessary to reproduce past executions of the program.
We need to run the post-construction hook *before* freezing an object's
readonly properties, since the hook will actually mutate the object in
the case of a deployment (it stores the output properties). In a sense,
this hook simply becomes an extension of the object's constructor.
* Assert new things in new places.
* Log more interesting tidbits during evaluation.
* Invoke the OnStart hook before triggering initializers.
* Tolerate nil prev snapshots during deletion calculation.
* Handle and serialize missing resource IDs as output props.
* Return "done" flag from Rendezvous.Meet.
This change refactors a number of aspects of the CLI's treatment of
steps, in line with the new scheme, and a number of other miscellaneous
and minor fixes. It also regenerates all RPC code impacted by recent renames.
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.
The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
This change, part of pulumi/lumi#90, overhauls quite a bit of the
core resource, planning, environments, and related areas.
The biggest amount of movement comes from the splitting of pkg/resource
into multiple sub-packages. This results in:
- pkg/resource: just the core resource data structures.
- pkg/resource/deployment: all planning and deployment logic.
- pkg/resource/environment: all environment, configuration, and
serialized checkpoint structures and logic.
- pkg/resource/plugin: all dynamically loaded analyzer and
provider logic, including the actual loading and RPC mechanisms.
This also splits the resource abstraction up. We now have:
- resource.Resource: a shared interface.
- resource.Object: a resource that is connected to a live object
that will periodically observe mutations due to ongoing
evaluation of computations. Snapshots of its state may be
taken; however, this is purely a "pre-planning" abstraction.
- resource.State: a snapshot of a resource's state that is frozen.
In other words, it is no longer connected to a live object.
This is what will store provider outputs (ID and properties),
and is what may be serialized into a deployment record.
The branch is in a half-baked state as of this change; more changes
are to come...
LumiJS lambdas can now be serialized when they include calls to other LumiJS lambdas. The chain of lambda dependencies is jointly serialized into the target Lambda.
Also, LumiJS lambdas now include `node_modules` automatically in the AWS Lambda, ensuring that the runtime execution environment more closely matches the deployment time environment.
An early version of the gh-cicd example supporting #134 is added which uses these capabilities, currently including a mocked GitHub resource provider.
This change slightly refactors the way resources are created and
implemented. We now have two implementations of the Resource interface:
* `resource` (in resource_value.go), which is a snapshot of a resource's
state. All values are resolved and there is no live reference to any
heap state or objects. This will be used when serializing and/or
deserializing snapshots of deployments.
* `objectResource` (in resource_object.go), which is an implementation
of the Resource interface that wraps an underlying, live runtime object.
This currently introduces no functional difference, as fetching Inputs()
amounts to taking a snapshot of the full state. But this at least
gives us a leg to stand on in making sure that output properties are
read at the right times during evaluation.
This is a fundamental part of pulumi/lumi#90.
This change begins to track objects that are implicated in the
creation of computed values. This ultimately translates into the
resource URNs which are used during dependency analysis and
serialization. This is part of pulumi/lumi#90.
This change fixes the serialization of resource properties during
deployment checkpoints. We erroneously serialized empty arrays and
empty maps as though they were nil; instead, we want to keep them
serialized as non-nil empty collections, since the presence of a
value might be semantically meaningful. (We still skip nils.)
Also added some test cases.
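A sketch of the distinction now being preserved (not the actual serializer): nil
values are skipped entirely, while empty-but-non-nil collections are emitted as
empty.

    // serialize reports whether the value should be emitted at all and, if so,
    // what to emit. Nil values are dropped; empty slices and maps are kept.
    func serialize(v interface{}) (interface{}, bool) {
        switch t := v.(type) {
        case nil:
            return nil, false
        case []interface{}:
            if t == nil {
                return nil, false
            }
            return t, true // emit [] even when empty
        case map[string]interface{}:
            if t == nil {
                return nil, false
            }
            return t, true // emit {} even when empty
        default:
            return v, true
        }
    }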
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment). It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
During this, I took the opportunity to also clean up a few other things
that had been bugging me. Namely, the presence of `mapper.Object` was
always error prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI. Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type. Even better, we
no longer require resource providers to deal with the mapper
infrastructure. Instead, the `Check` function can simply return an
array of errors. It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.
As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.
This completes pulumi/lumi#183.
This changes the resource model to persist input and output properties
distinctly, so that when we diff changes, we only do so on the programmer-
specified input properties. This eliminates problems when the outputs
differ slightly; e.g., when the provider normalizes inputs, adds its own
values, or fails to produce new values that match the inputs.
This change simultaneously makes progress on pulumi/lumi#90, by beginning
tracking the resource objects implicated in a computed property's value.
I believe this fixes both #189 and #198.
This change alters diag.Message to not format strings and, instead,
encourages developers to use the Infof, Errorf, and Warningf varargs
functions. It also tests that arguments are never interpreted as
format strings.
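A sketch of the intended usage pattern (the constructors here are illustrative):
message text is stored as-is, and only the explicit *f helpers ever perform
formatting, so arguments can never be re-interpreted as format directives.

    import "fmt"

    type message struct{ text string }

    // rawMessage never formats: any '%' in text is preserved literally.
    func rawMessage(text string) message { return message{text: text} }

    // errorf is the only place formatting happens; callers pass values as
    // arguments rather than splicing them into the message text.
    func errorf(format string, args ...interface{}) message {
        return message{text: fmt.Sprintf(format, args...)}
    }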
The AssumeRolePolicyDocument property returned by the AWS IAM GetRole API returns
a URL-encoded JSON string, so we need to decode this before JSON unmarshalling.
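For that decode, the fix amounts to something like the following sketch (the
helper name is made up):

    import (
        "encoding/json"
        "net/url"
    )

    func decodePolicyDocument(encoded string) (map[string]interface{}, error) {
        raw, err := url.QueryUnescape(encoded) // GetRole returns it URL-encoded
        if err != nil {
            return nil, err
        }
        var doc map[string]interface{}
        if err := json.Unmarshal([]byte(raw), &doc); err != nil {
            return nil, err
        }
        return doc, nil
    }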
The Code property returned by AWS Lambda GetFunction provides a pre-signed S3 URL,
which changes on each call, and is of a different format to what is provided by the user.
For now, we'll not store this back into the Function object.
Add additional output properties to AWS Lambda Function that are stable values returned
from GetFunction.
Also corrects a gap where some property delete operations were not being correctly reported.
This is a minor refactoring to introduce a ProviderHost interface
that is associated with the context and can be swapped in and out for
custom plugin behavior. This is required to write tests that mock
certain aspects, like loading packages from the filesystem.
In theory, this change incurs zero behavioral changes.
This change fixes up a few things so that updates correctly deal
with output properties. This involves a few things:
1) All outputs stored on the pre snapshot need to get propagated
to the post snapshot during planning at various points. This
ensures that the diffing logic doesn't need to be special cased
everywhere, including both the Lumi and the provider sides.
2) Names are changed to "input" properties (using a new `lumi` tag
option, `in`). These are properties that providers are expected
to know nothing about, which we must treat with care during diffs.
3) We read back properties, via Get, after doing an Update just like
we do after performing a Create. This ensures that if an update
has a cascading impact on other properties, it will be detected.
4) Inspecting a change, prior to updating, must be done using the
computed property set instead of the real one. This is to avoid
mutating the resource objects ahead of actually applying a plan,
which would be wrong and misleading.
This change remembers which properties were computed as outputs,
or even just read back as default values, during a deployment. This
information is required in the before/after comparison in order to
perform an intelligent diff that doesn't flag, for example, the absence
of "default" values in the after image as deletions (among other things).
As I was in here, I also cleaned up the way the provider interface
works, dealing with concrete resource types, making it feel a little
richer and less like we're doing in-memory RPC.
This change makes progress on a few things with respect to properly
receiving properties on the engine side, coming from the provider side,
of the RPC boundary. The issues here are twofold:
1. Properties need to get unmapped using a JSON-tag-sensitive
marshaler, so that they are cased properly, etc. For that, we
have a new mapper.Unmap function (which is ultra lame -- see
pulumi/lumi#138).
2. We have the reverse problem with respect to resource IDs: on
the send side, we must translate from URNs (which the engine
knows about) and provider IDs (which the provider knows about);
similarly, then, on the receive side, we must translate from
provider IDs back into URNs.
As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
The change to flow logging to plugins is nice, however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors. After this change, we will only flow if
--logflow is passed, e.g. as in
$ lumi --logtostderr --logflow -v=9 deploy ...
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so. As a result, this change takes an initial
whack at implementing Get on all existing AWS resources. The Get API needs
to return a fully populated structure containing all inputs and outputs.
Believe it or not, this is actually part of pulumi/lumi#90.
This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.
This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties. For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
This change introduces the notion of a computed versus an output
property on resources. Technically, output is a subset of computed,
however it is a special kind that we want to treat differently during
the evaluation of a deployment plan. Specifically:
* An output property is any property that is populated by the resource
provider, not code running in the Lumi type system. Because these
values aren't available during planning -- since we have not yet
performed the deployment operations -- they will be latent values in
our runtime and generally missing at the time of a plan. This is no
problem and we just want to avoid marshaling them in inopportune places.
* A computed property, on the other hand, is a different beast altogether.
Although it is true that one of these is missing a value -- by virtue of the fact
that they too are latent values, bottoming out in some manner on an
output property -- they will appear in serializable input positions.
Not only must we treat them differently during the RPC handshake and
in the resource providers, but we also want to guarantee they are gone
by the time we perform any CRUD operations on a resource. They are
purely a planning-time-only construct.
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.
In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways. Namely,
operations against Latent<T>s generally produce new Latent<U>s.
During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values. An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange. As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values. My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.
For now, using an unresolved Latent<T> in a conditional will lead
to an error. See pulumi/lumi#67. Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support. That is a missing 1/3.
Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values. Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
This change flows --logtostderr and -v=x settings to any dynamically
loaded plugins so that running Lumi's command line with these flags
will also result in the plugins logging at the requested levels. I've
found this handy for debugging purposes.
Resolves #137.
This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.
A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.
LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.
Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
The storing of output properties won't work correctly until pulumi/lumi#90
is completed. The update logic sees properties that weren't supplied by the
developer and thinks this means an update is required; this is easy to fix
but better to just roll into the overall pending change that will land soon.
We need a stable object key enumeration order and we might as well leverage
ECMAScript's definition for this. As of ES6, key ordering is specified; see
https://tc39.github.io/ecma262/#sec-ordinaryownpropertykeys.
I haven't fully implemented the "numbers come first" part (we can do this as
soon as we have support for Object.keys()), but the chronological part works.
Our initial implementation of assets was intentionally naive, because
they were limited to single-file assets. However, it turns out that for
real scenarios (like lambdas), we want to support multi-file assets.
In this change, we introduce the concept of an Archive. An archive is
what the term classically means: a collection of files, addressed as one.
For now, we support three kinds: tarfile archives (*.tar), gzip-compressed
tarfile archives (*.tgz, *.tar.gz), and normal zipfile archives (*.zip).
There is a fair bit of library support for manipulating Archives as a
logical collection of Assets. I've gone to great lengths to avoid making
copies, however, sometimes it is unavoidable (for example, when sizes
are required in order to emit offsets). This is also complicated by the
fact that the AWS libraries often want seekable streams, if not actual
raw contiguous []byte slices.
Unfortunately, this wasn't a great name. The old one stunk, but the
new one was misleading at best. The thing is, this isn't about performing
an update -- it's about NOT doing an update, depending on its return value.
Further, it's not just previewing the changes, it is actively making a
decision on what to do in response to them. InspectUpdate seems to convey
this and I've unified the InspectUpdate and Update routines to take a
ChangeRequest, instead of UpdateRequest, to help imply the desired behavior.
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.
I've been kicking the tires with this locally enough to checkpoint the
current version. There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
In order to support output properties (pulumi/coconut#90), we need to
modify the Create gRPC interface for resource providers slightly. In
addition to returning the ID, we need to also return any properties
computed by the AWS provider itself. For instance, this includes ARNs
and IDs of various kinds. This change simply propagates the resources
but we don't actually support reading the outputs just yet.
This change renames two provider methods:
* Read becomes Get.
* UpdateImpact becomes PreviewUpdate.
These just read a whole lot nicer than the old names.
The order of operations for stderr/stdout monitoring with plugins
managed to hide some important errors. For example, if something
was written to stderr *before* the port was parsed from stdout, a
very possible scenario if the plugin fails before it has even
properly started, then we would silently drop the stderr on the floor
leaving behind no indication of what went wrong. The new ordering
ensures that stderr is never ignored with some minor improvements
in the case that part of the port is parsed from stdout but it
ultimately ends in an error (e.g., if an EOF occurs prematurely).
This change includes a first basic whack at implementing S3 bucket
objects. It leverages the assets infrastructure put in place in the
last commit, supporting uploads from text, files, or arbitrary URIs.
Most of the interesting object properties remain unsupported for now,
but with this we can upload and delete basic S3 objects, sufficient
for a lot of the lambda functions management we need to implement.
This change introduces the basic concept of assets. It is far from
fully featured, however, it is enough to start adding support for various
storage kinds that require access to I/O-backed data (files, etc).
The challenge is that Coconut is deterministic by design, and so you
cannot simply read a file in an ad-hoc manner and present the bytes to
a resource provider. Instead, we will model "assets" as first class
entities whose data source is described to the system in a more declarative
manner, so that the system and resource providers can manage them.
There are three ways to create an asset at the moment:
1. A constant, in-memory string.
2. A path to a file on the local filesystem.
3. A URI, whose scheme is extensible.
Eventually, we want to support byte blobs, but due to our use of a
"JSON-like" type system, this isn't easily expressible just yet.
The URI scheme is extensible in that file://, http://, and https://
are supported "out of the box", but individual providers are free to
recognize their own schemes and support them. For instance, copying
one S3 object to another will be supported simply by passing a URI
with the s3:// protocol in the usual way.
Many utility functions are yet to be written, but this is a start.
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one. As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
This change adds the ability to specify analyzers in two ways:
1) By listing them in the project file, for example:
analyzers:
- acmecorp/security
- acmecorp/gitflow
2) By explicitly listing them on the CLI, as a "one off":
$ coco deploy <env> \
--analyzer=acmecorp/security \
--analyzer=acmecorp/gitflow
This closes out pulumi/coconut#119.
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119. In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource. The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure properties of them are correct, such as checking style,
security, etc. rules). These run simultaneously with overall checking.
Analyzers are loaded as plugins just like providers are. The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.
This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
This changes a few naming things:
* Rename "husk" to "environment" (`coco env` for short).
* Rename NutPack/NutIL to CocoPack/CocoIL.
* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.
* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.
* Rename the package asset directory from nutpack/ to .coconut/.
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties. This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caught early
before it's too late). My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.
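To make that motivating example concrete, a provider-side Check might look roughly
like this sketch (the type, method shape, and AMI lookup are hypothetical, not the
generated RPC stubs):

    import (
        "errors"
        "fmt"
    )

    type instanceProvider struct{ region string }

    // amiAvailable stands in for a real EC2 DescribeImages lookup.
    func (p *instanceProvider) amiAvailable(ami string) bool { return true }

    // Check runs before any create/update/delete, returning one error per
    // property problem so that failures surface early.
    func (p *instanceProvider) Check(props map[string]interface{}) []error {
        var failures []error
        ami, ok := props["imageId"].(string)
        switch {
        case !ok || ami == "":
            failures = append(failures, errors.New("imageId is required"))
        case !p.amiAvailable(ami):
            failures = append(failures, fmt.Errorf("AMI %q is not available in region %q", ami, p.region))
        }
        return failures
    }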
This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet. I'll add a work item now for that...
This change is mostly just a rename of Moniker to URN. It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn:coconut:<name>", where <name> is the same as the prior Moniker.
This is a minor step that helps to prepare us for pulumi/coconut#109.
This change adds a new resource.NewUniqueHex API, that simply generates
a unique hex string with the given prefix, with a specific count of
random bytes, and optionally capped to a maximum length.
This is used in the AWS SecurityGroup resource provider to avoid name
collisions, which is especially important during replacements (otherwise
we cannot possibly create a new instance before deleting the old one).
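In spirit, the helper does something like the following (a sketch; the real
implementation lives in pkg/resource):

    import (
        "crypto/rand"
        "encoding/hex"
    )

    // newUniqueHex appends randByteCount bytes of entropy, hex-encoded, to the
    // prefix, and truncates the result to maxLen when a positive cap is given.
    func newUniqueHex(prefix string, randByteCount, maxLen int) (string, error) {
        b := make([]byte, randByteCount)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        s := prefix + hex.EncodeToString(b)
        if maxLen > 0 && len(s) > maxLen {
            s = s[:maxLen]
        }
        return s, nil
    }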
This resolves pulumi/coconut#108.
This change tracks which updates triggered a replacement. This enables
better output and diagnostics. For example, we now colorize those
properties differently in the output. This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.
This change, part of pulumi/coconut#105, rearranges support for
resource replacement. The old model didn't properly account for
the cascading updates and possible replacement of dependencies.
Namely, we need to model a replacement as a creation followed by
a deletion, inserted into the overall DAG correctly so that any
resources that must be updated are updated after the creation but
prior to the deletion. This is done by inserting *three* nodes
into the graph per replacement: a physical creation step, a
physical deletion step, and a logical replacement step. The logical
step simply makes it nicer in the output (the plan output shows
a single "replacement" rather than the fine-grained outputs, unless
they are requested with --show-replace-steps). It also makes it
easier to fold all of the edges into a single linchpin node.
As part of this, the update step no longer gets to choose whether
to recreate the resource. Instead, the engine takes care of
orchestrating the replacement through actual create and delete calls.
This change introduces a new RPC function to the provider interface;
in pseudo-code:
UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap)
(bool, PropertyMap, error)
Essentially, during the planning phase, we will consult each provider
about the nature of a proposed update. This update includes a set of
old properties and the new ones and, if the resource provider will need
to replace the resource as a result of the update, it will return true;
in general, the PropertyMap will eventually contain a list of all
properties that will be modified as a result of the operation (see below).
The planning phase reacts to this by propagating the change to dependent
resources, so that they know that the ID will change (and so that they
can recalculate their own state accordingly, possibly leading to a ripple
effect). This ensures the overall DAG / schedule is ordered correctly.
This change is most of pulumi/coconut#105. The only missing piece
is to generalize replacing the "ID" property with replacing arbitrary
properties; there are hooks in here for this, but until pulumi/coconut#90
is addressed, it doesn't make sense to make much progress on this.
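As an illustrative provider-side sketch (types and signature approximate; the real shape is defined over RPC), a provider might report that changing an immutable property forces a replacement:
    package awsprovider

    // PropertyMap is an illustrative stand-in for the real property bag type.
    type PropertyMap map[string]interface{}

    // updateImpact answers the planning-phase question: does this update force a
    // replacement, and which properties change as a result?
    func updateImpact(id, t string, olds, news PropertyMap) (bool, PropertyMap, error) {
        changes := PropertyMap{}
        oldAMI, _ := olds["imageId"].(string)
        newAMI, _ := news["imageId"].(string)
        if oldAMI != newAMI {
            // Changing the AMI cannot be done in place; the instance must be replaced.
            changes["imageId"] = newAMI
            return true, changes, nil
        }
        return false, changes, nil
    }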
For certain update shapes, we will need to recover an ID of an already-deleted,
or soon-to-be-deleted resource; in those cases, we have a moniker but want to
serialize an ID. This change implements support for remembering/recovering them.
This change fixes a couple issues that prevented restarting a
deployment after partial failure; this was due to the fact that
unchanged resources didn't propagate IDs from old to new. This
is remedied by making unchanged a map from new to old, and making
ID propagation the first thing plan application does.
This change does a few things:
* First and foremost, it tracks configuration variables that are
initialized, and optionally prints them out as part of the
prelude/header (based on --show-config), both in a dry-run (plan)
and in an actual deployment (apply).
* It tidies up some of the colorization and messages, and includes
nice banners like "Deploying changes:", etc.
* Fix an assertion.
* Issue a new error
"One or more errors occurred while applying X's configuration"
just to make it easier to distinguish configuration-specific
failures from ordinary ones.
* Change config keys to tokens.Token, not tokens.ModuleMember,
since it is legal for keys to represent class members (statics).
This change adds support for configuration maps.
This is a new feature that permits initialization code to come from markup,
after compilation, but before evaluation. There is nothing special about this
code; it could have been authored by a user. But it offers a convenient
way to specialize configuration settings per target husk, without needing
to write code to specialize each of those husks (which is needlessly complex).
For example, let's say we want to have two husks, one in AWS's us-west-1
region, and the other in us-east-2. From the same source package, we can
just create two husks, let's say "prod-west" and "prod-east":
prod-west.json:
{
"husk": "prod-west",
"config": {
"aws:config:region": "us-west-1"
}
}
prod-east.json:
{
"husk": "prod-east",
"config": {
"aws:config:region": "us-east-2"
}
}
Now when we evaluate these packages, they will automatically poke the
right configuration variables in the AWS package *before* actually
evaluating the CocoJS package contents. As a result, the static variable
"region" in the "aws:config" package will have the desired value.
This is obviously fairly general purpose, but will allow us to experiment
with different schemes and patterns. Also, I need to whip up support
for secrets, but that is a task for another day (perhaps tomorrow).
* Delete husks if err == nil, not err != nil.
* Swizzle the formatting padding on array elements so that the
diff modifier + or - binds more tightly to the [N] part.
* Print the un-doubly-indented padding for array element headers.
* Add some additional logging to step application (it helped).
* Remember unchanged resources even when glogging is off.
This change adds a --show-sames flag to `coco husk deploy`. This is
useful as I'm working on updates, to show what resources haven't changed
during a deployment.
This change checkpoints deployments properly. That is, even in the
face of partial failure, we should keep the huskfile up to date. This
accomplishes that by tracking the state during plan application.
There are still ways in which this can go wrong, however. Please see
pulumi/coconut#101 for additional thoughts on what we might do here
in the future to make checkpointing more robust in the face of failure.
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit. This
makes targets ("husks") more of a first class thing.
Namely, you must first initialize a husk before using it:
$ coco husk init staging
Coconut husk 'staging' initialized; ready for deployments
Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments. The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:
$ ... make some changes ...
$ coco husk deploy staging
... standard deployment progress spew ...
Finally, should you want to teardown an entire environment:
$ coco husk destroy staging
... standard deletion progress spew for all resources ...
Coconut husk 'staging' has been destroyed!
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update. This simplifies the management of deployment
records, checkpoints, and snapshots.
I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming). The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:
nutpack/
bin/
Nutpack.json
... any other compiled artifacts ...
husks/
... one snapshot per husk ...
For example, if we had a stage and prod husk, we would have:
nutpack/
bin/...
husks/
prod.json
stage.json
In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment. These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.
The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
* Eliminate some superfluous "\n"s.
* Remove the redundant properties stored on AWS resources.
* Compute array diff lengths properly (+1).
* Display object property changes from null to non-null as
adds; and from non-null to null as deletes.
* Fix a boolean expression from ||s to &&s. (Bone-headed).
* Print a pretty message if the plan has nothing to do:
"info: nothing to do -- resources are up to date"
* Add an extra validation step after reading in a snapshot,
so that we detect more errors sooner. For example, I've
fed in the wrong file several times, and it just chugs
along as though it were actually a snapshot.
* Skip printing nulls in most plan outputs. These just
clutter up the output.
This change detects duplicate object names (monikers) and issues a nice
error message with source context included. For example:
index.ts(260,22): error MU2006: Duplicate objects with the same name:
prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group
The prior code asserted and failed abruptly, whereas this actually points
us to the offending line of code:
let group1 = new aws.ec2.SecurityGroup("group", { ... });
let group2 = new aws.ec2.SecurityGroup("group", { ... });
^^^^^^^^^^^^^^^^^^^^^^^^^
This change overhauls the way we do object monikers. The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions. The new approach mixes some amount of "automatic scoping"
plus some "explicit naming." Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.
Each moniker has four parts:
<Namespace>::<AllocModule>::<Type>::<Name>
wherein each element is the following:
<Namespace> The namespace being deployed into
<AllocModule> The module in which the object was allocated
<Type> The type of the resource
<Name> The assigned name of the resource
The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.
The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created. In the future, we may wish to
extend this to also track the module under evaluation. (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)
The <Name> warrants more discussion. The resource provider is consulted
via a new gRPC method, Name, that fetches the name. How the provider
does this is entirely up to it. For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`; and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).
This should overall produce better results with respect to moniker
collisions, ability to match resources, and the usability of the system.
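To make the format concrete, the moniker from the duplicate-name error shown earlier decomposes as follows:
    prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group
    <Namespace>   = prod
    <AllocModule> = ec2instance:index
    <Type>        = aws:ec2/securityGroup:SecurityGroup
    <Name>        = group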
This change is a first whack at implementing updates.
Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order. For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies). These are just special cases of the more
general idea of performing DAG operations in dependency order.
Updates must work in terms of this more general notion. For example:
* It is an error to delete a resource while another refers to it; thus,
resources are deleted after deleting dependents, or after updating
dependent properties that reference the resource to new values.
* It is an error to depend on a resource before it is created;
thus, resources must be created before dependents are created, and/or
before updates to existing resource properties that would cause them
to refer to the new resource.
Of course, all of this is tangled up in a graph of dependencies. As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.
To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
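A generic sketch of the dependency-ordered scheduling this relies on (illustrative names only, not the actual planGraph API): a depth-first topological sort that emits each node only after everything it depends on.
    package plan

    import "fmt"

    // topoSort orders nodes so that every dependency appears before its dependents.
    // deps maps a node to the nodes it depends on.
    func topoSort(deps map[string][]string) ([]string, error) {
        const (
            white = iota // unvisited
            gray         // on the current DFS stack
            black        // done
        )
        color := map[string]int{}
        var order []string
        var visit func(n string) error
        visit = func(n string) error {
            switch color[n] {
            case gray:
                return fmt.Errorf("cycle detected at %s", n)
            case black:
                return nil
            }
            color[n] = gray
            for _, d := range deps[n] {
                if err := visit(d); err != nil {
                    return err
                }
            }
            color[n] = black
            order = append(order, n) // dependencies precede dependents
            return nil
        }
        for n := range deps {
            if err := visit(n); err != nil {
                return nil, err
            }
        }
        return order, nil
    }
Creation plans run this order forward; deletion plans simply reverse it.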
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins stdout/stderr streams to it. In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error. This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs, however I hope we don't have to deviate too much from this.
This change adds custom serialization and deserialization to preserve
the ordering of resources in snapshot files. Unfortunately, because
Go maps are unordered, the results are scrambled (actually, it turns out,
Go will sort by the key). An alternative would be to resort the results
after reading in the file, or storing as an array, but this would be a
change to the MuGL file format and is less clear than simply keying each
resource by its moniker as we are doing today.
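A generic sketch of the approach (illustrative, not the actual snapshot code): remember insertion order alongside the moniker-keyed map, and emit keys in that order when marshaling.
    package snapshot

    import (
        "bytes"
        "encoding/json"
    )

    // orderedResources keeps resources keyed by moniker while remembering the
    // order in which they were added, since Go maps do not preserve ordering.
    type orderedResources struct {
        order  []string
        byName map[string]json.RawMessage
    }

    func (o orderedResources) MarshalJSON() ([]byte, error) {
        var buf bytes.Buffer
        buf.WriteByte('{')
        for i, k := range o.order {
            if i > 0 {
                buf.WriteByte(',')
            }
            key, err := json.Marshal(k)
            if err != nil {
                return nil, err
            }
            buf.Write(key)
            buf.WriteByte(':')
            buf.Write(o.byName[k])
        }
        buf.WriteByte('}')
        return buf.Bytes(), nil
    }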
This change repivots the plan/apply commands slightly. This is largely
in preparation for performing deletes and updates of existing environments.
The old way was slightly confusing and made things appear more "magical"
than they actually are. Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.
The new way is to offer three commands: create, update, and delete. Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely. The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file plus a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.
Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands. This will print out the plan without
actually performing any operations.
All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.
This change adds support for serializing snapshots in MuGL, per the
design document in docs/design/mugl.md. At the moment, it is only
exposed from the `mu plan` command, which allows you to specify an
output location using `mu plan --output=file.json` (or `-o=file.json`
for short). This serializes the snapshot with monikers, resources,
and so on. Deserialization is not yet supported; that comes next.
* Specify MinCount/MaxCount when creating an EC2 instance. These
are required properties on the RunInstances API.
* Only attempt to unmarshal egressArray/ingressArray when non-nil.
* Remember the context object on the instanceProvider.
* Move the moniker and object maps into the shared context object.
* Marshal object monikers as the resource IDs to which they refer,
since monikers are useless on "the other side" of the RPC boundary.
This ensures that, for example, the AWS provider gets IDs it can use.
* Add some paranoia assertions.
This commit includes a basic AWS resource provider. Mostly it is just
scaffolding, however, it also includes prototype implementations for EC2
instance and security group resource creation operations.
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.
In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them. It will be
a policy decision in server scenarios how much state to share and
between whom. This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.
Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).
If we find a plugin, we will load it as a child process.
The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n"). This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).
Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client. The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls. For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
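A sketch of that handshake (abbreviated error handling; names are illustrative): start the plugin, read the port it chose from the first line of stdout, then dial gRPC on that port.
    package plugin

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"

        "google.golang.org/grpc"
    )

    // loadPlugin launches the plugin binary and connects to the port it announces.
    func loadPlugin(path string) (*grpc.ClientConn, error) {
        cmd := exec.Command(path)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return nil, err
        }
        if err := cmd.Start(); err != nil {
            return nil, err
        }
        // The first stdout line is the port; everything afterwards is free-form output.
        port, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            return nil, err
        }
        return grpc.Dial(fmt.Sprintf("127.0.0.1:%s", strings.TrimSpace(port)), grpc.WithInsecure())
    }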
This moves us one step closer from planning to application (marapongo/mu#21).
Namely, we now drive the right resource provider operations in response to
the plan's steps. Those providers, however, are still empty shells.
This change adds a flag to `plan` so that we can create deletion plans:
$ mu plan --delete
This will have an equivalent in the `apply` command, achieving the ability
to delete entire sets of resources altogether (see marapongo/mu#58).
This change introduces object monikers. These are unique, serializable
names that refer to resources created during the execution of a MuIL
program. They are pretty darned ugly at the moment, but at least they
serve their desired purpose. I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being. The names right now are
particularly sensitive to simple refactorings.
This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable. I'd prefer to do that with a bit of
experience under our belts first.
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.
It contains the following key abstractions:
* resource.Provider is a wrapper around the CRUD operations exposed by
underlying resource plugins. It will eventually defer to resource.Plugin,
which itself defers -- over an RPC interface -- to the actual plugin, one
per package exposing resources. The provider will also understand how to
load, cache, and overall manage the lifetime of each plugin.
* resource.Resource is the actual resource object. This is created from
the overall evaluation object graph, but is simplified. It contains only
serializable properties, for example. Inter-resource references are
translated into serializable monikers as part of creating the resource.
* resource.Moniker is a serializable string that uniquely identifies
a resource in the Mu system. This is in contrast to resource IDs, which
are generated by resource providers and generally opaque to the Mu
system. See marapongo/mu#69 for more information about monikers and some
of their challenges (namely, designing a stable algorithm).
* resource.Snapshot is a "snapshot" taken from a graph of resources. This
is a transitive closure of state representing one possible configuration
of a given environment. This is what plans are created from. Eventually,
two snapshots will be diffable, in order to perform incremental updates.
One way of thinking about this is that a snapshot of the old world's state
is advanced, one step at a time, until it reaches a desired snapshot of
the new world's state.
* resource.Plan is a plan for carrying out desired CRUD operations on a target
environment. Each plan consists of zero-to-many Steps, each of which has
a CRUD operation type, a resource target, and a next step. This is an
enumerator because it is possible the plan will evolve -- and introduce new
steps -- as it is carried out (hence, the Next() method). At the moment, this
is linearized; eventually, we want to make this more "graph-like" so that we
can exploit available parallelism within the dependencies.
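An illustrative sketch of the Plan/Step enumerator shape just described (approximate names; the real interfaces live in pkg/resource):
    package resource

    // StepOp is the CRUD operation a step performs.
    type StepOp int

    const (
        OpCreate StepOp = iota
        OpUpdate
        OpDelete
    )

    // Resource is the simplified, serializable form of an object in the graph.
    type Resource struct {
        Moniker    string
        Type       string
        Properties map[string]interface{}
    }

    // Step is one CRUD operation in a plan; Next returns nil when the plan is exhausted.
    type Step interface {
        Op() StepOp
        Target() *Resource
        Next() Step
    }

    // Plan is an enumerable chain of steps that may grow as it is carried out.
    type Plan interface {
        Steps() Step
    }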
There are tons of TODOs remaining. However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.
This is part of marapongo/mu#38 and marapongo/mu#41.
This change moves the RPC definitions to the pkg/murpc package,
underneath the proto folder. This is to ensure that the resulting
Protobufs get a good package name "murpc" that is unique within the
overall toolchain. It also includes a `generate.sh` script that
can be used to manually regenerate client/server code. Finally,
we are actually checking in the generated files underneath pkg/murpc.
This change deletes a bunch of old legacy code. I've tagged the
prior changelist as "0.1" in case we need to go back and recover
some of the code (as I expect some of the constraint type logic
and AWS-specific code to come in handy down the road).
🔥🔥🔥
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level. In fact, the whole code-generation
phase of the compiler was based on it.
In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.
Therefore, most of this stuff can go. I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision. For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.
I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc). But for the time being, it's a distraction
getting the core model running. I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers. (Although we won't be using
CloudFormation, some of the name generation code might be useful.) So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.