This change brings component outputs back to the overall system.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever an
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
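Roughly, the observer interface described above might take a shape like the following Go sketch (type names and signatures are assumptions, not the actual definitions):

```go
package engine

// Step and State stand in for the engine's real step and resource-state types.
type Step struct {
	Op  string
	URN string
}

type State struct {
	URN     string
	Outputs map[string]interface{}
}

// StepEventHandler is a guess at the observer interface described above:
// the CLI implements it and the engine invokes it as the plan executes.
type StepEventHandler interface {
	// OnResourceStepPre fires immediately before a step operation runs.
	OnResourceStepPre(step Step)
	// OnResourceStepPost fires immediately after a step operation runs.
	OnResourceStepPost(step Step, err error)
	// OnResourceComplete fires whenever an EndRegisterResource arrives,
	// carrying any output properties synthesized by the component.
	OnResourceComplete(state *State)
}
```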
Parallelize collection of logs at each layer of the resource tree walk. Since almost all cost of log collection is at leaf nodes, this should allow the total time for log collection to be close to the time taken for the single longest leaf node, instead of the sum of all leaf nodes.
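As an illustration of the fan-out described here, a minimal Go sketch (the tree and log types are hypothetical stand-ins for the real resource-tree walk):

```go
package logs

import "sync"

// Node is a stand-in for one layer of the resource tree.
type Node struct {
	Children []*Node
}

// LogEntry is a placeholder for a collected log record.
type LogEntry struct{ Message string }

// collect gathers this node's logs and, in parallel, its children's.
// Total latency approaches the slowest child rather than the sum of all.
func collect(n *Node, fetch func(*Node) []LogEntry) []LogEntry {
	results := make([][]LogEntry, len(n.Children))
	var wg sync.WaitGroup
	for i, child := range n.Children {
		wg.Add(1)
		go func(i int, c *Node) {
			defer wg.Done()
			results[i] = collect(c, fetch)
		}(i, child)
	}
	logs := fetch(n) // this node's own (usually cheap) logs
	wg.Wait()
	for _, r := range results {
		logs = append(logs, r...)
	}
	return logs
}
```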
Outside of `.pulumiignore` we support a few "default" excludes that
try to push folks towards a pit of success.
Previously, there was no way to opt out of these, which would be bad
if our heuristics caused something you really care about to be
elided. With this change, we add an optional setting in Pulumi.yaml
that allows you to opt out of this behavior.
As part of the work, I changed .git to be one of these "default"
excludes instead of it only happening if you had a .pulumiignore file
in a directory.
When working on the `.pulumiignore` support, I added a small command
that would just dump the file we sent to the service (without actually
talking to the service) so I could do some measurements on archive
sizes.
I mentioned this to Chris who said he had hacked up a similar thing
for something he was working on, so it felt worthwhile to check this
command in (hidden behind a flag) to share the implementation.
This command is hidden behind an environment variable, so customers
will not see it unless they opt in.
This change adds back component output properties. Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
Adds support for top-level exports in the main script of a Pulumi Program to be captured as stack-level output properties.
This creates a new `pulumi:pulumi:Stack` component as the root of the resource tree in all Pulumi programs. That resource has properties for each top-level export in the Node.js script.
Running `pulumi stack` will display the current value of these outputs.
A handful of UX improvements for config:
- `pulumi config ls` has been removed. Now, `pulumi config` with no
arguments prints the table of configuration values for a stack and
a new command `pulumi config get <key>` prints the value for a
single configuration key (useful for scripting).
- `pulumi config text` and `pulumi config secret` have been merged
into a single command `pulumi config set`. The flag `--secret` can
be used to encrypt the value we store (like `pulumi config secret`
used to do).
- To make it obvious that setting a value with `pulumi config set` is
in plain text, we now echo a message back to the user saying we
added the configuration value in plaintext.
Fixes #552
In an effort to improve performance and overall reliability, this PR moves the responsibility of uploading the Pulumi program from the Pulumi Service to the CLI. (Part of fixing https://github.com/pulumi/pulumi-service/issues/313.)
Previously the CLI would send (the dozens of MiB) program archive to the Service, which would then upload the data to S3. Now the CLI sends the data to S3 directly, avoiding the unnecessary copying of data around.
The Service-side API changes are in https://github.com/pulumi/pulumi-service/pull/323. I tested previews, updates, and destroys running the service and PPC on localhost.
The PR refactors how we handle the three kinds of program updates, and just unifies them into a single method. This makes the diff look crazy, but the code should be much simpler. I'm not sure what to do about supporting all the engine options for the Cloud-variants of Pulumi commands; I suspect that's something that should be handled at a later time.
This is the smallest possible thing that could work for both the local
development case and the case where we are targeting the Pulumi
Service.
Pulling down the pulumiframework and component packages here is a bit
of a feel bad, but we plan to rework the model soon enough to a
provider model which will remove the need for us to hold on to this
code (and will bring back the factoring where the CLI does not have
baked in knowledge of the Pulumi Cloud Framework).
Fixes #527
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.
The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.
Fixes #551.
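A small sketch of how a `Raw` flag like this can short-circuit formatting; the surrounding struct fields here are assumptions:

```go
package diag

import "fmt"

// Message carries either a format string or raw, preformatted text.
type Message struct {
	Text string        // format string, or literal text when Raw is set
	Raw  bool          // if true, Text is not a format string
	Args []interface{} // format arguments, ignored when Raw is set
}

// Stringify renders a message, skipping Sprintf for raw content so that
// stray '%' characters in plugin stdout/stderr are not reinterpreted.
func Stringify(m Message) string {
	if m.Raw {
		return m.Text
	}
	return fmt.Sprintf(m.Text, m.Args...)
}
```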
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.
We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.
The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.
We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.
In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.
We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.
We currently support sending tracing data to a Zipkin-compatible endpoint. This has been validated with both Zipkin and Jaeger UIs.
We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface. There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
This code was swallowing an error incorrectly, rather than returning
it. As a result, certain commands would fail with assertions rather
than the intended error message (like trying to run `pulumi config` without a stack).
Rework the polling code for `pulumi` when targeting hosted scenarios. (i.e. absorb https://github.com/pulumi/pulumi-service/pull/282.) We now return an `updateID` for update, preview, and destroy, and bake that into the URL.
This PR also silently ignores 504 errors which are now returned from the Pulumi Service if the PPC request times out.
Combined, this should ameliorate some of the symptoms we see from https://github.com/pulumi/pulumi-ppc/issues/60. Though we'll continue to try to identify and fix the root cause.
Previously, we stored configuration information in the Pulumi.yaml
file. This was a change from the old model where configuration was
stored in a special section of the checkpoint file.
While doing things this way has some upsides with being able to flow
configuration changes with your source code (e.g. fixed values for a
production stack that version with the code) it caused some friction
for the local development scenario. In this case, setting
configuration values would leave pending changes to Pulumi.yaml, and if you
didn't want to publish these changes, you'd have to remember to remove
them before committing. It was also problematic for our examples, where
it was not clear if we wanted to actually include values like
`aws:config:region` in our samples. Finally, we found that for our
own pulumi service, we'd have values that would differ across each
individual dev stack, and publishing these values to a global
Pulumi.yaml file would just be adding noise to things.
We now adopt a hybrid model, where by default configuration is stored
locally, in the workspace's settings per project. A new flag `--save`
tells commands to instead operate on the configuration information
stored in Pulumi.yaml.
With this change, we have four "slots" that configuration
values can end up in:
1. In the Pulumi.yaml file, applies to all stacks
2. In the Pulumi.yaml file, applied to a specific stack
3. In the local workspace.json file, applied to all stacks
4. In the local workspace.json file, applied to a specific stack
When computing the configuration information for a stack, we apply
configuration in the above order, overriding values as we go
along.
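To make the precedence concrete, here is a small, self-contained sketch of merging the four sources in the order listed above (keys and values are purely illustrative):

```go
package main

import "fmt"

func main() {
	// Later sources override earlier ones, per the order described above.
	sources := []map[string]string{
		{"aws:config:region": "us-west-2"},            // 1. Pulumi.yaml, all stacks
		{"app:config:message": "hello"},               // 2. Pulumi.yaml, this stack
		{"aws:config:region": "us-east-1"},            // 3. workspace.json, all stacks
		{"app:config:message": "hello from my stack"}, // 4. workspace.json, this stack
	}
	merged := map[string]string{}
	for _, src := range sources {
		for k, v := range src {
			merged[k] = v
		}
	}
	fmt.Println(merged) // map[app:config:message:hello from my stack aws:config:region:us-east-1]
}
```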
We also invert the default behavior of the `pulumi config` commands so
they operate on a specific stack (i.e. how they did before
e3610989). If you want to apply configuration to all stacks, `--all`
can be passed to any configuration command.
This change introduces an abstraction for a `backend` which manages
the implementation of some CLI commands. As part of defining the
interface, we introduce a new local backend implementation that just
uses data local to the machine.
This will let us share argument parsing and some display information
between the local case and the pulumi.com case in the CLI. We can
continue to refine this interface over time (e.g. today we have the
implementation of the Destroy/Update/Preview actually writing output
but instead they should be returning strongly typed data that the CLI
knows how to display and is unified across Pulumi.com deploys and
local deploys).
But this is a good first step.
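For illustration, the `backend` abstraction might look something like the following Go sketch; the method set is a guess, not the actual interface:

```go
package backend

// Backend abstracts where stacks live and who performs operations on them,
// so the CLI can share argument parsing and display logic between the local
// implementation and the pulumi.com implementation.
type Backend interface {
	Name() string
	CreateStack(stackName string) error
	RemoveStack(stackName string) (bool, error)
	ListStacks() ([]string, error)
	Preview(stackName string, opts UpdateOptions) error
	Update(stackName string, opts UpdateOptions) error
	Destroy(stackName string, opts UpdateOptions) error
}

// UpdateOptions is a placeholder for engine options shared by operations.
type UpdateOptions struct {
	DryRun   bool
	Parallel int
}
```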
Add `Name` (Pulumi project name) and `Runtime` (Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`, as they are now required.
The long story is that as part of the PPC enabling destroy operations, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.
This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
This PR removes three command line parameters from Cloud-enabled Pulumi commands (`update` and `stack init`). Previously we required users to pass in `--organization`, `--repository`, and `--project`. But with the recent "Pulumi repository" changes, we can now get that from the Pulumi workspace. And the project name from the `Pulumi.yaml`.
This PR also fixes a bug that blocked the Cloud-enabled CLI path: `update` was getting the stack name via `explicitOrCurrent`, but that fails if the current stack (e.g. the one just initialized in the cloud) doesn't exist on the local disk.
As for better handling of "current stack" and Cloud-enabled commands, https://github.com/pulumi/pulumi/pull/493 and the PR to enable `stack select`, `stack rm`, and `stack ls` do a better job of handling situations like this.
The last status message from the PPC doesn't include a newline. So the `pulumi` CLI renders any error messages on the same line as the last diagnostic message. Not ideal.
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.
When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.
We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track is "owner" and "name", which map to information
we use on pulumi.com.
When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
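A simplified sketch of deriving owner and name from a GitHub-style remote URL, handling only the two common URL shapes (the real detection logic may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// ownerAndName extracts "owner" and "name" from remotes like
// git@github.com:pulumi/pulumi.git or https://github.com/pulumi/pulumi.git.
func ownerAndName(remote string) (string, string, bool) {
	remote = strings.TrimSuffix(remote, ".git")
	var path string
	switch {
	case strings.HasPrefix(remote, "git@github.com:"):
		path = strings.TrimPrefix(remote, "git@github.com:")
	case strings.HasPrefix(remote, "https://github.com/"):
		path = strings.TrimPrefix(remote, "https://github.com/")
	default:
		return "", "", false
	}
	parts := strings.Split(path, "/")
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	owner, name, ok := ownerAndName("git@github.com:pulumi/pulumi.git")
	fmt.Println(owner, name, ok) // pulumi pulumi true
}
```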
- `pulumi config ls` now does not prompt for a passphrase if there are
secrets, instead ******'s are shown. `--show-secrets` can be passed
to force decryption. The behavior of `pulumi config ls <key>` is
unchanged: if the key is secure, we will prompt for a passphrase.
- `pulumi config secret <key>` now prompts for the passphrase and verifies
it before asking for the secret value.
Fixes #465
This PR enables `pulumi stack init` to work against the Pulumi Cloud. Of note, I am using the approach described in https://github.com/pulumi/pulumi-service/issues/240. The command takes an optional `--cloud` parameter, but otherwise will use the "default cloud" for the target organization.
I also went back and revised `pulumi update` to do this as well. (Removing the `--cloud` parameter.)
Note that neither of the commands will work against `pulumi-service` head, as they require some API refactorings I'm working on right now.
Fixes panic when the output from the PPC doesn't have a "text" property. (Still need to unify the way the PPC emits event data with the types that the Pulumi codebase uses internally.)
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).
As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.
We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow.
Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.
The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).
Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).
In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration associated
with it.
Secure values show up as unencrypted config values inside the language
hosts and providers.
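For illustration, a minimal sketch of the passphrase-derived-key approach; the specific algorithm choices here (PBKDF2 and AES-GCM) are assumptions, not necessarily what the CLI uses:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

// encrypt derives a key from the passphrase and seals the plaintext;
// the salt and nonce must be stored alongside the ciphertext.
func encrypt(passphrase, plaintext string) (salt, nonce, ciphertext []byte, err error) {
	salt = make([]byte, 16)
	if _, err = rand.Read(salt); err != nil {
		return
	}
	key := pbkdf2.Key([]byte(passphrase), salt, 100000, 32, sha256.New)
	block, err := aes.NewCipher(key)
	if err != nil {
		return
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return
	}
	ciphertext = gcm.Seal(nil, nonce, []byte(plaintext), nil)
	return
}

func main() {
	_, _, ct, err := encrypt("correct horse battery staple", "my-secret-value")
	fmt.Println(len(ct) > 0, err) // true <nil>
}
```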
When `PULUMI_RETAIN_CHECKPOINTS` is set in the environment, also write
the checkpoint file to <stack-name>.<ext>.<timestamp>.
This ensures we have historical information about every snapshot, which
would aid in debugging issues like #451. We set this to true for our
integration tests.
Fixes #453
The event diagnostic goroutines could error out sometimes during
early program exits, due to a race between the goroutine writing to
the channel and the early exiting goroutine which closed the channel.
This change stops closing the channels entirely on the abrupt exit
paths, since it's not necessary and we want to exit immediately.
This improves a few things about assets:
* Compute and store hashes as input properties, so that changes on
disk are recognized and trigger updates (pulumi/pulumi#153).
* Issue explicit and prompt diagnostics when an asset is missing or
of an unexpected kind, rather than failing late (pulumi/pulumi#156).
* Permit raw directories to be passed as archives, in addition to
archive formats like tar, zip, etc. (pulumi/pulumi#240).
* Permit not only assets as elements of an archive's member list, but
also other archives themselves (pulumi/pulumi#280).
The existing `SnapshotProvider` interface does not sufficiently lend
itself to reliable persistence of snapshot data. For example, consider
the following:
- The deployment engine creates a resource
- The snapshot provider fails to save the updated snapshot
In this scenario, we have no mechanism by which we can discover that the
existing snapshot (if any) does not reflect the actual state of the
resources managed by the stack, and future updates may operate
incorrectly. To address this, these changes split snapshotting into two
phases: the `Begin` phase and the `End` phase. A provider that needs to
be robust against the scenario described above (or any other scenario
that allows for a mutation to the state of the stack that is not
persisted) can use the `Begin` phase to persist the fact that there are
outstanding mutations to the stack. It would then use the `End` phase to
persist the updated snapshot and indicate that the mutation is no longer
outstanding. These steps are somewhat analogous to the prepare and
commit phases of two-phase commit.
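A hypothetical Go shape for the two-phase provider described above (method names follow the prose; the real interface likely differs):

```go
package deploy

// Snapshot is a placeholder for the serialized state of a stack.
type Snapshot struct {
	Resources []string
}

// SnapshotMutation represents one outstanding change to the stack's state.
type SnapshotMutation interface {
	// End persists the updated snapshot and marks the mutation complete.
	End(snapshot *Snapshot) error
}

// SnapshotProvider persists stack state using a begin/end protocol, so a
// crash between the two phases leaves evidence of an unfinished mutation.
type SnapshotProvider interface {
	// Begin records that a mutation is outstanding before the engine
	// touches any real resources.
	Begin() (SnapshotMutation, error)
}
```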
I sometimes revert to some ancient version of the system, and
I figure with so many other tools using different verbs here, it's
worth at least improving our help text with the SuggestFors.
The change to use a Goroutine for pumping output causes a hang
when an error occurs. This is because we unconditionally block
on the <-done channel, even though the failure means the done
will actually never occur. This changes the logic to only wait
on the channel if we successfully began the operation in question.
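The shape of the fix, sketched in Go with hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
)

// run only waits for the pump goroutine when the operation actually started;
// otherwise <-done would block forever because nothing will ever close it.
func run(begin func(done chan struct{}) error) error {
	done := make(chan struct{})
	if err := begin(done); err != nil {
		return err // do not wait: the pump was never started
	}
	<-done
	return nil
}

func main() {
	err := run(func(done chan struct{}) error {
		return errors.New("failed to begin operation")
	})
	fmt.Println(err) // failed to begin operation
}
```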
Previously, config information was stored per stack. With this change,
we now allow config values which apply to every stack a program may
target.
When passed without the `-s <stack>` argument, `pulumi config`
operates on the "global" configuration. Stack specific information can
be modified by passing an explicit stack.
Stack specific configuration overwrites global configuration.
Consider the following Pulumi.yaml:
```
name: hello-world
runtime: nodejs
description: a hello world program
config:
  hello-world:config:message: Hello, from Pulumi
stacks:
  production:
    config:
      hello-world:config:message: Hello, from Production
```
This program contains a single configuration value,
"hello-world:config:message" which has the value "Hello, from Pulumi"
when the program is activated into any stack except for "production"
where the value is "Hello, from Production".
Instead of having information stored in the checkpoint file, save it
in the Pulumi.yaml file. We introduce a new section `stacks` which
holds information specific to a stack.
Next, we'll support adding configuration information that applies
to *all* stacks for a Program and allow the stack specific config to
overwrite or augment it.
This PR adds `login` and `logout` commands to the `pulumi` CLI.
Rather than requiring a user name and password like before, we instead require users to login with GitHub credentials on the Pulumi Console website. (You can do this now via https://beta.moolumi.io.) Once there, the account page will show you an "access token" you can use to authenticate against the CLI.
Upon successful login, the user's credentials will be stored in `~/.pulumi/credentials.json`. This credentials file will be automatically read with the credentials added to every call to `PulumiRESTCall`.
We use `git describe --tags` to construct a version number based on
the current version tag.
The properties VERSION (when using make) and Version (when using
MSBuild) can be explicitly set to use a fixed value instead.
Fixes #13
Previously, you had to fully qualify configuration values (e.g.
example:config:message). As a convenience, let's support adding
configuration values where the key is not a fully qualified module
member. In this case, we'll treat the key as if
`<program-name>:config:` had been prepended to it.
In addition, when we print config, shorten keys of the form
`<program-name>:config:<key-name>` to `<key-name>`.
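A tiny sketch of the qualification rule, using the token format from the examples above (the helper is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// qualifyKey treats bare keys as shorthand for "<program-name>:config:<key>".
func qualifyKey(programName, key string) string {
	if strings.Contains(key, ":") {
		return key // already fully qualified
	}
	return programName + ":config:" + key
}

func main() {
	fmt.Println(qualifyKey("example", "message"))                // example:config:message
	fmt.Println(qualifyKey("example", "example:config:message")) // example:config:message
}
```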
I've updated one integration test to use the new syntax and left the
other as is to ensure both continue to work.
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.
From a user's point of view, there are a few changes:
1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).
On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder).
Our checkpoint file is now in a shape such that Go's built-in
marshalling and unmarshalling is sufficient, so we can remove the code
we had on our decode path to do extra validation.
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to an another application later).
While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.
As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
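A sketch of that consumption contract; the event type and its fields are assumptions:

```go
package main

import "fmt"

// Event is a stand-in for engine.Event; real events carry typed payloads.
type Event struct {
	Kind    string
	Message string
}

// update simulates an engine operation: it writes events to the channel
// during the operation and closes it to signify completion.
func update(events chan<- Event) {
	defer close(events)
	events <- Event{Kind: "stdout", Message: "creating resource"}
	events <- Event{Kind: "summary", Message: "1 resource created"}
}

func main() {
	events := make(chan Event)
	go update(events)
	// The caller must drain the channel until it is closed.
	for e := range events {
		fmt.Printf("[%s] %s\n", e.Kind, e.Message)
	}
}
```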
The events we emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
Internally we end up using flag to parse our command lines because of
our dependency on glog. When flag does not know about -C, if someone
passes -C before the sub command name (as is common for folks coming
from Git) the flag package terminates the program because -C was not
defined. So just teach flag that -C exists until we tackle #301 and rid
ourselves of glog.
Previously, you could pass an explicit path to a Pulumi program when
running preview or update and the tool would use that program when
planning or deploying, but continue to write state in the cwd. While
being able to operate on a specific package without having to cd all
over the place is nice, this specific implementation was a little
scary because it made it easier to run two different programs with the
same local state (e.g config and checkpoints) which would lead to
surprising results.
Let's move to a model that some tools have where you can pass a
working directory and the tool chdir's to that directory before
running. This way any local state that is stored will be stored
relative to the package we are operating on instead of whatever the
current working directory is.
Fixes #398
Previously, the engine was concerned with maintaining information about
the currently active environment. Now, the CLI is in charge of
this. As part of this change, the engine can now assume that every
environment has a non-empty name (and I've added asserts on the
entrypoints of the engine API to ensure that any consumer of the
engine passes a non-empty environment name).
The CLI is now responsible for actually displaying information and the
engine is only concerned with getting the configuration. As part of
this change, I've removed the display a single configuration value API
from the engine. It can now be done in terms of getting all the config
for an environment and selecting the value the user is interested in.
Previously the engine was concerned with displaying information about
the environment. Now the engine returns an environment info object
which the CLI uses to display environment information.
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism. We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized. We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it. And in any case, the --parallel=P capability will be useful.
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources. We have an extremely strong
desire to resort to using this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case. Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.
This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106. I also think there's a good chance we will flip
the default, so that serial execution is the default, so that developers
who don't benefit from the parallelism don't need to worry about dependsOn
in awkward ways. This tends to be the way most tools (like Make) operate.
This fixes pulumi/pulumi-fabric#335.
As explained in pulumi/pulumi-fabric#293, we were a little ad-hoc in
how configuration was "applied" to resource providers.
In fact, config wasn't ever communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and that's that, and where generally
speaking resource providers never communicate directly with the language
runtime, we need to take a different approach.
As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables filtered to
just that provider's package. This fixes pulumi/pulumi#293.
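A rough sketch of filtering configuration down to one provider's package before calling its Configure RPC (the helper and token shapes are illustrative, mirroring the config examples elsewhere in this log):

```go
package main

import (
	"fmt"
	"strings"
)

// configForPackage keeps only the variables whose tokens belong to pkg,
// e.g. "aws:config:region" belongs to package "aws".
func configForPackage(all map[string]string, pkg string) map[string]string {
	subset := map[string]string{}
	for token, value := range all {
		if strings.HasPrefix(token, pkg+":") {
			subset[token] = value
		}
	}
	return subset
}

func main() {
	all := map[string]string{
		"aws:config:region":          "us-west-2",
		"hello-world:config:message": "Hello, from Pulumi",
	}
	fmt.Println(configForPackage(all, "aws")) // map[aws:config:region:us-west-2]
}
```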
The existing implementation of the interface (backed by the file
system) has moved into cmd/lumi. The deployment service will start to
provide its own version.
`saveEnv` had a flag which would prevent an environment from being
overwritten if it already existed, which was only used by `lumi env
init`. Refactor the code so the check is done inside `lumi` instead of
against this API. We don't need this functionality for the service and
so requiring support for this at the API boundary for environments
feels like a bad idea.
We'd like to abstract out environment CRUD operations and I'd prefer
not to have to bake in the concept of a file-name-like thing in the
abstraction. Since we were not really using this feature many places,
let's just get rid of it.
This refactors the engine so all of the APIs on it are instance
methods on the type instead of raw methods that float around and use
data from a global engine.
A mechanical change: we remove the global `E`, make anything in
pkg/engine that interacted with it an instance method, and then deal
with the fallout.
Previously, we would throw raw args arrays across the interface and the
engine would do some additional parsing. Clean this up so we don't do
that and all the parsing stays in `lumi`.
The plan is to move all the logic that actually implements the
commands exposed by `lumi` into a helper type that can be used by both
`lumi` and the Pulumi Service. This is step one, which was to decouple
the implementation of these commands from the command-line parsing
and the CLI surface through which they are presented to the user.
We were passing along the entire args array to the implementation of
most commands, but the only place this was used was to pass one piece
of information to the compiler we create in one case. Let's get
explicit about the stuff we share from the CLI layer into the
implementation of the commands and make this stuff well typed instead
of a bag of strings.
Just use the args local directly instead of using the reference from
envCmdInfo. Doing this will make it easier to remove the Args field of
envCmdInfo, which I want to refactor to be more specific to the
boundary between the CLI and Planning/Deploying.
This changes a few things in the CLI, mostly just prettying it up:
* Label all steps more clearly with the kind of step. Also
unify the way we present this during planning and deployment.
* Summarize the changes that *did not* get made just as clearly
as those that did. In other words, stuff like this:
    info: 2 resources changed:
        +1 resource created
        -1 resource deleted
        5 resources unchanged

and

    info: no resources required
        5 resources unchanged
* Always print output properties when they are pertinent.
This includes creates, replacements, and updates.
* Show replacement creates and deletes very distinctly. The
create parts show up minty green and the delete parts show up
rosy red. These are the "physical" steps, compared to the
"logical" step of replacement (which remains marigold).
I still don't love where we are here. The asymmetry between
planning and deployment bugs me, and could be surprising.
("Hey, my deploy doesn't look like my plan!") I don't know
what developers will want to see here and I feel like in
general we are spewing far too much into the CLI to make it
even useful for anything but diagnosing failures afterwards.
I propose that we should do a deep dive on this during the
CLI epic, pulumi/pulumi-service#2.
This resolves pulumi/pulumi-fabric#305.
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This changes the RPC interfaces between Lumi and the providers ever so
slightly, so that we can track default properties explicitly. This
is required to perform accurate diffing between inputs provided by
the developer, inputs provided by the system, and outputs. This is
particularly important for default values that may be indeterminate,
such as those we use in the bridge to auto-generate unique IDs.
Otherwise, we fail to reapply defaults correctly, and trick the
provider into thinking that properties changed when they did not.
This is a small step towards pulumi/lumi#306, in which we will defer
even more responsibility for diffing semantics to the providers.
Report an error when Lumi runtime compilation fails.
Also adds a reusable install_release.sh script to use
for installing Lumi package releases, plus expansion
of symlinks in package Makefiles.
This change brings the same typed serialization we use for RPC
to the serialization of deployments. This ensures that we get
repeatable diffs from one deployment to the next.
This change introduces a --debug option to the plan, deploy, and
destroy commands. Unlike --logtostderr, which merely hooks into the
copious Glogging that we perform (and is therefore meant for developers
of the tools themselves and not end users), --debug hooks into the
user-facing debug stream. This now includes any debug messages coming
from the resource providers as they perform their tasks.
We would like to allow developers to use async/await
on the inside (Node.js) of Lumi programs.
We now support (don't error on) usage of async/await
inside runtime callbacks in Lumi programs. If await is
used during deployment, it will trigger an error.
Also adds support for try/catch in LumiJS, as these are
used more heavily in async/await code.
Since we target Node.js environments without native support
for async/await, we also emit runtime helpers to support TS
transpilation of async/await for Node.js pre-7.6.
This adds a few missing closes for the plugin host/context. This
should fix pulumi/lumi#261. Eventually when we have more robust
nightly test options, and want to spend the time, we should think
about doing more rigorous stress testing that kills processes at
inopportune times and guarantees we don't leak. I've filed
pulumi/lumi#263 to do that.
This change fixes a few things:
* Most importantly, we need to place a leading "." in the paths
to Gometalinter, otherwise some sub-linters just silently skip
the directory altogether. errcheck is one such linter, which
is a very important one!
* Use an explicit Gometalinter.json file to configure the various
settings. This flips on a few additional linters that aren't
on by default (like line length checking). Sadly, a few that
I'd like to enable take waaaay too much time, so in the future
we may consider a nightly job (this includes code similarity,
unused parameters, unused functions, and others that generally
require global analysis).
* Now that we're running more, however, linting takes a while!
The core Lumi project now takes 26 seconds to lint on my laptop.
That's not terrible, but it's long enough that we don't want to
do the silly "run them twice" thing our Makefiles were previously
doing. Instead, we shall deploy some $$($${PIPESTATUS[1]}-1))-fu
to rely on the fact that grep returns 1 on "zero lines".
* Finally, fix the many issues that this turned up.
I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
We were not propagating the error from `deployLatest` through
to the CLI error result. Despite our recent efforts to integrate
gometalinter, there were also several additional similar cases of
ignored error results reported by `errcheck`. Not yet clear why
these are not being reported via gometalinter.
Fixes #262.
After 233c5a8 landed, I noticed there are a few things to be fixed up:
* Run gometalinter in all the right places. We need to run both in
lint and lint_quiet targets. I've also cleaned up some of the logic
around what to suppress so there's less repetition.
* We currently @ meaningful commands, which is unfortunate, since it
makes debugging Makefiles tough (especially when looking at CI build
logs). Going forward, we should only use @ for meaningless commands,
like @echo.
* The AWS project wasn't actually running tslint, because it needs to
say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
The current script of `tslint lib/aws/pack/...` wasn't actually
running lint, hence we missed a lot of AWS lint issues.
* Fix up the issues that these fixes uncovered. Mostly err shadowing.
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface. In summary:
* Instead of using the NullSource for destructions -- which
doesn't hook up an interpreter and so any reads of configuration
variables will fail -- we will enlighten the EvalSource to know
how to orchestrate destruction interpretation. The primary
difference is that we don't actually run the code, but *we do*
perform all of the necessary configuration and variable init.
* Associate the active interpreter with the plugin context as
we are executing, so that the host object can actually read the
state from the heap as requested to do so by attached plugins.
* Rename anything "engine" related to use the term "host"; this
avoids unnecessarily introducing new terminology.
* Add a new pkg/resource/provider/ package where we can begin
consolidating helper functionality for resource providers.
Right now, this includes a wrapper interface atop the gRPC
machinery necessary to contact the host, in addition to a
Main function that hides some boilerplate entrypoint code.
* Add a rpcutil.IsBenignCloseErr routine to let us ignore
"benign" gRPC errors that are knowingly returned at shutdown.
This commit completes pulumi/lumi#117.
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.
The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible. We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
let group = aws.ec2.SecurityGroup.get(
"arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
let instance = new aws.ec2.Instance(..., {
securityGroups: [ group ],
});
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
This change implements showing a summary of the current environment.
All you need to do is run
$ lumi env
and the current environment's information will be printed.
This makes it convenient to grab resource information that might be
required, for instance, to correlate with logs (e.g., lambda ARNs).
Eventually, as per pulumi/lumi#184, we want to print details about
all of the resources too.
Adds an integration test that runs the following commands on the
AWS webserver example, failing if any command returns an error
code:
* lumijs
* lumi env init
* lumi config
* lumi plan
* lumi deploy
* lumi destroy
* lumi env rm
Also ensures that plan and deploy failures propagate errors through
to error codes at the CLI.
The recent change to run the interpreter and planner on separate goroutines
created the need to perform rendezvous-style synchronization between them.
Although the case of an invoked function properly tore down the synchronization
by communicating the error, we seldom directly invoke functions for JavaScript
programs because of the way module entrypoint code ends up in initializers.
This requires that we propagate errors correctly out of module and class
initializers, in the standard way, so that the unwind makes its way to the top.
This fixes pulumi/lumi#246.
We recently changed the Resource base type to have no constructor,
rather than a manual empty constructor. This ought to work just fine.
The LumiJS compiler indeed generates a constructor, however, it is
missing a body and when the interpreter tries to invoke it, we crash
with a nil reference panic. The runtime actually tolerates missing
constructors entirely, although the way LumiJS binds super calls
doesn't tolerate the missing base constructor. This change simply
generates such constructors in LumiJS with empty bodies.
In addition, I've added an error that will catch the empty body
problem during binding, since technically speaking, all functions
must have bodies. (Our runtime happens to support the notion of
"abstract", however, so we only fire the error on concrete functions.)
When a class has no constructor, we automatically generate an empty
constructor in the Lumipack.
This allows us to adhere to tslint rule suggesting leaving off empty
constructors with default signatures.
This change updates the ID/output propagation logic to properly handle
the case of replacements, in addition to accurately conveying the fact
that an update may change the values of output properties (but not the ID).
Also fixes a formatting issue with the replacement diffing displays.
This change introduces an OpSame planning step. The reason we need
this is so that we can apply the necessary output properties, including
the ID, even as we are simply walking the plan (i.e., when we aren't
actually performing a deployment). This ensures that the object state
evolves as required to let reads of output properties propagate in the
ways necessary to reproduce past executions of the program.
* Assert new things in new places.
* Log more interesting tidbits during evaluation.
* Invoke the OnStart hook before triggering initializers.
* Tolerate nil prev snapshots during deletion calculation.
* Handle and serialize missing resource IDs as output props.
* Return "done" flag from Rendezvous.Meet.
This change refactors a number of aspects of the CLI's treatment of
steps, in line with the new scheme, and a number of other miscellaneous
and minor fixes. It also regenerates all RPC code impacted by recent renames.
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.
The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
This change, part of pulumi/lumi#90, overhauls quite a bit of the
core resource, planning, environments, and related areas.
The biggest amount of movement comes from the splitting of pkg/resource
into multiple sub-packages. This results in:
- pkg/resource: just the core resource data structures.
- pkg/resource/deployment: all planning and deployment logic.
- pkg/resource/environment: all environment, configuration, and
serialized checkpoint structures and logic.
- pkg/resource/plugin: all dynamically loaded analyzer and
provider logic, including the actual loading and RPC mechanisms.
This also splits the resource abstraction up. We now have:
- resource.Resource: a shared interface.
- resource.Object: a resource that is connected to a live object
that will periodically observe mutations due to ongoing
evaluation of computations. Snapshots of its state may be
taken; however, this is purely a "pre-planning" abstraction.
- resource.State: a snapshot of a resource's state that is frozen.
In other words, it is no longer connected to a live object.
This is what will store provider outputs (ID and properties),
and is what may be serialized into a deployment record.
The branch is in a half-baked state as of this change; more changes
are to come...
For lambdas which will execute at runtime,
we want to allow them to reference Node.js
global variables, like `console`.
This change makes Lumijs generated IL
incrementally more dynamic by preferring to
generate `TryLoadDynamic` over `LoadLocation`
for references to global variables (except for
references to imports).
Also introduces `console.log` in LumiJS, though
it is not yet attached to a Lumi global environment.
Fixes #174.
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment). It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
During this, I took the opportunity to also clean up a few other things
that had been bugging me. Namely, the presence of `mapper.Object` was
always error prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI. Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type. Even better, we
no longer require resource providers to deal with the mapper
infrastructure. Instead, the `Check` function can simply return an
array of errors. It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.
As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.
This completes pulumi/lumi#183.
This changes the resource model to persist input and output properties
distinctly, so that when we diff changes, we only do so on the programmer-
specified input properties. This eliminates problems when the outputs
differ slightly; e.g., when the provider normalizes inputs, adds its own
values, or fails to produce new values that match the inputs.
This change simultaneously makes progress on pulumi/lumi#90, by beginning
tracking the resource objects implicated in a computed property's value.
I believe this fixes both #189 and #198.
This change eliminates the use of nonstandard tools in our build:
* `go test` automatically uses `GOMAXPROCS` for its parallelism
setting. In modern Go, this is now set to the number of processors.
So, there is no need to set it explicitly using `nproc`.
* We can avoid `realpath` in the `lumijs` executable by making it
a Node.js file and using a relative require import of main.
* We can avoid `realpath` in our Makefiles by just using `pwd`.
* Makeify more; add a "full build" target
This change uses make for more of our tree. Namely, the AWS provider
and LumiJS compilers each now use make to build and/or install them.
Not only does this bring about some consistency to how we build and
test things, but also made it easy to add a full build target:
$ make all
This target will build, test, and install the core Go tools, the LumiJS
compiler, and the AWS provider, in that order.
Each can be made in isolation, however, which ensures that the inner
loop for those is fast and so that, when it comes to finishing
pulumi/lumi#147, we can easily split them out and make from the top.
There are a few things that annoyed me about the way our CLI works with
directories when loading packages. For example, `lumi pack info some/pack/dir/`
never worked correctly. This is unfortunate when scripting commands.
This change fixes the workspace detection logic to handle these cases.
This is a minor refactoring to introduce a ProviderHost interface
that is associated with the context and can be swapped in and out for
custom plugin behavior. This is required to write tests that mock
certain aspects, like loading packages from the filesystem.
In theory, this change incurs zero behavioral changes.
This change fixes up a few things so that updates correctly deal
with output properties. This involves a few things:
1) All outputs stored on the pre snapshot need to get propagated
to the post snapshot during planning at various points. This
ensures that the diffing logic doesn't need to be special cased
everywhere, including both the Lumi and the provider sides.
2) Names are changed to "input" properties (using a new `lumi` tag
option, `in`). These are properties that providers are expected
to know nothing about, which we must treat with care during diffs.
3) We read back properties, via Get, after doing an Update just like
we do after performing a Create. This ensures that if an update
has a cascading impact on other properties, it will be detected.
4) Inspecting a change, prior to updating, must be done using the
computed property set instead of the real one. This is to avoid
mutating the resource objects ahead of actually applying a plan,
which would be wrong and misleading.
This change skips printing output<T> properties as we perform a
deployment, instead showing the real values inline after the resource
has been created. (output<T> is still shown during planning, of course.)
The change to flow logging to plugins is nice, however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors. After this change, we will only flow if
--logflow is passed, e.g. as in
$ lumi --logtostderr --logflow -v=9 deploy ...
This change prepares for integrating more planning and deployment logic
closer to the runtime itself. For historical reasons, we ended up with these
in the env.go file which really has nothing to do with deployments anymore.
This change introduces the notion of a computed versus an output
property on resources. Technically, output is a subset of computed,
however it is a special kind that we want to treat differently during
the evaluation of a deployment plan. Specifically:
* An output property is any property that is populated by the resource
provider, not code running in the Lumi type system. Because these
values aren't available during planning -- since we have not yet
performed the deployment operations -- they will be latent values in
our runtime and generally missing at the time of a plan. This is no
problem and we just want to avoid marshaling them in inopportune places.
* A computed property, on the other hand, is a different beast altogether.
Although it is true that one of these is missing a value -- by virtue of the fact
that they too are latent values, bottoming out in some manner on an
output property -- they will appear in serializable input positions.
Not only must we treat them differently during the RPC handshake and
in the resource providers, but we also want to guarantee they are gone
by the time we perform any CRUD operations on a resource. They are
purely a planning-time-only construct.
We need to smuggle metadata from the resource IDL all the way through
to the runtime, so that it knows which things are output properties. In
order to do this, we'll leverage decorators and the support for serializing
them as attributes. This change adds support for the various kinds
(class, property, method, and parameter), in addition to test cases.
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.
In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways. Namely,
operations against Latent<T>s generally produce new Latent<U>s.
During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values. An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange. As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values. My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.
For now, using an unresolved Latent<T> in a conditional will lead
to an error. See pulumi/lumi#67. Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support. That is a missing 1/3.
Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values. Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
Adds support for global secondary indexes on DynamoDB Tables.
Also adds a HashSet API to the AWS provider library. This handles part of #178,
providing a standard way for AWS provider implementations to compute set-based
diffs. This new API is used in both aws.dynamodb.Table and aws.elasticbeanstalk.Environment
currently.
Resolves #137.
This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.
A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.
LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.
Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
This change keeps the lumi prefix on our CLI tools.
As @lukehoban pointed out in person, as soon as we do pulumi/coconut#98,
most people (other than compiler authors themselves) won't actually be
typing the commands. And, furthermore, the commands aren't all that bad.
Eventually I assume we'll want something like `lumi-js`, or
`lumi-js-compiler`, so that binaries are discovered dynamically in a way
that is extensible for future languages. We can tackle this during #98.
This change implements property accessors (getters and setters).
The approach is fairly basic, but is heavily inspired by the ECMAScript5
approach of attaching a getter/setter to any property slot (even if we don't
yet fully exploit this capability). The evaluator then needs to track and
utilize the appropriate accessor functions when loading locations.
This change includes CocoJS support and makes a dent in pulumi/coconut#66.
This change, part of pulumi/coconut#62, adds support for ECMAScript
local functions. This leverages the recent support for lambdas.
The change also adds some new test cases for the various forms.
Here are some examples of supported forms:
function outer() {
    // simple named inner function:
    function inner1() { ... };
    // anonymous inner function (just a lambda):
    let inner2 = function() { ... };
    // named and bound inner function:
    let inner3 = function inner4() { ... };
}
These merely compile into lambdas that have been bound to local
variables with the appropriate names.
The previous shape of SequenceExpression only permitted expressions
in the sequence. This is pretty common in most ILs; however, it usually
leads to complicated manual spilling in the event that a statement is needed.
This is often necessary when, for example, a compiler is deeply nested in some
expression production, and then realizes the code expansion requires a
statement (e.g., maybe a new local variable must be declared, etc).
Instead of requiring complicated code-gen, this change permits SequenceExpression
to contain an arbitrary mixture of expression/statement prelude nodes, terminating
with a single, final Expression which yields the actual expression value. The
runtime bears the burden of implementing this which, frankly, is pretty trivial.
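As a rough illustration (the node shapes below are invented for the example,
not the actual CocoIL schema), a compiler that suddenly needs a temporary
local deep inside an expression can now emit something like:
    let node = {
        kind: "SequenceExpression",
        prelude: [
            { kind: "LocalVariableDeclaration", name: "tmp" },       // a statement, now legal
            { kind: "ExpressionStatement", expression: "tmp = f()" }, // stand-in for a real node
        ],
        value: { kind: "LoadLocationExpression", name: "tmp" },      // yields the final value
    };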
This change recognizes and emits lambdas correctly in CocoJS (as part
of pulumi/coconut#62). The existing CocoIL representation for lambdas
worked just fine for functions, lambdas, and local functions. There
still isn't runtime support, but that comes next.
I've tripped over pulumi/coconut#141 a few times now, particularly with
the sort of dynamic payloads required when creating lambdas and API gateways.
This change implements support for computed property initializers.
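For example, the following sort of dynamic payload now compiles (the names
are illustrative):
    let method = "GET";
    let path = "/books";
    let routes = {
        [method + " " + path]: "listBooks",   // the property key is computed at runtime
    };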
This change correctly implements package/module resolution in CIDLC.
For now, this only works for intra-package imports, which is sufficient
at the moment. Eventually we will need to support inter-package imports
as well (see pulumi/coconut#138).
* Use --out-rpc, rather than --out-provider, since rpc/ is a peer to provider/.
* Use strongly typed tokens in more places.
* Append "rpc" to the generated RPC package names to avoid conflicts.
* Change the Check function to return []mapper.FieldError, rather than
mapper.DecodeError, to make the common "no errors" case easier (and to eliminate
boilerplate resulting from needing to conditionally construct a mapper.DecodeError).
* Rename the diffs argument to just diff, matching the existing convention.
* Automatically detect changes to "replaces" properties in the PreviewUpdate
function. This eliminates tons of boilerplate in the providers and handles the
90% common case for resource recreation. It's still possible to override the
PreviewUpdate logic, of course, in case there is more sophisticated recreation
logic necessary than just whether a property changed or not.
* Add some comments on some generated types.
* Generate property constants for the names as they will appear in weakly typed
property bags. Although the new RPC interfaces are almost entirely strongly
typed, in the event that diffs must be inspected, this often devolves into using
maps and so on. It's much nicer to say `if diff.Changed(SecurityGroup_Description)`
than `if diff.Changed("description")` (and catches more errors at compile-time).
* Fix resource ID generation logic to properly fetch the Underlying() type on
named types (this would sometimes miss resources during property analysis, emitting
for example `*VPC` instead of `*resource.ID`).
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.
I've been kicking the tires with this locally enough to checkpoint the
current version. There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
This change introduces TryLoadDynamicExpression. This is similar to
the existing LoadDynamicExpression opcode, except that it will return
null in response to a missing member (versus the default of raising
an exception). This is to enable languages like JavaScript, where such
accesses always yield undefined/null, to encode these operations properly,
while still catering to languages like Python (which throw exceptions).
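For example, plain JavaScript member access on a missing property quietly
yields undefined, which is exactly the behavior TryLoadDynamicExpression
lets the compiler encode:
    let obj = { present: 1 };
    console.log(obj.present);   // 1
    console.log(obj.missing);   // undefined -- no exception, unlike Python's AttributeError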
This change introduces decorator support for CocoJS and the corresponding
IL/AST changes to store them on definition nodes. Nothing consumes these
at the moment, however, I am looking at leveraging this to indicate that
certain program fragments are "code" and should be serialized specially
(in support of Functions-as-lambdas).
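A sketch of the sort of annotation this now parses and stores on definition
nodes (the @code decorator below is hypothetical, and, as noted, nothing
consumes decorators yet):
    // Hypothetical marker decorator; purely illustrative.
    function code(target, key, descriptor) {
        return descriptor;
    }

    class Handlers {
        @code
        onEvent(payload) {
            return payload.id;
        }
    }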
This uses %q to quote property values when printing them. This ensures
that control characters are escaped (like \n), in addition to replacing any
unprintable characters with the appropriate escape sequence. Both show up
nicer in the output for planning commands, etc.
This change introduces the scaffolding for a new cmd/cocogo tool,
as part of pulumi/coconut#133. The idea here is to do some very
rudimentary code-gen on a subset of Go, to ease the task of writing
providers. The README describes this in more detail. Eventually
this will presumably expand to being a peer language to CocoPy,
etc., in that real code can be written, but for now it's mostly IDL.
At the moment, the tool really doesn't do anything useful, other
than loading up, parsing, semantically validating, and spewing
some information about the Go packages passed at the command line.
This change restructures the overall structure for commands so that
all top-level tools are in the cmd/ directory, alongside the primary
coco command. This is more "idiomatic Go" in its layout, and makes
room for additional command line tools (like cocogo for IDL).
The old model for imports was to use top-level declarations on the
enclosing module itself. This was a laudable attempt to simplify
matters, but just doesn't work.
For one, the order of initialization doesn't precisely correspond
to the imports as they appear in the source code. This could incur
some weird module initialization problems that lead to differing
behavior between a language and its Coconut variant.
But more pressingly, as we work on CocoPy support, it doesn't give
us an opportunity to dynamically bind names in a correct way. For
example, "import aws" now needs to actually translate into a variable
declaration and assignment of sorts. Furthermore, that variable name
should be visible in the environment block in which it occurs.
This change switches imports to act like statements. For the most
part this doesn't change much compared to the old model. The common
pattern of declaring imports at the top of a file will translate to
the imports happening at the top of the module's initializer. This
has the effect of initializing the transitive closure just as it
happened previously. But it enables alternative models, like imports
inside of functions, and -- per the above -- dynamic name binding.
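Conceptually, the translation looks something like the following sketch
(the initializer and loadModule names are stand-ins, not the actual
generated code):
    function moduleInitializer(loadModule) {
        // `import aws` becomes a declaration plus an assignment; the name
        // `aws` is then visible to the rest of the module's initializer.
        let aws = loadModule("aws");
        return aws.dynamodb;   // subsequent top-level code sees the binding
    }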
This rearranges the way dynamic loads work a bit. Previously, they
required an object, and did a dynamic lookup in the object's property
map. For real dynamic loads -- of the kind Python uses, obviously,
but also ECMAScript -- we need to search the "environment".
This change searches the environment by looking first in the lexical
scope in the current function. If a variable exists, we will use it.
If that misses, we then look in the module scope. If a variable exists
there, we will use it. Otherwise, if the variable is used in a non-lval
position, a dynamic error will be raised ("name not declared"). If
an lval, however, we will lazily allocate a slot for it.
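Sloppy-mode JavaScript happens to illustrate roughly the same probe
behavior described above:
    function demo() {
        let local = 1;
        console.log(local);    // found in the function's lexical scope
        counter = 42;          // lval use of an undeclared name: a slot is lazily allocated
        console.log(counter);  // now resolves in the outer (global) scope
        console.log(missing);  // non-lval use of an undeclared name: a dynamic error (ReferenceError)
    }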
Note that Python doesn't use block scoping in the same way that most
languages do. This behavior is simply achieved by Python not emitting
any lexically scoped blocks other than at the function level.
This doesn't perfectly achieve the scoping behavior, because we don't
yet bind every name in a way that they can be dynamically discovered.
The two obvious cases are class names and import names. Those will be
covered in a subsequent commit.
Also note that we are getting lucky here that class static/instance
variables aren't accessible in Python or ECMAScript "ambiently" like
they are in some languages (e.g., C#, Java); as a result, we don't need
to introduce a class scope in the dynamic lookup. Some day, when we
want to support such languages, we'll need to think about how to let
languages control the environment probe order; for instance, perhaps
the LoadDynamicExpression node can have an "environment" property.
We already had the ability to manually execute a CocoPack and generate
a DOT from its object graph. However, for demo purposes we also want
to be able to generate one from the plan. This adds a --dot flag to plan.
This change eliminates the need to constantly type in the environment
name when performing major commands like configuration, planning, and
deployment. It's probably due to my age, but I keep fat-fingering
simple commands in front of investors and I am embarrassed!
In the new model, there is a notion of a "current environment", and
I have modeled it kinda sorta just like Git's notion of "current branch."
By default, the current environment is set when you `init` something.
Otherwise, there is the `coco env select <env>` command to change it.
(Running this command w/out a new <env> will show you the current one.)
The major commands `config`, `plan`, `deploy`, and `destroy` will prefer
to use the current environment, unless it is overridden by using the
--env flag. All of the `coco env <cmd> <env>` commands still require the
explicit passing of an environment which seems reasonable since they are,
after all, about manipulating environments.
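For example, after this change the following flow works:
    $ coco env select production   # make production the current environment
    $ coco deploy                  # deploys to the current environment
    $ coco deploy --env staging    # --env still overrides explicitly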
As part of this, I've overhauled the aging workspace settings cruft,
which had fallen into disrepair since the initial prototype.
This change adds a `coco plan` command which is simply a shortcut
to the more verbose `coco deploy --dry-run`. This will make demos
flow nicer and elevates planning, an important activity, to a more
prominent position. The `--dry-run` (aka `-n`) flag is still there.
This change also renames `coco env destroy` to just `coco destroy`.
This is consistent with deploy and plan being at the top-level. We
now use `coco env` purely for environment management commands (init,
config, rm, etc).
The word "fatal" makes it look like Coconut did something wrong when, in fact,
these messages are used to convey misuse of a command, argument, etc.
This change adds the ability to specify analyzers in two ways:
1) By listing them in the project file, for example:
    analyzers:
        - acmecorp/security
        - acmecorp/gitflow
2) By explicitly listing them on the CLI, as a "one off":
    $ coco deploy <env> \
        --analyzer=acmecorp/security \
        --analyzer=acmecorp/gitflow
This closes out pulumi/coconut#119.
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119. In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource. The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure their properties are correct, checking rules for style,
security, and so on). These run simultaneously with overall checking.
Analyzers are loaded as plugins just like providers are. The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.
This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
Deployments are central to the entire system; although technically
a deployment is indeed associated with an environment, the deployment
is the focus, not the environment, so it makes sense to put the
deployment command at the top-level.
Before, you'd say:
$ coco env deploy production
And now, you will say:
$ coco deploy production