Add `Name` (Pulumi project name) and `Runtime` (Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`, as they are now required.
The longer story is that, as part of enabling destroy operations in the PPC, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.
This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
This PR removes three command line parameters from Cloud-enabled Pulumi commands (`update` and `stack init`). Previously we required users to pass in `--organization`, `--repository`, and `--project`. But with the recent "Pulumi repository" changes, we can now get the organization and repository from the Pulumi workspace, and the project name from `Pulumi.yaml`.
This PR also fixes a bug that blocked the Cloud-enabled CLI path: `update` was getting the stack name via `explicitOrCurrent`, but that fails if the current stack (e.g. the one just initialized in the cloud) doesn't exist on the local disk.
As for better handling of "current stack" and Cloud-enabled commands, https://github.com/pulumi/pulumi/pull/493 and the PR to enable `stack select`, `stack rm`, and `stack ls` do a better job of handling situations like this.
The last status message from the PPC doesn't include a newline, so the `pulumi` CLI renders any error messages on the same line as the last diagnostic message. Not ideal.
This PR enables `pulumi stack init` to work against the Pulumi Cloud. Of note, I'm using the approach described in https://github.com/pulumi/pulumi-service/issues/240. The command takes an optional `--cloud` parameter, but otherwise will use the "default cloud" for the target organization.
I also went back and revised `pulumi update` to do this as well. (Removing the `--cloud` parameter.)
Note that neither of these commands will work against `pulumi-service` head, as they require some API refactorings I'm working on right now.
Fixes panic when the output from the PPC doesn't have a "text" property. (Still need to unify the way the PPC emits event data with the types that the Pulumi codebase uses internally.)
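A minimal sketch of the defensive decoding this implies, assuming the PPC event payload is unmarshaled into a `map[string]interface{}` (the names and shapes here are illustrative, not the actual event types):

```go
package main

import "fmt"

func main() {
	// Hypothetical decoded PPC event payload; note the missing "text" key.
	payload := map[string]interface{}{"severity": "info"}

	// Guard against "text" being absent (or not a string) instead of
	// asserting unconditionally, which is what panics.
	text, ok := payload["text"].(string)
	if !ok {
		text = "" // fall back to an empty message rather than panicking
	}
	fmt.Printf("event text: %q\n", text)
}
```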
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).
As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.
We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow.
Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.
The system is designed such that we should be able to support a
different decrypter (either using a local key or, in the Pulumi.com
case, some remote service in the future).
Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).
In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt a value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted, and `pulumi deploy` and friends only prompt
if you are targeting a stack that has secure configuration associated
with it.
Secure values show up as unencrypted config values inside the language
hosts and providers.
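As a rough illustration of the scheme, here's a minimal sketch of passphrase-based encryption at rest, assuming PBKDF2 key derivation and AES-GCM; the actual crypter interface, parameters, and on-disk format in the codebase may differ:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

// encryptValue derives a key from the passphrase with a random salt, then
// seals the plaintext with AES-GCM. Salt and nonce are stored alongside the
// ciphertext so the value can later be decrypted with only the passphrase.
func encryptValue(passphrase, plaintext string) (string, error) {
	salt := make([]byte, 8)
	if _, err := rand.Read(salt); err != nil {
		return "", err
	}
	key := pbkdf2.Key([]byte(passphrase), salt, 100000, 32, sha256.New)

	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	sealed := gcm.Seal(nil, nonce, []byte(plaintext), nil)

	return fmt.Sprintf("v1:%s:%s:%s",
		base64.StdEncoding.EncodeToString(salt),
		base64.StdEncoding.EncodeToString(nonce),
		base64.StdEncoding.EncodeToString(sealed)), nil
}

func main() {
	enc, err := encryptValue("correct horse battery staple", "hunter2")
	if err != nil {
		panic(err)
	}
	fmt.Println(enc)
}
```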
The event diagnostic goroutines could sometimes error out during
early program exits, due to a race between the goroutine writing to
the channel and the early-exiting goroutine that closed the channel.
This change stops closing the channels entirely on the abrupt exit
paths, since it's not necessary and we want to exit immediately.
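A minimal self-contained sketch of the shape of the race (the messages and timings are illustrative; the real code pumps engine diagnostic events):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	events := make(chan string)

	// Producer goroutine: pumps diagnostic messages onto the channel.
	go func() {
		for i := 0; ; i++ {
			events <- fmt.Sprintf("diag %d", i) // panics if events has been closed
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// Consumer goroutine: renders the messages.
	go func() {
		for msg := range events {
			fmt.Println(msg)
		}
	}()

	// Abrupt exit path. Before this change it also called close(events),
	// racing with the producer's send above; now it simply exits, since the
	// process is going away and no cleanup is necessary.
	time.Sleep(25 * time.Millisecond)
	os.Exit(1)
}
```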
I sometimes revert to verbs from some ancient version of the system, and
I figure with so many other tools using different verbs here, it's
worth at least improving our help text with `SuggestFor`s.
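For reference, `SuggestFor` is a field on `cobra.Command`; a sketch of how the help text improves (the alternate verbs listed here are just examples):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "pulumi"}
	root.AddCommand(&cobra.Command{
		Use: "update",
		// With SuggestFor, typing `pulumi deploy` or `pulumi apply` yields a
		// "Did you mean this?" hint pointing at `update` instead of a bare
		// unknown-command error.
		SuggestFor: []string{"apply", "deploy", "push"},
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("running an update...")
		},
	})
	_ = root.Execute()
}
```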
The change to use a goroutine for pumping output causes a hang
when an error occurs. This is because we unconditionally block
on the `<-done` channel, even though the failure means the done
signal will never arrive. This changes the logic to only wait
on the channel if we successfully began the operation in question.
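A minimal sketch of the fixed control flow (the function names are stand-ins for the real operation and its output pump):

```go
package main

import (
	"errors"
	"fmt"
)

// runOperation only blocks on the done channel when the operation actually
// started; on failure nothing will ever signal done, so waiting would hang.
func runOperation(begin func(done chan<- string) error) error {
	done := make(chan string)
	err := begin(done)
	if err == nil {
		// Success: the output-pumping goroutine owns `done` and sends the
		// final message when the operation completes.
		fmt.Println(<-done)
	}
	return err
}

func main() {
	// A begin function that fails immediately; before the fix, waiting on
	// <-done unconditionally would hang forever here.
	err := runOperation(func(done chan<- string) error {
		return errors.New("could not begin the update")
	})
	fmt.Println("error:", err)
}
```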
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.
From a user's point of view, there are a few changes:
1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).
On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder).
Previously, the engine would write to `io.Writer`s to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).
While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.
As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
The events we do emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
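A minimal self-contained sketch of the calling convention (`Event` and `deploy` here are stand-ins for the real engine types and operations):

```go
package main

import "fmt"

// Event is a stand-in for engine.Event; the real type carries structured
// payloads (diagnostics, resource pre/post steps, and so on).
type Event struct{ Message string }

// deploy is a stand-in for an engine operation: it writes events to the
// channel it was given and closes the channel once the operation completes.
func deploy(events chan<- Event) {
	defer close(events) // closing signals "operation complete" to the caller
	for i := 0; i < 3; i++ {
		events <- Event{Message: fmt.Sprintf("step %d", i)}
	}
}

func main() {
	// The caller owns the read side: it must drain the channel until it is
	// closed, rendering each event however it sees fit (CLI, log, UI, ...).
	events := make(chan Event)
	go deploy(events)
	for e := range events {
		fmt.Println(e.Message)
	}
}
```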
Previously, you could pass an explicit path to a Pulumi program when
running preview or update and the tool would use that program when
planning or deploying, but continue to write state in the cwd. While
being able to operate on a specific package without having to cd all
over the place is nice, this specific implementation was a little
scary because it made it easier to run two different programs with the
same local state (e.g config and checkpoints) which would lead to
surprising results.
Let's move to a model that some tools have where you can pass a
working directory and the tool chdir's to that directory before
running. This way any local state that is stored will be stored
relative to the package we are operating on instead of whatever the
current working directory is.
Fixes #398
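A minimal sketch of the working-directory model (the flag name here is illustrative; only the chdir-up-front idea matters):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// If the user passes a directory, chdir there before doing anything else
	// so that all local state (Pulumi.yaml, config, checkpoints) is resolved
	// relative to the package being operated on.
	cwd := flag.String("cwd", "", "run as if the tool had been started in this directory")
	flag.Parse()
	if *cwd != "" {
		if err := os.Chdir(*cwd); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	wd, _ := os.Getwd()
	fmt.Println("operating in", wd)
}
```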
Previously, the engine was concerned with maintaining information about
the currently active environment. Now, the CLI is in charge of
this. As part of this change, the engine can now assume that every
environment has a non-empty name (and I've added asserts on the
entrypoints of the engine API to ensure that any consumer of the
engine passes a non-empty environment name).
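A sketch of the kind of guard this adds at the engine entrypoints (the entrypoint and assertion shown are illustrative, not the exact API):

```go
package main

import "fmt"

type Engine struct{}

// Deploy illustrates the new contract: the engine no longer tracks a
// "current" environment, so every caller (i.e. the CLI) must pass a
// non-empty environment name.
func (eng *Engine) Deploy(envName string) error {
	if envName == "" {
		panic("engine callers must supply a non-empty environment name")
	}
	fmt.Println("deploying to environment:", envName)
	return nil
}

func main() {
	var eng Engine
	_ = eng.Deploy("staging")
}
```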
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit. This
makes targets ("husks") more of a first class thing.
Namely, you must first initialize a husk before using it:
$ coco husk init staging
Coconut husk 'staging' initialized; ready for deployments
Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments. The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:
$ ... make some changes ...
$ coco husk deploy staging
... standard deployment progress spew ...
Finally, should you want to tear down an entire environment:
$ coco husk destroy staging
... standard deletion progress spew for all resources ...
Coconut husk 'staging' has been destroyed!
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update. This simplifies the management of deployment
records, checkpoints, and snapshots.
I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming). The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:
nutpack/
    bin/
        Nutpack.json
        ... any other compiled artifacts ...
    husks/
        ... one snapshot per husk ...
For example, if we had a stage and prod husk, we would have:
nutpack/
    bin/...
    husks/
        prod.json
        stage.json
In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment. These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.
The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
This change shows detailed output -- resources, their properties, and
a full articulation of plan steps -- and permits summarization with the
--summary (or -s) flag.
This change is a first whack at implementing updates.
Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order. For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies). These are just special cases of the more
general idea of performing DAG operations in dependency order.
Updates must work in terms of this more general notion. For example:
* It is an error to delete a resource while another refers to it; thus,
resources are deleted after deleting dependents, or after updating
dependent properties that reference the resource to new values.
* It is an error to depend on a resource before it is created;
thus, resources must be created before dependents are created, and/or
before updates to existing resource properties that would cause them
to refer to the new resource.
Of course, all of this is tangled up in a graph of dependencies. As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.
To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs: the existing one becomes a
heapstate.ObjectGraph, and the new one is resource.planGraph (internal).
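A minimal sketch of the dependency-ordered traversal at the heart of this (the `step` type and example resources are illustrative; the real plan graph also detects cycles and distinguishes create/update/delete edges):

```go
package main

import "fmt"

// step is a stand-in for a plan step (a create, update, or delete of a
// resource); deps point at the steps that must run before this one.
type step struct {
	name string
	deps []*step
}

// topsort returns the steps in an order where every dependency appears
// before its dependents, using a simple depth-first traversal.
func topsort(steps []*step) []*step {
	var order []*step
	visited := map[*step]bool{}
	var visit func(s *step)
	visit = func(s *step) {
		if visited[s] {
			return
		}
		visited[s] = true
		for _, d := range s.deps {
			visit(d)
		}
		order = append(order, s)
	}
	for _, s := range steps {
		visit(s)
	}
	return order
}

func main() {
	// The security group must be created before the instance that refers to it.
	sg := &step{name: "create securityGroup"}
	vm := &step{name: "create instance", deps: []*step{sg}}
	for _, s := range topsort([]*step{vm, sg}) {
		fmt.Println(s.name)
	}
}
```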
This change repivots the plan/apply commands slightly. This is largely
in preparation for performing deletes and updates of existing environments.
The old way was slightly confusing and made things appear more "magical"
than they actually are. Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.
The new way is to offer three commands: create, update, and delete. Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely. The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file plus a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.
Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands. This will print out the plan without
actually performing any operations.
All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.