Add `Name` (the Pulumi project name) and `Runtime` (the Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`, as they are now required.
The longer story is that, as part of the PPC enabling destroy operations, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.
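For illustration, a minimal sketch of the request shape; the JSON tags and surrounding fields are assumptions, and only the two new properties are the point here:

```go
// Hypothetical sketch of the request type; field tags are assumed.
type UpdateProgramRequest struct {
	Name    string `json:"name"`    // Pulumi project name, from Pulumi.yaml
	Runtime string `json:"runtime"` // Pulumi runtime, e.g. "nodejs"
	// ...existing fields elided...
}
```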
This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
This PR removes three command line parameters from Cloud-enabled Pulumi commands (`update` and `stack init`). Previously we required users to pass in `--organization`, `--repository`, and `--project`. But with the recent "Pulumi repository" changes, we can now get the organization and repository from the Pulumi workspace, and the project name from `Pulumi.yaml`.
This PR also fixes a bug that blocked the Cloud-enabled CLI path: `update` was getting the stack name via `explicitOrCurrent`, which fails if the current stack (e.g. the one just initialized in the cloud) doesn't exist on the local disk.
As for better handling of the "current stack" in Cloud-enabled commands, https://github.com/pulumi/pulumi/pull/493 and the PR to enable `stack select`, `stack rm`, and `stack ls` do a better job of handling situations like this.
The last status message from the PPC doesn't include a newline, so the `pulumi` CLI renders any error messages on the same line as the last diagnostic message. Not ideal.
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.
When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.
We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track is "owner" and "name", which map to information
we use on pulumi.com.
When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
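A rough sketch of that inference, assuming a hypothetical helper name and covering only the two common GitHub remote forms:

```go
package workspace

import (
	"os/user"
	"path/filepath"
	"strings"
)

// inferOwnerAndName derives the repository owner and name from a GitHub
// remote URL (e.g. "git@github.com:pulumi/pulumi.git" or
// "https://github.com/pulumi/pulumi.git"), falling back to the current
// user's username and the working directory's name.
func inferOwnerAndName(remoteURL, cwd string) (string, string) {
	for _, prefix := range []string{"git@github.com:", "https://github.com/"} {
		if strings.HasPrefix(remoteURL, prefix) {
			path := strings.TrimSuffix(strings.TrimPrefix(remoteURL, prefix), ".git")
			if parts := strings.Split(path, "/"); len(parts) == 2 {
				return parts[0], parts[1] // owner, name
			}
		}
	}
	u, err := user.Current()
	owner := "unknown"
	if err == nil {
		owner = u.Username
	}
	return owner, filepath.Base(cwd)
}
```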
- `pulumi config ls` no longer prompts for a passphrase if there are
  secrets; instead, `******` is shown for each secret value.
  `--show-secrets` can be passed to force decryption. The behavior of
  `pulumi config ls <key>` is unchanged: if the key is secure, we will
  prompt for a passphrase.
- `pulumi config secret <key>` now prompts for the passphrase and verifies
it before asking for the secret value.
Fixes #465
This PR enables `pulumi stack init` to work against the Pulumi Cloud. Of note, I'm using the approach described in https://github.com/pulumi/pulumi-service/issues/240. The command takes an optional `--cloud` parameter, but otherwise will use the "default cloud" for the target organization.
I also went back and revised `pulumi update` to do this as well. (Removing the `--cloud` parameter.)
Note that neither of these commands will work against `pulumi-service` head, as they require some API refactorings I'm working on right now.
Fixes panic when the output from the PPC doesn't have a "text" property. (Still need to unify the way the PPC emits event data with the types that the Pulumi codebase uses internally.)
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).
As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.
We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow.
Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.
The system is designed such that we should be able to plug in a
different decrypter in the future (either one using a local key, or
some remote service in the Pulumi.com case).
Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).
In addition, secrets are "pay-for-play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt a value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted, and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration associated
with it.
Secure values show up as unencrypted config values inside the language
hosts and providers.
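A sketch of the shape this design suggests (the interface and method names are illustrative, not necessarily the real ones); keeping encryption and decryption behind small separate interfaces is what leaves room for a local-key or remote-service implementation later:

```go
// Encrypter encrypts plaintext config values for storage at rest.
type Encrypter interface {
	EncryptValue(plaintext string) (string, error)
}

// Decrypter decrypts stored config values; a passphrase-derived key is
// one implementation, and a remote service (the Pulumi.com case) could
// be another.
type Decrypter interface {
	DecryptValue(ciphertext string) (string, error)
}
```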
When `PULUMI_RETAIN_CHECKPOINTS` is set in the environment, also write
the checkpoint file to <stack-name>.<ext>.<timestamp>.
This ensures we have historical information about every snapshot, which
would aid in debugging issues like #451. We set this to true for our
integration tests.
Fixes #453
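A minimal sketch of the behavior, with the function and variable names assumed:

```go
import (
	"fmt"
	"io/ioutil"
	"os"
	"time"
)

// saveCheckpoint writes the canonical checkpoint file and, when
// PULUMI_RETAIN_CHECKPOINTS is set, a timestamped copy alongside it.
func saveCheckpoint(file string, b []byte) error {
	if err := ioutil.WriteFile(file, b, 0600); err != nil {
		return err
	}
	if os.Getenv("PULUMI_RETAIN_CHECKPOINTS") != "" {
		retained := fmt.Sprintf("%v.%v", file, time.Now().UnixNano())
		return ioutil.WriteFile(retained, b, 0600)
	}
	return nil
}
```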
The event diagnostic goroutines could sometimes error out during
early program exits, due to a race between the goroutine writing to
the channel and the early-exiting goroutine that closed the channel.
This change stops closing the channels entirely on the abrupt exit
paths, since closing them is not necessary and we want to exit immediately.
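Roughly what the race looked like (the event type and variable names here are illustrative):

```go
events := make(chan Event)
go func() {
	for _, diag := range diagnostics {
		events <- diag // this send panics if events is closed concurrently
	}
}()

// On an abrupt exit path we previously did:
//   close(events) // races with the send above: "send on closed channel"
// Now we skip the close and simply exit; the process is going away anyway.
os.Exit(-1)
```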
This improves a few things about assets:
* Compute and store hashes as input properties, so that changes on
disk are recognized and trigger updates (pulumi/pulumi#153).
* Issue explicit and prompt diagnostics when an asset is missing or
of an unexpected kind, rather than failing late (pulumi/pulumi#156).
* Permit raw directories to be passed as archives, in addition to
archive formats like tar, zip, etc. (pulumi/pulumi#240).
* Permit not only assets as elements of an archive's member list, but
also other archives themselves (pulumi/pulumi#280).
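For the first bullet, a sketch of the idea: an asset's hash is computed from its on-disk contents, so editing the file changes an input property and triggers an update. (The function name and the use of SHA-256 here are illustrative.)

```go
import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
)

// hashAsset computes a content hash for a file-based asset.
func hashAsset(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err // missing assets surface promptly, not at deploy time
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```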
The existing `SnapshotProvider` interface does not sufficiently lend
itself to reliable persistence of snapshot data. For example, consider
the following:
- The deployment engine creates a resource
- The snapshot provider fails to save the updated snapshot
In this scenario, we have no mechanism by which we can discover that the
existing snapshot (if any) does not reflect the actual state of the
resources managed by the stack, and future updates may operate
incorrectly. To address this, these changes split snapshotting into two
phases: the `Begin` phase and the `End` phase. A provider that needs to
be robust against the scenario described above (or any other scenario
that allows for a mutation to the state of the stack that is not
persisted) can use the `Begin` phase to persist the fact that there are
outstanding mutations to the stack. It would then use the `End` phase to
persist the updated snapshot and indicate that the mutation is no longer
outstanding. These steps are somewhat analogous to the prepare and
commit phases of two-phase commit.
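A sketch of the resulting shape (method names are illustrative, and the existing Snapshot type is assumed):

```go
// SnapshotProvider persists snapshots using two phases, analogous to
// prepare/commit in two-phase commit.
type SnapshotProvider interface {
	// BeginMutation durably records that a mutation to the stack's
	// state is in flight, before the engine performs it.
	BeginMutation() (SnapshotMutation, error)
}

// SnapshotMutation represents an outstanding mutation begun above.
type SnapshotMutation interface {
	// End persists the updated snapshot and records that the mutation
	// is no longer outstanding.
	End(snapshot *Snapshot) error
}
```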
I sometimes revert to verbs from some ancient version of the system,
and I figure, with so many other tools using different verbs here, it's
worth at least improving our help text with `SuggestFor`s.
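For example, with cobra's `SuggestFor` field (the alternate verbs below are just illustrative):

```go
cmd := &cobra.Command{
	Use:        "update",
	SuggestFor: []string{"deploy", "push", "apply"},
}
```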
The change to use a goroutine for pumping output causes a hang
when an error occurs. This is because we unconditionally block
on the <-done channel, even though after a failure the done
signal will never arrive. This changes the logic to only wait
on the channel if we successfully began the operation in question.
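Sketched, with illustrative names:

```go
done := make(chan bool)
go func() {
	copyOutput(pipe) // pump the operation's output until EOF
	done <- true
}()

err := op.Start()
if err == nil {
	<-done // only wait for the pump if the operation actually began
}
```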
Previously, config information was stored per stack. With this change,
we now allow config values which apply to every stack a program may
target.
When passed without the `-s <stack>` argument, `pulumi config`
operates on the "global" configuration. Stack-specific information can
be modified by passing an explicit stack.
Stack-specific configuration overwrites global configuration.
Consider the following Pulumi.yaml:
```
name: hello-world
runtime: nodejs
description: a hello world program
config:
  hello-world:config:message: Hello, from Pulumi
stacks:
  production:
    config:
      hello-world:config:message: Hello, from Production
```
This program contains a single configuration value,
"hello-world:config:message", which has the value "Hello, from Pulumi"
when the program is activated into any stack except for "production",
where the value is "Hello, from Production".
Instead of having stack-specific information stored in the checkpoint
file, save it in the Pulumi.yaml file. We introduce a new section,
`stacks`, which holds information specific to a stack.
Next, we'll support adding configuration information that applies
to *all* stacks for a program and allow the stack-specific config to
overwrite or augment it.
This PR adds `login` and `logout` commands to the `pulumi` CLI.
Rather than requiring a user name and password like before, we instead require users to login with GitHub credentials on the Pulumi Console website. (You can do this now via https://beta.moolumi.io.) Once there, the account page will show you an "access token" you can use to authenticate against the CLI.
Upon successful login, the user's credentials will be stored in `~/.pulumi/credentials.json`. This credentials file will be automatically read with the credentials added to every call to `PulumiRESTCall`.
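A sketch of the read path; the helper name and the JSON field name are assumptions:

```go
import (
	"encoding/json"
	"io/ioutil"
	"os"
	"path/filepath"
)

// getStoredCredentials reads the access token saved by `pulumi login`.
func getStoredCredentials() (string, error) {
	path := filepath.Join(os.Getenv("HOME"), ".pulumi", "credentials.json")
	b, err := ioutil.ReadFile(path)
	if err != nil {
		return "", err
	}
	var creds struct {
		AccessToken string `json:"accessToken"` // assumed field name
	}
	if err := json.Unmarshal(b, &creds); err != nil {
		return "", err
	}
	return creds.AccessToken, nil
}
```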
We use `git describe --tags` to construct a version number based on
the current version tag.
The properties VERSION (when using make) and Version (when using
MSBuild) can be explicitly set to use a fixed value instead.
Fixes #13
Previously, you had to fully qualify configuration values (e.g.
`example:config:message`). As a convenience, let's support adding
configuration values where the key is not a fully qualified module
member. In this case, we'll treat the key as if
`<program-name>:config:` had been prepended to it.
In addition, when we print config, shorten keys of the form
`<program-name>:config:<key-name>` to `<key-name>`.
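Roughly, with hypothetical helper names:

```go
import "strings"

// qualifyKey expands a bare key like "message" into
// "<program-name>:config:message"; fully qualified keys pass through.
func qualifyKey(key, programName string) string {
	if strings.Contains(key, ":") {
		return key
	}
	return programName + ":config:" + key
}

// prettyKey shortens "<program-name>:config:<key>" to "<key>" for display.
func prettyKey(key, programName string) string {
	return strings.TrimPrefix(key, programName+":config:")
}
```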
I've updated one integration test to use the new syntax and left the
other as is to ensure both continue to work.
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.
From a user's point of view, there are a few changes:
1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).
On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder).
Our checkpoint file is now in a shape such that Go's built-in
marshalling and unmarshalling is sufficient, so we can remove the code
we had on our decode path to do extra validation.
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi`, these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).
While much better than just using the ambient streams, this was still
not ideal. It would be better if the engine could just emit strongly
typed events and let whatever is hosting the engine take care of
displaying them.
As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and, during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
The events we emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
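The consumption pattern, sketched; the entrypoint's exact signature and the `render` helper are illustrative:

```go
events := make(chan engine.Event)
go func() {
	for event := range events { // loop ends when the engine closes the channel
		render(event) // e.g. colorize and print to stdout/stderr
	}
}()
err := eng.Deploy(stackName, events) // hypothetical engine entrypoint
```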
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
Internally we end up using the flag package to parse our command lines,
because of our dependency on glog. When flag does not know about -C and
someone passes -C before the sub-command name (as is common for folks
coming from Git), the flag package terminates the program because -C
was not defined. So just teach flag that -C exists, until we tackle
#301 and rid ourselves of glog.
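The workaround is just to register the flag (a sketch):

```go
// Teach the flag package that -C exists so flag.Parse doesn't abort
// when -C appears before the sub-command name.
flag.String("C", "", "Run pulumi as if it had been started in another directory")
```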
Previously, you could pass an explicit path to a Pulumi program when
running preview or update, and the tool would use that program when
planning or deploying, but continue to write state in the cwd. While
being able to operate on a specific package without having to cd all
over the place is nice, this specific implementation was a little
scary, because it made it easier to run two different programs with the
same local state (e.g. config and checkpoints), which would lead to
surprising results.
Let's move to a model that some tools have where you can pass a
working directory and the tool chdir's to that directory before
running. This way any local state that is stored will be stored
relative to the package we are operating on instead of whatever the
current working directory is.
Fixes #398
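Sketched, with the variable name assumed:

```go
// Honor -C by switching directories before any local state is touched.
if changeDir != "" {
	if err := os.Chdir(changeDir); err != nil {
		return err
	}
}
```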
Previously, the engine was concerned with maintaining information about
the currently active environment. Now, the CLI is in charge of
this. As part of this change, the engine can now assume that every
environment has a non-empty name (and I've added asserts on the
entrypoints of the engine API to ensure that any consumer of the
engine passes a non-empty environment name).
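A sketch of the guard at an engine entrypoint, assuming a contract-style assertion helper and an illustrative method signature:

```go
func (e *Engine) Deploy(envName string /* ... */) error {
	// Every caller must supply a non-empty environment name.
	contract.Require(envName != "", "envName")
	// ...
	return nil
}
```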