Commit graph

36 commits

Author SHA1 Message Date
Chris Smith 4c217fd358
Add "pulumi history" command (#826)
This PR adds a new `pulumi history` command, which prints the update history for a stack.

The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.

`pkg/backend/updates.go` defines the data that is being persisted. The data is wired through the system by adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
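
As a rough sketch of the shape involved (field names here are illustrative; the authoritative definition lives in `pkg/backend/updates.go`):

    package backend

    // UpdateMetadata -- sketch only; see pkg/backend/updates.go for the
    // real definition.
    type UpdateMetadata struct {
        // Message is an optional message associated with the update.
        Message string `json:"message"`
        // Environment contains metadata about the environment the update
        // ran in (e.g. git commit, CI system).
        Environment map[string]string `json:"environment"`
    }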

I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.

Fixes #636.
2018-01-24 18:22:41 -08:00
Matt Ellis 41db5d881d Change --preview's short flag back to -n
In #806 we unintentionally changed the short flag for preview from
`-n` to `-d`. This restores the old flag.
2018-01-19 09:45:54 -08:00
Chris Smith 3a3d0698ae
Surface update options to the service (#806)
This PR surfaces the configuration options available to updates, previews, and destroys to the Pulumi Service. As part of this I refactored the options to unify them into a single `engine.UpdateOptions`, since they were all overlapping to various degrees.
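
A hedged sketch of what the unified type might look like (the exact field set lives in `pkg/engine`):

    package engine

    // UpdateOptions -- approximate sketch; consult pkg/engine for the
    // authoritative field list.
    type UpdateOptions struct {
        DryRun    bool     // preview the changes without applying them
        Parallel  int      // degree of parallelism for resource operations
        ShowSames bool     // display resources that need no changes
        Summary   bool     // only display summary-level output
        Debug     bool     // enable debug-level diagnostics
        Analyzers []string // analyzers to run as part of the update
    }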

With this PR we add several new flags to commands; e.g. `--summary` was previously not available on `pulumi destroy`.

There are also a few minor breaking changes.

- `pulumi destroy --preview` is now `pulumi destroy --dry-run` (to match the actual name of the field).
- The default behavior for "--color" is now `Always`. Previously it was `Always` or `Never` based on the value of a `--debug` flag. (You can specify `--color always` or `--color never` to get the exact behavior.)

Fixes #515, and cleans up the code making some other features slightly easier to add.
2018-01-18 11:10:15 -08:00
pat@pulumi.com c56e716c31 Refactor the engine's entrypoints.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
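
In rough outline, the new abstraction looks something like this (method names are illustrative; the real interface may differ):

    package engine

    // Project and Target stand in for the engine's project and target types.

    // Update abstracts the state of a single update, hiding how that state
    // is fetched and maintained (sketch only).
    type Update interface {
        GetRoot() string      // root directory of the program being updated
        GetProject() *Project // project info the CLI read from Pulumi.yaml
        GetTarget() *Target   // the target stack's latest state and config
    }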

These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
2018-01-08 14:15:16 -08:00
Joe Duffy f0c28db639
Attempt to fix colorization (#740)
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw).  The events from the service, however, are still
boolean-valued.  This changes the message payload to carry the full values.
2017-12-18 11:42:32 -08:00
CyrusNajmabadi e4946a6620
Allow users to control if and how output is colorized. (#718)
Part of the work to make it easier to test diff output.  Specifically, we now allow users to pass --color=option for several pulumi commands.  'option' can be one of 'always', 'never', 'raw', and 'auto' (the default).

The meanings of these flags are:

1. auto: colorize normally, unless in --debug 
2. always: always colorize no matter what
3. never: never colorize no matter what.
4. raw: colorize, but preserve the original "<{%%}>" style control codes rather than the translated platform-specific codes.   This is for testing purposes and ensures we can have tests for this stuff across platforms.
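
For instance, to get stable, uncolorized output suitable for test baselines:

    $ pulumi preview --color=never

(`pulumi preview` is just one example; the flag works the same way on the other supported commands.)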
2017-12-14 11:53:02 -08:00
joeduffy 1c4e41b916 Improve the overall cloud CLI experience
This improves the overall cloud CLI workflow.

Now whether a stack is local or cloud is inherent to the stack
itself.  If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally.  Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.

For example, to initialize a new cloud stack, simply:

    $ pulumi login
    Logging into Pulumi Cloud: https://pulumi.com/
    Enter Pulumi access token: <enter your token>
    $ pulumi stack init my-cloud-stack

Note that you may log into a specific cloud if you'd like.  For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:

    $ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873

The cloud is now the default.  If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:

    $ pulumi stack init my-faf-stack --local

If you are logged in and run `pulumi`, we tell you as much:

    $ pulumi
    Usage:
      pulumi [command]

    // as before...

    Currently logged into the Pulumi Cloud ☁️
        https://pulumi.com/

And if you list your stacks, we tell you which one is local or not:

    $ pulumi stack ls
    NAME            LAST UPDATE       RESOURCE COUNT   CLOUD URL
    my-cloud-stack  2017-12-01 ...    3                https://pulumi.com/
    my-faf-stack    n/a               0                n/a

And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.

I shall write up more details and make sure to document these changes.

This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends.  The following is the overall resulting package architecture:

* The backend.Backend interface can be implemented to substitute
  a new backend.  This has operations to get and list stacks,
  perform updates, and so on (see the sketch after this list).

* The backend.Stack struct is a wrapper around a stack that has
  or is being manipulated by a Backend.  It resembles our existing
  Stack notions in the engine, but carries additional metadata
  about its source.  Notably, it offers functions that allow
  operations like updating and deleting on the Backend from which
  it came.

* There is very little else in the pkg/backend/ package.

* A new package, pkg/backend/local/, encapsulates all local state
  management for "fire and forget" scenarios.  It simply implements
  the above logic and contains anything specific to the local
  experience.

* A peer package, pkg/backend/cloud/, encapsulates all logic
  required for the cloud experience.  This includes its subpackage
  apitype/ which contains JSON schema descriptions required for
  REST calls against the cloud backend.  It also contains handy
  functions to list which clouds we have authenticated with.

* A subpackage here, pkg/backend/state/, is not a provider at all.
  Instead, it contains all of the state management functions that
  are currently shared between local and cloud backends.  This
  includes configuration logic -- including encryption -- as well
  as logic pertaining to which stacks are known to the workspace.
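
A minimal sketch of the two central abstractions (method sets are illustrative, not exhaustive):

    package backend

    import "github.com/pulumi/pulumi/pkg/tokens"

    // Backend abstracts a place where stacks live and updates run.
    type Backend interface {
        Name() string
        GetStack(name tokens.QName) (Stack, error)
        ListStacks() ([]Stack, error)
        RemoveStack(stack Stack, force bool) (bool, error)
    }

    // Stack wraps a stack managed by some Backend; it carries enough
    // metadata to perform operations on the Backend it came from.
    type Stack interface {
        Name() tokens.QName
        Backend() Backend
        Remove(force bool) (bool, error)
    }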

This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
2017-12-02 14:34:42 -08:00
Matt Ellis 8f076b7cb3 Argument validation for CLI commands
Previously, we were inconsistent on how we handled argument validation
in the CLI. Many commands used cobra.Command's Args property to
provide a validator if they took arguments, but commands which did not
rarely used cobra.NoArgs to indicate this.

This change does two things:

1. Introduce `cmdutil.ArgsFunc`, which works like `cmdutil.RunFunc`: it
wraps an existing cobra type and lets us control the behavior when an
argument validator fails.

2. Ensure every command sets the Args property with an instance of
cmdutil.ArgsFunc. The cmdutil package defines wrappers for all the
cobra validators we are using, to prevent us from having to spell out
`cmdutil.ArgsFunc(...)` everywhere.
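
A minimal sketch of such a wrapper, assuming cobra's PositionalArgs type (the real helper may report failures differently):

    package cmdutil

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    // ArgsFunc wraps a cobra argument validator so that a validation
    // failure prints the command's usage and exits, instead of surfacing
    // a raw error.
    func ArgsFunc(validator cobra.PositionalArgs) cobra.PositionalArgs {
        return func(cmd *cobra.Command, args []string) error {
            if err := validator(cmd, args); err != nil {
                fmt.Fprintf(os.Stderr, "error: %v\n", err)
                _ = cmd.Usage()
                os.Exit(-1)
            }
            return nil
        }
    }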

Fixes #588
2017-11-29 16:10:53 -08:00
Matt Ellis 07b4d9b36b Add Pulumi.com backend, unify cobra Commands
As part of the unification it became clear where we did not support
features that we had for the local backend. I opened issues and added
comments.
2017-11-02 11:19:00 -07:00
Matt Ellis 328734f874 Define backend interface, move local implementation behind it
This change introduces an abstraction for a `backend` which manages
the implementation of some CLI commands. As part of defining the
interface, we introduce a new local backend implementation that just
uses data local to the machine.

This will let us share argument parsing and some display information
between the local case and the pulumi.com case in the CLI. We can
continue to refine this interface over time (e.g. today we have the
implementation of the Destroy/Update/Preview actually writing output
but instead they should be returning strongly typed data that the CLI
knows how to display and is unified across Pulumi.com deploys and
local deploys).

But this is a good first step.
2017-11-02 11:19:00 -07:00
Chris Smith dfcc165840
Update API types to match HEAD (#509)
Add `Name` (Pulumi project name) and `Runtime` (Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`, as they are now required.

The long story is that as part of the PPC enabling destroy operations, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.

This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
2017-10-31 15:03:07 -07:00
Chris Smith c286712d28
Remove args we can now get from the repository and package (#501)
This PR removes three command line parameters from Cloud-enabled Pulumi commands (`update` and `stack init`). Previously we required users to pass in `--organization`, `--repository`, and `--project`. But with the recent "Pulumi repository" changes, we can now get that from the Pulumi workspace, and the project name from the `Pulumi.yaml`.

This PR also fixes a bug that blocked the Cloud-enabled CLI path: `update` was getting the stack name via `explicitOrCurrent`, but that fails if the current stack (e.g. the one just initialized in the cloud) doesn't exist on the local disk.

As for better handling of "current stack" and Cloud-enabled commands, https://github.com/pulumi/pulumi/pull/493 and the PR to enable `stack select`, `stack rm`, and `stack ls` do a better job of handling situations like this.
2017-10-30 17:47:12 -07:00
Chris Smith d80cba135a
Add newline after update completes (#487)
The last status message from the PPC doesn't include a newline. So the `pulumi` CLI renders any error messages on the same line as the last diagnostic message. Not ideal.
2017-10-27 15:40:15 -07:00
Chris Smith 9b3dd54385 Enable 'pulumi stack init' to the Cloud (#480)
This PR enables `pulumi stack init` to work against the Pulumi Cloud. Of note, I am using the approach described in https://github.com/pulumi/pulumi-service/issues/240. The command takes an optional `--cloud` parameter, but otherwise will use the "default cloud" for the target organization.

I also went back and revised `pulumi update` to do this as well. (Removing the `--cloud` parameter.)

Note that neither of the commands will work against `pulumi-service` head, as they require some API refactorings I'm working on right now.
2017-10-26 22:14:56 -07:00
Chris Smith f52e7233f9 Fix panic from incorrect assumption (#473)
Fixes panic when the output from the PPC doesn't have a "text" property. (Still need to unify the way the PPC emits event data with the types that the Pulumi codebase uses internally.)
2017-10-25 15:28:29 -07:00
Chris Smith 95062100f7 Enable pulumi update to target the Console (#461)
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).

As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.

We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow. 

Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
2017-10-25 10:46:05 -07:00
Matt Ellis ade366544e Encrypt secrets in Pulumi.yaml
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.

The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).
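
The pluggable shape is roughly this (names illustrative):

    package config

    // Encrypter encrypts plaintext config values for storage at rest.
    type Encrypter interface {
        EncryptValue(plaintext string) (string, error)
    }

    // Decrypter recovers plaintext from stored ciphertext. A
    // passphrase-based implementation derives its key locally from the
    // user's passphrase; a future Pulumi.com implementation could call
    // out to a remote service instead.
    type Decrypter interface {
        DecryptValue(ciphertext string) (string, error)
    }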

Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).

In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration assoicated
with it.

Secure values show up as unencrypted config values inside the language
hosts and providers.
2017-10-24 16:48:12 -07:00
joeduffy 500ea0b572 Fix diag channel errors
The event diagnostic goroutines could error out sometimes during
early program exits, due to a race between the goroutine writing to
the channel and the early exiting goroutine which closed the channel.
This change stops closing the channels entirely on the abrupt exit
paths, since it's not necessary and we want to exit immediately.
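
In miniature, the race looked like this (a stand-in sketch, not the actual code):

    // Event stands in for the engine's diagnostic event type.
    type Event struct{ Message string }

    func buggy(events chan Event, diag Event) {
        go func() {
            events <- diag // may panic: send on a closed channel
        }()
        close(events) // early-exit path racing with the send above
    }

The fix: abrupt exit paths simply no longer close the channel; the process is exiting anyway, so no cleanup is needed.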
2017-10-22 15:22:15 -07:00
joeduffy 4d19b358a6 Add some command hints
I sometimes revert back to some ancient version of the system, and
I figure with so many other tools using different verbs here, it's
worth at least improving our help text with the SuggestFors.
2017-10-20 17:36:47 -07:00
joeduffy 9e20f15adf Fix CLI hangs when errors occur
The change to use a Goroutine for pumping output causes a hang
when an error occurs.  This is because we unconditionally block
on the <-done channel, even though the failure means the done
will actually never occur.  This changes the logic to only wait
on the channel if we successfully began the operation in question.
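
The fixed pattern, in sketch form (`beginOperation` here is a stand-in for kicking off the operation in question):

    // run only blocks on done if the operation actually started; on a
    // startup failure, done would never be signaled, so we must not wait.
    func run(beginOperation func(done chan bool) error) error {
        done := make(chan bool)
        if err := beginOperation(done); err != nil {
            return err // don't wait: done will never be signaled
        }
        <-done // the operation began, so completion will be signaled
        return nil
    }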
2017-10-20 17:28:35 -07:00
Matt Ellis 22c9e0471c Use Stack over Environment to describe a deployment target
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.

From a user's point of view, there are a few changes:

1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).

On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder)
2017-10-16 13:04:20 -07:00
Matt Ellis e7c3aaba69 Merge pull request #395 from pulumi/pulumi-service-interface
More engine refactoring
2017-10-11 13:44:36 -07:00
joeduffy d52c29b763 Make "up" a short-alias for "update" 2017-10-10 08:52:04 -07:00
Matt Ellis 377eb61e32 Always emit debug events into the stream 2017-10-09 18:27:05 -07:00
Matt Ellis 7587bcd7ec Have engine emit "events" instead of writing to streams
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).

While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.

As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
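
A sketch of the caller's side of this contract (`eng.Deploy` and `render` are stand-ins for the engine entrypoint and whatever display logic the host wants):

    events := make(chan engine.Event)
    done := make(chan bool)
    go func() {
        for ev := range events { // loop ends when the engine closes events
            render(ev)
        }
        close(done)
    }()
    err := eng.Deploy(environment, events) // engine emits events as it works
    <-done                                 // let the display loop drain fully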

The events we do emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
2017-10-09 18:24:56 -07:00
Matt Ellis 065f6f2b42 Support -C/--cwd instead of path to package
Previously, you could pass an explicit path to a Pulumi program when
running preview or update and the tool would use that program when
planning or deploying, but continue to write state in the cwd. While
being able to operate on a specific package without having to cd all
over the place is nice, this specific implementation was a little
scary because it made it easier to run two different programs with the
same local state (e.g config and checkpoints) which would lead to
surprising results.

Let's move to a model that some tools have where you can pass a
working directory and the tool chdir's to that directory before
running. This way any local state that is stored will be stored
relative to the package we are operating on instead of whatever the
current working directory is.
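
The mechanics are as simple as they sound (a sketch; flag wiring omitted, and this requires "os"):

    // applyCwd honors -C/--cwd by changing directory up front, so any
    // local state (config, checkpoints) is resolved relative to the
    // target package.
    func applyCwd(cwd string) error {
        if cwd == "" {
            return nil // no -C/--cwd given; stay in the current directory
        }
        return os.Chdir(cwd)
    }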

Fixes #398
2017-10-06 11:27:18 -07:00
Matt Ellis 93ab134bbb Have the CLI keep track of the current environment
Previously, the engine was concerned with maintaining information about
the currently active environment. Now, the CLI is in charge of
this. As part of this change, the engine can now assume that every
environment has a non-empty name (and I've added asserts on the
entrypoints of the engine API to ensure that any consumer of the
engine passes a non-empty environment name)
2017-10-02 16:57:41 -07:00
Matt Ellis d29f6fc4e5 Use tokens.QName instead of string as the type for environment
Internally, the engine deals with tokens.QName and not raw
strings. Push that up to the API boundary
2017-10-02 15:14:55 -07:00
Matt Ellis c022db9285 Require environment name on deployment APIs
Deployments always need to be done in the context of some environment,
so let's make the argument explicit instead of it coming in the
property bag
2017-10-02 11:14:27 -07:00
pat@pulumi.com 69341fa7c8 push is dead; long live update.
After discussion with Joe and Luke, we've decided to use `update` instead
of `push` as it more intuitively fits the operation being performed.
2017-09-22 17:23:40 -07:00
joeduffy 2f60a414c7 Reorganize deployment commands
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit.  This
makes targets ("husks") more of a first class thing.

Namely, you must first initialize a husk before using it:

    $ coco husk init staging
    Coconut husk 'staging' initialized; ready for deployments

Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments.  The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:

    $ ... make some changes ...
    $ coco husk deploy staging
    ... standard deployment progress spew ...

Finally, should you want to teardown an entire environment:

    $ coco husk destroy staging
    ... standard deletion progress spew for all resources ...
    Coconut husk 'staging' has been destroyed!
2017-02-26 11:20:14 -08:00
joeduffy 977b16b2cc Add basic targeting capability
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update.  This simplifies the management of deployment
records, checkpoints, and snapshots.

I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming).  The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:

    nutpack/
        bin/
            Nutpack.json
            ... any other compiled artifacts ...
        husks/
            ... one snapshot per husk ...

For example, if we had a stage and prod husk, we would have:

    nutpack/
        bin/...
        husks/
            prod.json
            stage.json

In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment.  These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.

The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
2017-02-25 09:24:52 -08:00
joeduffy 14762df98b Flip the summarization polarity
This change shows detailed output -- resources, their properties, and
a full articulation of plan steps -- and permits summarization with the
--summary (or -s) flag.
2017-02-25 07:55:22 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy 86bfe5961d Implement updates
This change is a first whack at implementing updates.

Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order.  For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies).  These are just special cases of the more
general idea of performing DAG operations in dependency order.

Updates must work in terms of this more general notion.  For example:

* It is an error to delete a resource while another refers to it; thus,
  resources are deleted after deleting dependents, or after updating
  dependent properties that reference the resource to new values.

* It is an error to depend on a resource before it is created;
  thus, resources must be created before dependents are created, and/or
  before updates to existing resource properties that would cause them
  to refer to the new resource.

Of course, all of this is tangled up in a graph of dependencies.  As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.

To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
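
In outline, once the operation DAG is built, the ordering problem reduces to a standard dependency-order topological sort (a generic sketch, not the planGraph code itself):

    package deploy

    import "fmt"

    // topoSort returns nodes in dependency order: everything a node
    // depends on appears before it. Generic sketch only.
    func topoSort(nodes []string, deps map[string][]string) ([]string, error) {
        const (
            unvisited = 0
            visiting  = 1
            done      = 2
        )
        state := make(map[string]int)
        var order []string
        var visit func(n string) error
        visit = func(n string) error {
            switch state[n] {
            case visiting:
                return fmt.Errorf("dependency cycle at %s", n)
            case done:
                return nil
            }
            state[n] = visiting
            for _, d := range deps[n] { // visit dependencies first
                if err := visit(d); err != nil {
                    return err
                }
            }
            state[n] = done
            order = append(order, n) // n follows all of its dependencies
            return nil
        }
        for _, n := range nodes {
            if err := visit(n); err != nil {
                return nil, err
            }
        }
        return order, nil
    }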
2017-02-23 14:56:23 -08:00
joeduffy 8d71771391 Repivot plan/apply commands; prepare for updates
This change repivots the plan/apply commands slightly.  This is largely
in preparation for performing deletes and updates of existing environments.

The old way was slightly confusing and made things appear more "magical"
than they actually are.  Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.

The new way is to offer three commands: create, update, and delete.  Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely.  The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file plus a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.

Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands.  This will print out the plan without
actually performing any operations.

All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.
2017-02-22 11:21:26 -08:00