Commit graph

89 commits

Author SHA1 Message Date
CyrusNajmabadi
3e787779e5
Fix issue where replacements were not shown in the progress view. (#1357) 2018-05-11 16:48:05 -07:00
CyrusNajmabadi
e94e358ddf
Fix small logic bug when printing out messages. (#1348) 2018-05-09 14:25:03 -07:00
Pat Gavlin
97ace29ab1
Begin tracing Pulumi API calls. (#1330)
These changes enable tracing of Pulumi API calls.

The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.

In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
2018-05-07 18:23:03 -07:00
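As a rough illustration of the plumbing described above, here is a minimal Go sketch (hypothetical `getStack` wrapper, not the actual Pulumi client code) of an API call that picks up the span carried by its `context.Context` via opentracing-go; a call that receives `context.Background()` simply starts a new root span instead of a child span.

```go
package client

import (
	"context"
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
)

// getStack is a hypothetical API wrapper used only for illustration.
func getStack(ctx context.Context, url string) (*http.Response, error) {
	// Start a span as a child of whatever span ctx carries, or a new root
	// span if ctx has none (e.g. context.Background()).
	span, ctx := opentracing.StartSpanFromContext(ctx, "pulumi-api-get-stack")
	defer span.Finish()

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req.WithContext(ctx))
}
```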
Justin Van Patten
d1b49d25f8
pulumi new improvements (#1307)
* Initialize a new stack as part of `pulumi new`
* Prompt for values with defaults preselected
* Install dependencies
* Prompt for default config values
2018-05-07 15:31:27 -07:00
CyrusNajmabadi
092696948d
Restore streaming of plugin outputs to the progress display. (#1333) 2018-05-07 15:11:52 -07:00
joeduffy
7c7f6d3ed7 Bring back preview, swizzle some flags
This changes the CLI interface in a few ways:

* `pulumi preview` is back!  The alternative of saying
  `pulumi update --preview` just felt awkward, and it's a common
  operation to want to perform.  Let's just make it work.

* There are two flags that are consistent across all of the update
  commands (`update`, `refresh`, and `destroy`):

    - `--skip-preview` will skip the preview step.  Note that this
      does *not* skip the prompt to confirm that you'd like to proceed.
      Indeed, it will still prompt, with a little warning text about
      the fact that the preview has been skipped.

    - `--yes` will auto-approve the updates.

This lands us in a simpler and more intuitive spot for common scenarios.
2018-05-06 13:55:39 -07:00
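For illustration, a sketch of how the two flags might be registered with cobra (the command framework the CLI is built on); the command body here is a placeholder, not the real update flow.

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func newUpdateCmd() *cobra.Command {
	var skipPreview, yes bool

	cmd := &cobra.Command{
		Use:   "update",
		Short: "Update the resources in a stack",
		RunE: func(cmd *cobra.Command, args []string) error {
			if !skipPreview {
				fmt.Println("Previewing update...") // run the preview step (elided)
			}
			if !yes {
				// Prompt for confirmation, warning if the preview was skipped (elided).
				fmt.Println("Do you want to perform this update?")
			}
			// Perform the update (elided).
			return nil
		},
	}

	cmd.PersistentFlags().BoolVar(&skipPreview, "skip-preview",
		false, "Do not perform a preview before performing the update")
	cmd.PersistentFlags().BoolVar(&yes, "yes",
		false, "Automatically approve and perform the update")
	return cmd
}
```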
joeduffy
6ad785d5c4 Revise the way previews are controlled
I found the flag --force to be a strange name for skipping a preview,
since that name is usually reserved for operations that might be harmful
and yet you're coercing a tool to do it anyway, knowing there's a chance
you're going to shoot yourself in the foot.

I also found that what I almost always want in the situation where
--force was being used is to actually just run a preview and have the
confirmation auto-accepted.  Going straight to --force isn't the right
thing in a CI scenario, where you actually want to run a preview first,
just to ensure there aren't any issues, before doing the update.

In a sense, there are four options here:

1. Run a preview, ask for confirmation, then do an update (the default).
2. Run a preview, auto-accept, and then do an update (the CI scenario).
3. Just run a preview with neither a confirmation nor an update (dry run).
4. Just do an update, without performing a preview beforehand (rare).

This change enables all four workflows in our CLI.

Rather than have an explosion of flags, we have a single flag,
--preview, which can specify the mode that we're operating in.  The
following are the values which correlate to the above four modes:

1. "": default (no --preview specified)
2. "auto": auto-accept preview confirmation
3. "only": only run a preview, don't confirm or update
4. "skip": skip the preview altogether

As part of this change, I redid a bit of how the preview modes
were specified.  Rather than booleans, which had some illegal
combinations, this change introduces a new enum type.  Furthermore,
because the engine is wholly ignorant of these flags -- and only the
backend understands them -- it was confusing to me that
engine.UpdateOptions stored this flag, especially given that all
interesting engine operations _also_ accepted a dryRun boolean.  As of
this change, the backend.PreviewBehavior controls the preview options.
2018-05-06 13:55:04 -07:00
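A minimal sketch of what such an enum could look like (illustrative names, not necessarily the actual `backend.PreviewBehavior` definition), mapping the four `--preview` values to distinct modes:

```go
package backend

import "fmt"

// PreviewBehavior captures the four preview modes as a single enum rather
// than a set of booleans with illegal combinations.
type PreviewBehavior string

const (
	PreviewThenPrompt PreviewBehavior = ""     // default: preview, confirm, then update
	PreviewThenAccept PreviewBehavior = "auto" // preview, auto-accept, then update (CI)
	PreviewOnly       PreviewBehavior = "only" // dry run: preview only, no confirm or update
	SkipPreview       PreviewBehavior = "skip" // update without a preview
)

// ParsePreviewBehavior validates the value passed to --preview.
func ParsePreviewBehavior(s string) (PreviewBehavior, error) {
	switch b := PreviewBehavior(s); b {
	case PreviewThenPrompt, PreviewThenAccept, PreviewOnly, SkipPreview:
		return b, nil
	default:
		return "", fmt.Errorf("unrecognized preview behavior: %q", s)
	}
}
```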
CyrusNajmabadi
0b3395aae3
Simplify code that decides to show stack outputs. (#1332) 2018-05-06 11:30:20 -07:00
CyrusNajmabadi
aaca79ab16
Print stack outputs at the end of an update. (#1327) 2018-05-05 12:54:57 -07:00
Joe Duffy
088d6800d9
Improve two minor UX things in the CLI (#1293)
This changes three minor UX things in the CLI's update flow:

1. Use bright blue for column headers, since the dark blue is nearly
   invisible on a black background.

2. Move the operation symbol to the far left, as a sort of "bullet
   point" for each line of output.  This is how the symbols were
   initially designed, and it makes it easier to glance at the summary
   to see what's going on.

3. Shorten some of the column headers that didn't add extra clarity.
2018-04-30 12:31:57 -07:00
Sean Gillespie
14baf866f6
Snapshot management overhaul and refactor (#1273)
* Refactor the SnapshotManager interface

Lift snapshot management out of the engine by delegating it to the
SnapshotManager implementation in pkg/backend.

* Add an event interface for plugin loads and use that interface to record plugins in the snapshot

* Remove dead code

* Add comments to Events

* Add a number of tests for SnapshotManager

* CR feedback: use a successful bit on 'End' instead of having a separate 'Abort' API

* CR feedback

* CR feedback: register plugins one-at-a-time instead of the entire state at once
2018-04-25 17:20:08 -07:00
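For context, a self-contained sketch of the kind of interface this refactor introduces; `Step` and `PluginInfo` are simplified stand-ins, and the method names follow the commit description rather than the exact Pulumi source.

```go
package backend

// Step and PluginInfo are simplified stand-ins for the engine's real types.
type Step interface{ Op() string }
type PluginInfo struct{ Name, Version string }

// SnapshotManager is the callback surface the engine uses for snapshot
// bookkeeping once that responsibility is lifted out of the engine itself.
type SnapshotManager interface {
	// BeginMutation records that a mutation for the given step is about to
	// happen, persisting that intent before the step executes.
	BeginMutation(step Step) (SnapshotMutation, error)
	// RecordPlugin registers a plugin load in the snapshot, one at a time.
	RecordPlugin(plugin PluginInfo) error
	// Close flushes pending writes and releases any held resources.
	Close() error
}

// SnapshotMutation is completed by calling End with a success bit rather
// than through a separate Abort API (per the CR feedback above).
type SnapshotMutation interface {
	End(step Step, successful bool) error
}
```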
CyrusNajmabadi
3b13803c71
Add a hidden --no-interactive flag so that we can reduce interactive noise when running jenkins. (#1262) 2018-04-24 14:23:08 -07:00
CyrusNajmabadi
42d7174c70
Add a new tree-view form for the progress view. (#1260) 2018-04-24 11:13:22 -07:00
Sean Gillespie
fba87909a0
Re-introduce interface for snapshot management (#1254)
* Re-introduce interface for snapshot management

Snapshot management was done through the Update interface; this commit
splits it into a separate interface

* Put the SnapshotManager instance onto the engine context

* Remove SnapshotManager from planContext and updateActions now that it can be accessed by engine Context
2018-04-23 14:12:13 -07:00
CyrusNajmabadi
7807edc166
Simplifying progress code (#1253) 2018-04-22 18:10:19 -07:00
Matt Ellis
15e2ad27fe Address code review feedback 2018-04-20 02:34:10 -04:00
Matt Ellis
cc938a3bc8 Merge remote-tracking branch 'origin/master' into ellismg/identity 2018-04-20 01:56:41 -04:00
Matt Ellis
04e5dfde5f Address code review feedback 2018-04-20 01:31:14 -04:00
Pat Gavlin
4fa69bfd72
Plumb basic cancellation through the engine. (#1231)
These changes plumb basic support for cancellation through the engine.
Two types of cancellation are supported for all engine operations:
- Cancellation, which waits for the operation to drive itself to a safe
  point before the operation returns, and
- Termination, which does not wait for the operation to drive itself
  to a safe point before the operation returns.

When updating local or managed stacks, a single ^C triggers cancellation
of any running operation; a second ^C will trigger termination.

Fixes #513, #1077.
2018-04-19 18:59:14 -07:00
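A small sketch of the two-level ^C behavior described above, using plain `context` and `os/signal` (the actual engine wiring differs): the first interrupt requests graceful cancellation, the second requests immediate termination.

```go
package main

import (
	"context"
	"os"
	"os/signal"
)

// watchInterrupts cancels on the first ^C and terminates on the second.
func watchInterrupts(cancel, terminate context.CancelFunc) {
	sigs := make(chan os.Signal, 2)
	signal.Notify(sigs, os.Interrupt)
	go func() {
		<-sigs
		cancel() // first ^C: let the operation reach a safe point, then stop
		<-sigs
		terminate() // second ^C: stop without waiting for a safe point
	}()
}

func main() {
	cancelCtx, cancel := context.WithCancel(context.Background())
	termCtx, terminate := context.WithCancel(context.Background())
	defer terminate()
	defer cancel()

	watchInterrupts(cancel, terminate)

	// The engine would check cancelCtx.Done() at safe points and
	// termCtx.Done() everywhere else (elided).
	_, _ = cancelCtx, termCtx
}
```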
CyrusNajmabadi
e8485c2388
Lighten our dependency on the docker cli (#1238) 2018-04-19 15:55:24 -07:00
CyrusNajmabadi
ffcd9fde7b
Show stdout events at the bottom of the progress view. (#1236) 2018-04-19 14:44:53 -07:00
CyrusNajmabadi
41dbd9a62e
Cleanup work before putting in the ability to have system messages shown in the progress view. (#1234) 2018-04-19 12:31:47 -07:00
Joe Duffy
3b8d0e6d96
Merge pull request #1148 from pulumi/1081_refresh_cmd
Implement a refresh command
2018-04-18 12:12:33 -07:00
joeduffy
b77403b4bb Implement a refresh command
This change implements a `pulumi refresh` command.  It operates a bit
like `pulumi update`, and friends, in that it supports `--preview` and
`--diff`, along with the usual flags, and will update your checkpoint.

It works through substitution of the deploy.Source abstraction, which
generates a sequence of resource registration events.  This new
deploy.RefreshSource takes in a prior checkpoint and will walk it,
refreshing the state via the associated resource providers by invoking
Read for each resource encountered, and merging the resulting state with
the prior checkpoint, to yield a new resource.Goal state.  This state is
then fed through the engine in the usual ways with a few minor caveats:
namely, although the engine must generate steps for the logical
operations (permitting us to get nice summaries, progress, and diffs),
it mustn't actually carry them out because the state being imported
already reflects reality (a deleted resource has *already* been deleted,
so of course the engine need not perform the deletion).  The diffing
logic also needs to know how to treat the case of refresh slightly
differently, because we are going to be diffing outputs and not inputs.

Note that support for managed stacks is not yet complete, since that
requires updates to the service to support a refresh endpoint.  That
will be coming soon ...
2018-04-18 10:57:16 -07:00
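To make the shape of the refresh walk concrete, here is a heavily simplified sketch; `State` and `Provider` are stand-ins for the engine's real abstractions, not the actual `deploy.RefreshSource` implementation.

```go
package deploy

import "context"

// State and Provider are simplified stand-ins for the engine's real types.
type State struct {
	URN, ID string
	Outputs map[string]interface{}
}

type Provider interface {
	// Read fetches the live state of a resource from the cloud provider.
	Read(ctx context.Context, urn, id string) (map[string]interface{}, error)
}

// refreshState walks each resource in the prior checkpoint, Reads its live
// state, and merges the result back in. A nil result means the resource no
// longer exists, so it is simply dropped; no engine-side deletion is needed.
func refreshState(ctx context.Context, prov Provider, prior []*State) ([]*State, error) {
	var refreshed []*State
	for _, old := range prior {
		outputs, err := prov.Read(ctx, old.URN, old.ID)
		if err != nil {
			return nil, err
		}
		if outputs == nil {
			continue // already deleted in the cloud
		}
		refreshed = append(refreshed, &State{URN: old.URN, ID: old.ID, Outputs: outputs})
	}
	return refreshed, nil
}
```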
Matt Ellis
56d7f8eb24 Support new stack identity for the cloud backend
This change introduces support for using the cloud backend when
`pulumi init` has not been run. When this is the case, we use the new
identity model, where a stack is referenced by an owner and a stack
name only.

There are a few things going on here:

- We add a new `--owner` flag to `pulumi stack init` that lets you
  control what account a stack is created in.

- When listing stacks, we show stacks owned by you and any
  organizations you are a member of. So, for example, I can do:

  * `pulumi stack init my-great-stack`
  * `pulumi stack init --owner pulumi my-great-stack`

  to create a stack owned by my user and one owned by my
  organization. When `pulumi stack ls` is run, you'll see both
  stacks (since they are part of the same project).

- When spelling a stack on the CLI, an owner can be optionally
  specified by prefixing the stack name with an owner name. For
  example `my-great-stack` means the stack `my-great-stack` owned by
  the currently logged-in user, whereas `pulumi/my-great-stack` would
  be the stack owned by the `pulumi` organization.

- `--all` can be passed to `pulumi stack ls` to see *all* stacks you
  have access to, not just stacks tied to the current project.
2018-04-18 04:54:02 -07:00
Matt Ellis
c0b2c4f17f Introduce backend.StackReference
Long term, a stack name alone will not be sufficient to address a
stack. Introduce a new `backend.StackReference` interface that allows
each backend to give an opaque stack reference that can be used across
operations.
2018-04-18 04:54:02 -07:00
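A small sketch of how such a reference could look for the cloud backend under the new identity model (hypothetical type and helper names): an opaque interface plus a parser that resolves `my-great-stack` to the current user and `pulumi/my-great-stack` to the `pulumi` organization.

```go
package backend

import "strings"

// StackReference is an opaque, backend-specific way to address a stack.
type StackReference interface {
	// String returns the fully qualified name the user can type on the CLI.
	String() string
}

// cloudStackReference addresses a stack by owner and stack name.
type cloudStackReference struct {
	owner, name string
}

func (r cloudStackReference) String() string { return r.owner + "/" + r.name }

// parseStackReference resolves an optionally owner-qualified stack name.
func parseStackReference(s, currentUser string) StackReference {
	if owner, name, ok := strings.Cut(s, "/"); ok {
		return cloudStackReference{owner: owner, name: name}
	}
	return cloudStackReference{owner: currentUser, name: s}
}
```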
Matt Ellis
bac02f1df1 Remove the need to pulumi init for the local backend
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation for the next
set of changes, which will stop having `pulumi init` be used for cloud
stacks as well.

Previously, `pulumi init` logically did two things:

1. It created the bookkeeping directory for local stacks; this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we believed the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.

2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com.

The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.

However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somewhere.

For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.

For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.

This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).

With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
2018-04-18 04:53:49 -07:00
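For illustration, a sketch of the workspace-settings naming scheme described above (the exact file name format is an assumption): the project name plus the SHA1 of the project file's path, placed under `~/.pulumi/workspaces`.

```go
package workspace

import (
	"crypto/sha1"
	"fmt"
	"os"
	"path/filepath"
)

// settingsFileName combines the project name with the SHA1 of the project
// file's path so that two projects with the same name but different
// locations on disk get distinct workspace settings files.
func settingsFileName(projectName, projectPath string) string {
	sum := sha1.Sum([]byte(projectPath))
	return fmt.Sprintf("%s-%x-workspace.json", projectName, sum)
}

// settingsFilePath returns the full path under ~/.pulumi/workspaces.
func settingsFilePath(projectName, projectPath string) (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".pulumi", "workspaces",
		settingsFileName(projectName, projectPath)), nil
}
```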
Pat Gavlin
a4d6cba664 Add a Version field to UntypedDeployment.
This field indicates the schema of the serialized deployment. This field
behaves identically to the `Version` field of
`PatchUpdateCheckpointRequest`.

This is part of pulumi/pulumi-service#1046
2018-04-17 16:23:20 -07:00
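A sketch of the described shape (field names illustrative): a schema version alongside the raw serialized deployment.

```go
package apitype

import "encoding/json"

// UntypedDeployment pairs a schema version with the raw deployment document
// serialized at that version.
type UntypedDeployment struct {
	// Version indicates the schema of the Deployment field below.
	Version int `json:"version,omitempty"`
	// Deployment is the opaque, version-specific serialized deployment.
	Deployment json.RawMessage `json:"deployment,omitempty"`
}
```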
CyrusNajmabadi
4a5e72fb16
Progress tweaks (#1219) 2018-04-16 23:41:00 -07:00
Sean Gillespie
55711e4ca3
Revert "Lift snapshot management out of the engine and serialize writes to snapshot (#1069)" (#1216)
This reverts commit 2c479c172d.
2018-04-16 23:04:56 -07:00
CyrusNajmabadi
0bd3036115
Small progress tweaks. (#1218) 2018-04-16 19:46:57 -07:00
CyrusNajmabadi
541b8b3f4e
Switch to a new grid-view for the progress display. (#1201) 2018-04-15 12:47:53 -07:00
CyrusNajmabadi
f2b9bd4b13
Remove the explicit 'pulumi preview' command. (#1170)
Old command still exists, but tells you to run "pulumi update --preview".
2018-04-13 22:26:01 -07:00
CyrusNajmabadi
7b96b8cdcf
Produce a single message for the text we receive when running, not a message per line of output. (#1191) 2018-04-13 15:44:35 -07:00
CyrusNajmabadi
8370a0a38e
Improve how we print out failures when doing progress updates. (#1159)
Preemptively merging in.  Let me know if there are any changes you want me to make.
2018-04-12 10:56:39 -07:00
Sean Gillespie
2c479c172d
Lift snapshot management out of the engine and serialize writes to snapshot (#1069)
* Lift snapshot management out of the engine

This PR is a prerequisite for parallelism by addressing a major problem
that the engine has to deal with when performing parallel resource
construction: parallel mutation of the global snapshot. This PR adds
a `SnapshotManager` type that is responsible for maintaining and
persisting the current resource snapshot. It serializes all reads and
writes to the global snapshot and persists the snapshot to persistent
storage upon every write.

As a side-effect of this, the core engine no longer needs to know about
snapshot management at all; all snapshot operations can be handled as
callbacks on deployment events. This will greatly simplify the
parallelization of the core engine.

Worth noting is that the core engine will still need to be able to read
the current snapshot, since it is interested in the dependency graphs
contained within. The full implications of that are out of scope of this
PR.

Remove dead code, Steps no longer need a reference to the plan iterator that created them

Fixing various issues that arise when bringing up pulumi-aws

Line length broke the build

Code review: remove dead field, fix yaml name error

Rebase against master, provide implementation of StackPersister for cloud backend

Code review feedback: comments on MutationStatus, style in snapshot.go

Code review feedback: move SnapshotManager to pkg/backend, change engine to use an interface SnapshotManager

Code review feedback: use a channel for synchronization

Add a comment and a new test

* Maintain two checkpoints, an immutable base and a mutable delta, and
periodically merge the two to produce snapshots

* Add a lot of tests - covers all of the non-error paths of BeginMutation and End

* Fix a test resource provider

* Add a few tests, fix a few issues

* Rebase against master, fixed merge
2018-04-12 09:55:34 -07:00
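To make the serialization idea concrete, here is a self-contained sketch (simplified types, not the actual `SnapshotManager`): all mutations are funneled through a single goroutine via a channel, and the snapshot is persisted after every write.

```go
package backend

import "sync"

// Snapshot and StackPersister are simplified stand-ins for the real types.
type Snapshot struct{ Resources []string }

type StackPersister interface {
	Save(snap *Snapshot) error
}

// serialSnapshotManager serializes all snapshot mutations through a channel
// so parallel resource operations never race on the global snapshot.
type serialSnapshotManager struct {
	requests chan func(*Snapshot)
	done     sync.WaitGroup
}

func newSerialSnapshotManager(persister StackPersister, snap *Snapshot) *serialSnapshotManager {
	m := &serialSnapshotManager{requests: make(chan func(*Snapshot))}
	m.done.Add(1)
	go func() {
		defer m.done.Done()
		for mutate := range m.requests {
			mutate(snap)             // apply the mutation to the snapshot
			_ = persister.Save(snap) // persist after every write (error handling elided)
		}
	}()
	return m
}

// Mutate applies fn to the snapshot; calls are serialized by the channel.
func (m *serialSnapshotManager) Mutate(fn func(*Snapshot)) {
	m.requests <- fn
}

// Close stops the worker once all pending mutations have been applied.
func (m *serialSnapshotManager) Close() {
	close(m.requests)
	m.done.Wait()
}
```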
Chris Smith
ab2385437a
Validate stack properties like names, runtime, etc. (#1146)
* Validate stack properties like names, runtime, etc.

* Fix build error
2018-04-11 10:08:32 -07:00
CyrusNajmabadi
a759f2e085
Switch to a resource-progress oriented view for pulumi preview/update/destroy (#1116) 2018-04-10 12:03:11 -07:00
Luke Hoban
31b6aa899a
Fix the indentation of output property rendering (#1127)
We indent the input properties one level further than the base indent, but were not doing the same for output properties.
2018-04-05 22:27:04 -07:00
Matt Ellis
d3240fdc64 Require pulumi login before commands that need a backend
This change does three major things:

1. Removes the ability to be logged into multiple clouds at the same
time. Previously, we supported being logged into multiple clouds at
the same time and the CLI would fan out requests and join responses
when needed. In general, this was only useful for Pulumi employees
that wanted run against multiple copies of the service (say production
and staging) but overall was very confusing (for example in the old
world a stack with the same identity could appear twice (since it was
in two backends) which the CLI didn't handle very well).

2. Stops treating the "local" backend as a special thing, from the
point of view of the CLI. Previously, we'd always connect to the local
backend and merge that data with whatever was in the clouds we were
connected to. We had gestures like `--local` in `pulumi stack init`
that meant "use the local mode". Instead, to use the local mode you now
run `pulumi login --cloud-url local://`, and then you are logged into
the local backend. Since you can only ever be logged into a single
backend, we can remove the `--local` and `--remote` flags from `pulumi
stack init`; it now just requires you to be logged in and creates a
stack in whatever backend you are logged into. When logging into the
local backend, you are not prompted for an access key.

3. Prompt for login in places where you have to log in, if you are not
already logged in.
2018-04-05 10:19:41 -07:00
CyrusNajmabadi
4b761f9fc1
Include richer information in events so that the final display can flexibly choose how to present it. (#1088) 2018-03-31 12:08:48 -07:00
Pat Gavlin
713a38dcb6
Merge pull request #1083 from pulumi/GetStack
Remove some calls to `...repo/project/stacks`.
2018-03-28 15:48:33 -07:00
Pat Gavlin
7fb48b7658 Fix the local implementation of GetStack. 2018-03-28 15:22:13 -07:00
joeduffy
8b5874dab5 General prep work for refresh
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):

* The primary change is to change the way the engine's core update
  functionality works with respect to deploy.Source.  This is the
  way we can plug in new sources of resource information during
  planning (and, soon, diffing).  The way I intend to model refresh
  is by having a new kind of source, deploy.RefreshSource, which
  will let us do virtually everything about an update/diff the same
  way with refreshes, which avoids otherwise duplicative effort.

  This includes changing the planOptions (nee deployOptions) to
  take a new SourceFunc callback, which is responsible for creating
  a source specific to the kind of plan being requested.

  Preview, Update, and Destroy now are primarily differentiated by
  the kind of deploy.Source that they return, rather than sprinkling
  things like `if Destroying` throughout.  This tidies up some logic
  and, more importantly, gives us precisely the refresh hook we need.

* Originally, we used the deploy.NullSource for Destroy operations.
  This simply returns nothing, which is how Destroy works.  For some
  reason, we were no longer doing this, and instead had some
  `if Destroying` cases sprinkled throughout the deploy.EvalSource.
  I think this is a vestige of some old way we did configuration, at
  least judging by a comment, which is apparently no longer relevant.

* Move diff and diff-printing logic within the engine into its own
  pkg/engine/diff.go file, to prepare for upcoming work.

* I keep noticing benign diffs anytime I regenerate protobufs.  I
  suspect this is because we're each on different versions.  I changed
  generate.sh to also dump the version into grpc_version.txt.  At
  least we can understand where the diffs are coming from, decide
  whether to take them (i.e., a newer version), and ensure that as
  a team we are monotonically increasing, and not going backwards.

* I also tidied up some tiny things I noticed while in there, like
  comments, incorrect types, lint suppressions, and so on.
2018-03-28 07:45:23 -07:00
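A compact sketch of the SourceFunc idea (illustrative signatures, not the real `deploy` package): each kind of plan supplies the source appropriate to it, so Destroy can simply hand back a source that produces nothing.

```go
package engine

import "context"

// Source yields resource registration events for a plan; SourceFunc creates
// the Source appropriate to the kind of plan being requested.
type Source interface {
	// Next returns the next registration event, or nil when there are none.
	Next(ctx context.Context) (interface{}, error)
}

type SourceFunc func(ctx context.Context) (Source, error)

// nullSource returns nothing, which is exactly what a Destroy plan wants.
type nullSource struct{}

func (nullSource) Next(ctx context.Context) (interface{}, error) { return nil, nil }

// destroySourceFunc is the SourceFunc a destroy plan would supply.
func destroySourceFunc(ctx context.Context) (Source, error) {
	return nullSource{}, nil
}
```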
Pat Gavlin
75d75a41c0 Enable logs for managed stacks.
Simply fetch the checkpoint and use the local config to process logs.

Fixes #1078.
2018-03-27 14:29:10 -07:00
Pat Gavlin
58300f9ee7
Add support for service-managed stacks. (#1067)
These changes add the API types and cloud backend code necessary to
interact with service-managed stacks (i.e. stacks that do not have
PPC-managed deployments). The bulk of these changes are unremarkable:
the API types are straightforward, as are most of the interactions with
the new APIs. The trickiest bits are token and log management.

During an update to a managed stack, the CLI must continually renew the
token used to authorize the operations on that stack that comprise the
update. Once a token has been renewed, the old token should be
discarded. The CLI supports this by running a goroutine that is
responsible for both periodically renewing the token for an update and
servicing requests for the token itself from the rest of the backend.

In addition to token renewal, log output must be captured and uploaded
to the service during an update to a managed stack. Implementing this in
a reasonable fashion required a bit of refactoring in order to reuse
what already exists for the local backend. Each event-specific `Display`
function was replaced with an equivalent `Render` function that returns
a string rather than writing to a stream. This approach was chosen
primarily to avoid dealing with sheared colorization tags, which would
otherwise require clients to fuse log lines before colorizing. We could
take that approach in the future.
2018-03-22 10:42:43 -07:00
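The token renewal piece can be pictured with a small sketch (hypothetical names; `renewTokenFunc` stands in for the real service call): one goroutine periodically renews the token and serves the latest value to callers over a channel.

```go
package client

import (
	"context"
	"time"
)

// renewTokenFunc stands in for the service API call that exchanges the
// current update token for a fresh one.
type renewTokenFunc func(ctx context.Context, current string) (string, error)

// tokenSource renews the update token periodically and serves the most
// recent token to the rest of the backend.
type tokenSource struct {
	requests chan chan string
}

func newTokenSource(ctx context.Context, initial string, renew renewTokenFunc,
	interval time.Duration) *tokenSource {

	ts := &tokenSource{requests: make(chan chan string)}
	go func() {
		token := initial
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				if fresh, err := renew(ctx, token); err == nil {
					token = fresh // the old token is discarded once renewed
				}
			case reply := <-ts.requests:
				reply <- token // serve the current token to a caller
			case <-ctx.Done():
				return
			}
		}
	}()
	return ts
}

// Token returns the most recently renewed token.
func (ts *tokenSource) Token() string {
	reply := make(chan string, 1)
	ts.requests <- reply
	return <-reply
}
```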
Pat Gavlin
a23b10a9bf
Update the copyright end date to 2018. (#1068)
Just what it says on the tin.
2018-03-21 12:43:21 -07:00
Matt Ellis
cd64462a9d Export display helper functions
This allows some upstack components that need to consume engine events
to use the common display logic we have here.
2018-03-14 14:02:30 -07:00
Matt Ellis
936cab0c22 Add a version property to checkpoints
This takes the existing `apitype.Checkpoint` type and renames it to
`apitype.CheckpointV1`, locking in its shape. In addition, we introduce
an `apitype.VersionedCheckpoint` type, which holds a version number and
a JSON document representing a checkpoint at that version. Now, when
reading a checkpoint, the CLI can determine if it's in a format it
understands, and fail gracefully if it is not.

While the CLI understands the older checkpoint version, it always
writes the newest version format, meaning that if you manage a
fire-and-forget stack with this version of the CLI, it will be
unreadable by previous versions.

Stacks managed by Pulumi.com are not impacted by this change.

Fixes: #887
2018-03-10 13:03:05 -08:00
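A minimal sketch of the versioned wrapper and a reader that fails gracefully on unknown versions (field and function names illustrative):

```go
package apitype

import (
	"encoding/json"
	"fmt"
)

// VersionedCheckpoint wraps a raw checkpoint document with its schema version.
type VersionedCheckpoint struct {
	Version    int             `json:"version"`
	Checkpoint json.RawMessage `json:"checkpoint"`
}

// CheckpointV1 stands in for the locked-in V1 shape.
type CheckpointV1 struct {
	Stack string `json:"stack"`
	// remaining fields elided
}

// loadCheckpoint decodes a checkpoint, failing gracefully when the version
// is newer than this CLI understands.
func loadCheckpoint(data []byte) (*CheckpointV1, error) {
	var versioned VersionedCheckpoint
	if err := json.Unmarshal(data, &versioned); err != nil {
		return nil, err
	}
	switch versioned.Version {
	case 1:
		var chk CheckpointV1
		if err := json.Unmarshal(versioned.Checkpoint, &chk); err != nil {
			return nil, err
		}
		return &chk, nil
	default:
		return nil, fmt.Errorf("unsupported checkpoint version %d; please update the CLI", versioned.Version)
	}
}
```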
Sean Gillespie
703a954839
Improve error messages output by the CLI (#1011)
* Improve error messages output by the CLI

This fixes a couple known issues with the way that we present errors
from the Pulumi CLI:
    1. Any errors from RPC endpoints were bubbling up as-is to the
    top level, which was unfortunate because they contained
    RPC-specific noise that we don't want to present to the user. This
    commit unwraps errors from resource providers.
    2. The "catastrophic error" message often got printed twice
    3. Fatal errors are often printed twice, because our CLI top-level
    prints out the fatal error that it receives before exiting. A lot of
    the time this error has already been printed.
    4. Errors were prefixed by PU####.

* Feedback: Omit the 'catastrophic' error message and use a less verbose error message as the final error

* Code review feedback: interpretRPCError -> resourceStateAndError

* Code review feedback: deleting some commented-out code, error capitalization

* Cleanup after rebase
2018-03-09 15:43:16 -08:00
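As an illustration of the RPC unwrapping in item (1), a sketch using `grpc/status`; the real `resourceStateAndError` also recovers the resource state, so the function name and body below are assumptions showing only the error-message part.

```go
package diag

import (
	"errors"

	"google.golang.org/grpc/status"
)

// stripRPCNoise returns just the provider's message from a gRPC error,
// dropping the RPC-specific prefix that users shouldn't have to see.
func stripRPCNoise(err error) error {
	if s, ok := status.FromError(err); ok {
		return errors.New(s.Message())
	}
	return err
}
```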