* Delete Before Create
This commit implements the full semantics of delete before create. If a
resource is replaced and requires deletion before creation, the engine
will use the dependency graph saved in the snapshot to delete all
resources that depend on the resource being replaced prior to the
deletion of the resource to be replaced.
* Rebase against master
* CR: Simplify the control flow in makeRegisterResourceSteps
* Run Check on new inputs when re-creating a resource
* Fix an issue where the planner emitted benign but incorrect deletes of DBR-deleted resources
* CR: produce the list of dependent resources in dependency order and iterate over the list in reverse
* CR: deps->dependents, fix an issue with DependingOn where duplicate nodes could be added to the dependent set
* CR: Fix an issue where we were considering old defaults and new inputs
inappropriately when re-creating a deleted resource
* CR: save 'iter.deletes[urn]' as a local, iterate starting at cursorIndex + 1 for dependency graph
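A minimal Go sketch of the dependent-ordering logic described above; the `Resource` type and `dependingOn` helper are hypothetical stand-ins for the engine's snapshot and dependency-graph types:
```
package main

import "fmt"

// Resource is a hypothetical stand-in for a resource state in the
// snapshot's dependency graph.
type Resource struct {
	URN  string
	Deps []*Resource // resources this resource depends on
}

// dependingOn returns the transitive dependents of target in dependency
// order, adding each node at most once ('all' is assumed to be
// topologically sorted, dependencies first).
func dependingOn(all []*Resource, target *Resource) []*Resource {
	dependent := map[*Resource]bool{target: true}
	var out []*Resource
	for _, r := range all {
		for _, d := range r.Deps {
			if dependent[d] && !dependent[r] {
				dependent[r] = true
				out = append(out, r)
			}
		}
	}
	return out
}

func main() {
	a := &Resource{URN: "a"}
	b := &Resource{URN: "b", Deps: []*Resource{a}}
	c := &Resource{URN: "c", Deps: []*Resource{b}}
	// Dependents are deleted in reverse dependency order, then the
	// replaced resource itself.
	dependents := dependingOn([]*Resource{a, b, c}, a)
	for i := len(dependents) - 1; i >= 0; i-- {
		fmt.Println("delete", dependents[i].URN)
	}
	fmt.Println("delete", a.URN)
}
```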
At some point the summary and preview parameters to
`engine.GetResourcePropertiesDetails` were flipped such that we stopped
rendering computed<> and output<> values properly during previews.
We currently display at most two rows for messages that are not
associated with a URN: one for those that arrive before the stack
resource is first observed and one for those that arrive after the stack
resource is first observed. The former is labeled "global"; the latter
is the row associated with the stack resource. These changes remove the
duplication by reassociating the row used for events with no URN that
arrive before the stack resource with the stack resource once it has
been observed.
Fixes #1385.
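A rough sketch of the reassociation, with hypothetical display types (the real renderer's structures differ):
```
package progress

// row and display are illustrative types, not the CLI's actual ones.
type row struct{ lines []string }

type display struct {
	rows map[string]*row // keyed by URN; "" holds URN-less events
}

// observeStack reassociates the "global" row (events that arrived with
// no URN before the stack resource appeared) with the stack resource,
// eliminating the duplicate row.
func (d *display) observeStack(stackURN string) {
	if global, ok := d.rows[""]; ok {
		d.rows[stackURN] = global
		delete(d.rows, "")
	}
}
```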
Our current strategy for the progress display on non-TTYs causes us to
display multiple identical rows for each resource when the row is a
preview or a no-op. This behavior is not particularly useful, and
generally just makes the display noisier than it needs to be.
These changes avoid displaying resource rows without meaningful output
by suppressing the display of resource output events that are delivered
during a preview or that correspond to a no-op update.
Fixes #1384.
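The suppression rule, sketched with illustrative types:
```
package progress

// Op is an illustrative stand-in for the engine's step operation type.
type Op string

const OpSame Op = "same"

// showOutputs applies the rule above: suppress resource output events
// during previews and for no-op updates, so their rows never render.
func showOutputs(isPreview bool, op Op) bool {
	return !isPreview && op != OpSame
}
```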
This changes two things:
1) Eliminates the two distinct kinds of previews our engine used to have.
2) Always initializes plugin.Events, to ensure that all plugin loads
are persisted no matter the update type (update, refresh, destroy),
except when dryRun == true, since we won't save them in that case.
The PluginEvents will now try to register loaded plugins which,
during a refresh preview, will result in attempting to save mutations
when a token is missing. This change mirrors the changes made to
destroy, which avoid a similar panic by simply leaving
PluginEvents unset. Also adds a bit of tracing that was helpful to
me as I debugged through the underlying issues.
Fixes #1377.
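A sketch of the initialization rule, with hypothetical names standing in for plugin.Events:
```
package engine

// Events is a hypothetical stand-in for the plugin.Events interface.
type Events interface {
	OnPluginLoaded(name, version string) error
}

type snapshotRecorder struct{ /* persists plugin loads */ }

func (snapshotRecorder) OnPluginLoaded(name, version string) error { return nil }

// newPluginEvents applies the rule above: leave the hook nil during a
// dry run so nothing ever attempts to save mutations in a preview.
func newPluginEvents(dryRun bool) Events {
	if dryRun {
		return nil // callers treat a nil Events as "don't record"
	}
	return snapshotRecorder{}
}
```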
* Graceful RPC shutdown: CLI side
* Handle unavailable resource monitor in language hosts
* Fix a comment
* Don't commit package-lock.json
* fix mangled pylint pragma
* Rebase against master and fix Gopkg.lock
* Code review feedback
* Fix a race between closing the callerEventsOpt channel and terminating a goroutine that writes to it
* glog -> logging
During a deployment, we end up making a bunch of PATCH/PUT/POST style
REST requests. For these calls, it should be safe to retry the
operation if there was a hiccup during our REST call, so mark them as
retryable.
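A minimal sketch of what such a retry wrapper could look like; the attempt count and backoff here are illustrative, not the actual policy:
```
package client

import (
	"net/http"
	"time"
)

// doWithRetry retries a PATCH/PUT/POST request on transient failures.
// Note that a real implementation must rewind the request body between
// attempts (e.g. via req.GetBody).
func doWithRetry(c *http.Client, req *http.Request) (*http.Response, error) {
	var resp *http.Response
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err = c.Do(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	return resp, err
}
```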
These changes add support for adding a tracing header to API requests
made to the Pulumi service. Setting the `PULUMI_TRACING_HEADER`
environment variable or enabling debug commands and passing the
`--tracing-header` flag will change the value sent in this header. Setting
this value to `1` will request that the service enable distributed
tracing for all requests made by a particular CLI invocation.
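A sketch of the client-side behavior; the header name below is an assumption for illustration, only the environment variable name comes from this change:
```
package client

import (
	"net/http"
	"os"
)

// addTracingHeader forwards the value of PULUMI_TRACING_HEADER to the
// service. The header name "X-Pulumi-Tracing" is assumed here, not
// taken from the actual change.
func addTracingHeader(req *http.Request) {
	if v := os.Getenv("PULUMI_TRACING_HEADER"); v != "" {
		req.Header.Set("X-Pulumi-Tracing", v)
	}
}
```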
The newly added `pulumi config refresh` updates your local copy of the
Pulumi.<stack-name>.yaml file to have the same configuration as the
most recent deployment in the cloud.
This can be used in a variety of ways. One place we plan to use it is
in automation to clean up "leaked" stacks we have in CI. With these
changes, you'll now be able to do the following:
```
$ cd $(mktemp -d)
$ echo -e "name: who-cares\nruntime: nodejs" > Pulumi.yaml
$ pulumi stack select <leaked-stack-name>
$ pulumi config refresh -f
$ pulumi destroy --force
```
Having a simpler gesture for the above is something we'll want to do
long term (we should be able to support `pulumi destroy <stack-name>`
from a completely empty folder, today you need a Pulumi.yaml file
present, even if the contents don't matter).
But this gets us a little closer to where we want to be and introduces
a helpful primitive in the system.
Contributes to #814
These changes add support for injecting client tracing spans into HTTP
requests to the Pulumi API. The server can then rematerialize these span
references and attach its own spans for distributed tracing.
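For example, with the OpenTracing Go client this injection could look like the following sketch:
```
package client

import (
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
)

// injectSpan serializes the given span's context into the outgoing
// request headers so the server can rematerialize it and attach its
// own child spans.
func injectSpan(span opentracing.Span, req *http.Request) error {
	return opentracing.GlobalTracer().Inject(
		span.Context(),
		opentracing.HTTPHeaders,
		opentracing.HTTPHeadersCarrier(req.Header))
}
```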
This change captures the Git committer and author's login and email
addresses, so that we can display them prominently in the service. At
the moment, we only attribute updates to the identity that performed the
update which, in CI scenarios, is often the same person for an
organization. This makes Pulumi look like a needlessly lonely place.
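For illustration, these identities can be read from the most recent commit with standard git format specifiers; the actual change may gather them differently:
```
package main

import (
	"fmt"
	"os/exec"
)

// Print the author and committer of the last commit by shelling out
// to git (assumes git is on PATH and we're inside a repository).
func main() {
	out, err := exec.Command("git", "log", "-1",
		"--format=author: %an <%ae>%ncommitter: %cn <%ce>").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```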
These changes enable tracing of Pulumi API calls.
The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.
In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
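A sketch of the plumbing pattern, using the OpenTracing helper for deriving spans from a context:
```
package client

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

// apiCall shows the pattern: every API call takes a context.Context
// and derives a child span from whatever span the context carries;
// with context.Background() it starts a new root span but is still
// traced.
func apiCall(ctx context.Context, path string) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, "api:"+path)
	defer span.Finish()
	// ... issue the HTTP request using ctx ...
	_ = ctx
	return nil
}
```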
* Initialize a new stack as part of `pulumi new`
* Prompt for values with defaults preselected
* Install dependencies
* Prompt for default config values
This changes the CLI interface in a few ways:
* `pulumi preview` is back! The alternative of saying
`pulumi update --preview` just felt awkward, and it's a common
operation to want to perform. Let's just make it work.
* There are two flags consistent across all update commands,
`update`, `refresh`, and `destroy`:
- `--skip-preview` will skip the preview step. Note that this
does *not* skip the prompt to confirm that you'd like to proceed.
Indeed, it will still prompt, with a little warning text about
the fact that the preview has been skipped.
- `--yes` will auto-approve the updates.
This lands us in a simpler and more intuitive spot for common scenarios.
I found the flag --force to be a strange name for skipping a preview,
since that name is usually reserved for operations that might be harmful
and yet you're coercing a tool to do it anyway, knowing there's a chance
you're going to shoot yourself in the foot.
I also found that what I almost always want in the situation where
--force was being used is to actually just run a preview and have the
confirmation auto-accepted. Going straight to --force isn't the right
thing in a CI scenario, where you actually want to run a preview first,
just to ensure there aren't any issues, before doing the update.
In a sense, there are four options here:
1. Run a preview, ask for confirmation, then do an update (the default).
2. Run a preview, auto-accept, and then do an update (the CI scenario).
3. Just run a preview with neither a confirmation nor an update (dry run).
4. Just do an update, without performing a preview beforehand (rare).
This change enables all four workflows in our CLI.
Rather than have an explosion of flags, we have a single flag,
--preview, which can specify the mode that we're operating in. The
following are the values which correlate to the above four modes:
1. "": default (no --preview specified)
2. "auto": auto-accept preview confirmation
3. "only": only run a preview, don't confirm or update
4. "skip": skip the preview altogether
As part of this change, I redid a bit of how the preview modes
were specified. Rather than booleans, which had some illegal
combinations, this change introduces a new enum type. Furthermore,
because the engine is wholly ignorant of these flags -- and only the
backend understands them -- it was confusing to me that
engine.UpdateOptions stored this flag, especially given that all
interesting engine options _also_ accepted a dryRun boolean. As of
this change, the backend.PreviewBehavior controls the preview options.
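A sketch of what such an enum might look like (names are illustrative; the real definitions live in pkg/backend):
```
package backend

// PreviewBehavior replaces the old booleans, ruling out the illegal
// combinations they allowed. Each value maps to a --preview mode.
type PreviewBehavior int

const (
	PreviewDefault PreviewBehavior = iota // "": preview, confirm, update
	PreviewAuto                           // "auto": auto-accept the preview
	PreviewOnly                           // "only": preview, no update
	PreviewSkip                           // "skip": update without a preview
)
```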
The service has no good way to preview a destroy operation for a stack
managed by a PPC. Until it does, just behave as if --force was passed
to the CLI in this case (i.e. skip the preview).
Fixes #1301
When linking back to the service, use the newer, simpler, URLs for a
stack: `https://pulumi.com/<owner>/<stack-name>` instead of
`https://pulumi.com/<owner>/-/-/<stack-name>`
Refactoring to support "preview" and "update" as part of the same
operation interacted poorly with deploying into a PPC via the service.
Per Pat, this is the quickest fix that gets us off the floor.
Part of #1301
This changes three minor UX things in the CLI's update flow:
1. Use bright blue for column headers, since the dark blue is nearly
invisible on a black background.
2. Move the operation symbol to the far left, as a sort of "bullet
point" for each line of output. This was how they initial symbols
were designed and this helps to glance at the summary to see what's
going on.
3. Shorten some of the column headers that didn't add extra clarity.
Pat ran into a weird error when trying to do some development against
the testing cloud:
```
$ pulumi logout
$ pulumi login --cloud-url [test-cloud-url]
Logged into [test-cloud-url]
$ pulumi stack ls
Enter your Pulumi access token from https://pulumi.com/account:
```
In his case, we did not have `current` set in our credentials.json
file (likely because he had called `pulumi logout` at some point), but
we did have stored credentials for that cloud. When he logged in, the
CLI noticed it could reuse the stored credentials but did not update
the `current` setting to make that cloud the current one.
While investigating, I also noticed that `logout` did not always do
the right thing when you were logged into a different backend than
pulumi.com.
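A sketch of the fix implied here, with hypothetical credential types:
```
package workspace

// creds and login are illustrative stand-ins: when login can reuse
// stored credentials for a cloud, it must also mark that cloud as
// current, or later commands will re-prompt for an access token.
type creds struct {
	Current      string
	AccessTokens map[string]string
}

func login(c *creds, cloudURL string) {
	if _, ok := c.AccessTokens[cloudURL]; ok {
		c.Current = cloudURL // the step that was missing
		return
	}
	// ... otherwise prompt for a new access token ...
}
```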
* Refactor the SnapshotManager interface
Lift snapshot management out of the engine by delegating it to the
SnapshotManager implementation in pkg/backend.
* Add a event interface for plugin loads and use that interface to record plugins in the snapshot
* Remove dead code
* Add comments to Events
* Add a number of tests for SnapshotManager
* CR feedback: use a successful bit on 'End' instead of having a separate 'Abort' API
* CR feedback
* CR feedback: register plugins one-at-a-time instead of the entire state at once
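A hypothetical shape for the resulting interfaces (the real definitions in pkg/backend may differ in detail):
```
package backend

type Step interface{}

type PluginInfo struct{ Name, Version string }

// SnapshotManager owns snapshot persistence on behalf of the engine.
type SnapshotManager interface {
	// BeginMutation is called before a step mutates a resource.
	BeginMutation(step Step) (SnapshotMutation, error)
	// RecordPlugin registers loaded plugins one at a time.
	RecordPlugin(plugin PluginInfo) error
	Close() error
}

// SnapshotMutation ends with a success bit rather than a separate
// Abort API, per the review feedback noted above.
type SnapshotMutation interface {
	End(step Step, successful bool) error
}
```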
- Show Emojis on non-Windows platforms, instead of just macOS
- Change help text for `pulumi logs` to clarify the logs are specific
to a stack, not a project
- Display stack name when showing logs
- Have the preamble text shown to users for engine operations read a
nicer
We were generating incorrect URLs for stacks on the new identity
model. When we don't have a repository in our ProjectIdentifier, the
URLs in the service use "-" for the repository and project names.
* Re-introduce interface for snapshot management
Snapshot management was done through the Update interface; this commit
splits it into a separate interface
* Put the SnapshotManager instance onto the engine context
* Remove SnapshotManager from planContext and updateActions now that it can be accessed by engine Context
PPCs are no longer a central concept to our model, but instead a
feature that pulumi.com provides to some organizations. Let's
remove most mentions of PPCs except for cases where we really need to
talk about them (e.g. when a stack is actually hosted in a PPC instead
of just via the normal pulumi.com service).
Also remove some "in the Pulumi Cloud" messages from the CLI, as using
the Pulumi Cloud is now the only real way to use pulumi.
Fixes pulumi/pulumi-service#1117
The cloud backend would fail if `nil` was passed in as the options to
use when creating a stack. However, we passed nil in places where we
allow the user to create stacks interactively. I noticed this while
dogfooding.
In the cloud backend, treat a `nil` opts as if the default set of
options was passed.
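A minimal sketch of the defensive fix, with a hypothetical options type:
```
package cloud

// CreateStackOptions is an illustrative options type.
type CreateStackOptions struct{ /* ... */ }

func createStack(name string, opts *CreateStackOptions) error {
	if opts == nil {
		opts = &CreateStackOptions{} // nil now means "use the defaults"
	}
	// ... create the stack using opts ...
	_ = opts
	return nil
}
```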