* Re-introduce interface for snapshot management
Snapshot management was done through the Update interface; this commit
splits it into a separate interface
* Put the SnapshotManager instance onto the engine context
* Remove SnapshotManager from planContext and updateActions, now that it can be accessed via the engine Context
PPCs are no longer a central concept in our model, but instead a
feature that pulumi.com provides to some organizations. Let's
remove most mentions of PPCs except for cases where we really need to
talk about them (e.g. when a stack is actually hosted in a PPC instead
of just via the normal pulumi.com service)
Also remove some "in the Pulumi Cloud" messages from the CLI, as using
the Pulumi Cloud is now the only real way to use pulumi.
Fixes pulumi/pulumi-service#1117
The cloud backend would fail if `nil` was passed in as the options to
use when creating a stack. However, we passed nil in places where we
allow the user to create stacks interactively. I noticed this while
dogfooding.
In the cloud backend, treat a `nil` opts as if the default set of
options was passed.
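A minimal sketch of the defaulting pattern, with a hypothetical options type (the real backend's option struct differs):
```
package cloud

// CreateStackOptions stands in for the backend's real options type;
// the field here is illustrative only.
type CreateStackOptions struct {
	CloudName string
}

// createStack treats a nil opts exactly as if the caller had passed
// the default (zero-valued) options.
func createStack(stackName string, opts *CreateStackOptions) error {
	if opts == nil {
		opts = &CreateStackOptions{} // fall back to the defaults
	}
	// ... proceed using opts without fear of a nil dereference ...
	_, _ = stackName, opts.CloudName
	return nil
}
```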
These changes plumb basic support for cancellation through the engine.
Two types of cancellation are supported for all engine operations:
- Cancellation, which waits for the operation to drive itself to a safe
point before the operation returns, and
- Termination, which does not wait for the operation to drive itself
to a safe point before the operation returns.
When updating local or managed stacks, a single ^C triggers cancellation
of any running operation; a second ^C will trigger termination.
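A rough sketch of a two-level cancellation source (the names here are illustrative, not the engine's actual API); the CLI maps the first ^C to Cancel and the second to Terminate:
```
package cancel

import "sync"

// Source coordinates two levels of shutdown: Cancel asks an operation
// to stop at the next safe point; Terminate tells it to stop now.
type Source struct {
	cancelOnce sync.Once
	termOnce   sync.Once
	canceled   chan struct{}
	terminated chan struct{}
}

func NewSource() *Source {
	return &Source{
		canceled:   make(chan struct{}),
		terminated: make(chan struct{}),
	}
}

// Cancel requests a graceful stop; safe to call more than once.
func (s *Source) Cancel() {
	s.cancelOnce.Do(func() { close(s.canceled) })
}

// Terminate requests an immediate stop; it implies cancellation.
func (s *Source) Terminate() {
	s.Cancel()
	s.termOnce.Do(func() { close(s.terminated) })
}

// Canceled and Terminated are select-able signals for the engine.
func (s *Source) Canceled() <-chan struct{}   { return s.canceled }
func (s *Source) Terminated() <-chan struct{} { return s.terminated }
```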
Fixes #513, #1077.
This command cancels a stack's currently running update, if any. It can
be used to recover from the scenario in which an update is aborted
without marking the running update as complete. Once an update has been
cancelled, it is likely that the affected stack will need to be repaired
via a pair of export/import commands before future updates can succeed.
This is part of #1077.
This change implements a `pulumi refresh` command. It operates a bit
like `pulumi update`, and friends, in that it supports `--preview` and
`--diff`, along with the usual flags, and will update your checkpoint.
It works through substitution of the deploy.Source abstraction, which
generates a sequence of resource registration events. This new
deploy.RefreshSource takes in a prior checkpoint and will walk it,
refreshing the state via the associated resource providers by invoking
Read for each resource encountered, and merging the resulting state with
the prior checkpoint, to yield a new resource.Goal state. This state is
then fed through the engine in the usual ways with a few minor caveats:
namely, although the engine must generate steps for the logical
operations (permitting us to get nice summaries, progress, and diffs),
it mustn't actually carry them out because the state being imported
already reflects reality (a deleted resource has *already* been deleted,
so of course the engine need not perform the deletion). The diffing
logic also needs to know how to treat the case of refresh slightly
differently, because we are going to be diffing outputs and not inputs.
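In sketch form, the source looks roughly like this (the types are simplified stand-ins for the real deploy package):
```
package deploy

// Simplified stand-ins for the real types.
type State struct{ Outputs map[string]interface{} }   // prior checkpoint state
type Goal struct{ Properties map[string]interface{} } // refreshed goal state
type SourceEvent struct{ Goal *Goal }

// refreshSource walks the states in a prior checkpoint, reads the live
// state of each resource from its provider, and yields merged goals.
type refreshSource struct {
	prior []*State
	pos   int
}

func (s *refreshSource) Next() (*SourceEvent, error) {
	if s.pos >= len(s.prior) {
		return nil, nil // end of stream: every prior resource was refreshed
	}
	old := s.prior[s.pos]
	s.pos++
	live, err := providerRead(old) // the provider's Read for this resource
	if err != nil {
		return nil, err
	}
	if live == nil {
		// Deleted out-of-band: the engine plans a delete step but must
		// not execute it, since reality already reflects the deletion.
		return s.Next()
	}
	return &SourceEvent{Goal: &Goal{Properties: merge(old.Outputs, live)}}, nil
}

func providerRead(old *State) (map[string]interface{}, error) {
	return old.Outputs, nil // placeholder: the real call goes to the plugin
}

func merge(prior, live map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(prior)+len(live))
	for k, v := range prior {
		out[k] = v
	}
	for k, v := range live {
		out[k] = v // live state wins over the prior checkpoint
	}
	return out
}
```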
Note that support for managed stacks is not yet complete, since that
requires updates to the service to support a refresh endpoint. That
will be coming soon ...
This change introduces support for using the cloud backend when
`pulumi init` has not been run. When this is the case, we use the new
identity model, where a stack is referenced by an owner and a stack
name only.
There are a few things going on here:
- We add a new `--owner` flag to `pulumi stack init` that lets you
control what account a stack is created in.
- When listing stacks, we show stacks owned by you and any
organizations you are a member of. So, for example, I can do:
* `pulumi stack init my-great-stack`
* `pulumi stack init --owner pulumi my-great-stack`
To create a stack owned by my user and one owned by my
organization. When `pulumi stack ls` is run, you'll see both
stacks (since they are part of the same project).
- When spelling a stack on the CLI, an owner can be optionally
specified by prefixing the stack name with an owner name. For
example `my-great-stack` means the stack `my-great-stack` owned by
the currently logged-in user, whereas `pulumi/my-great-stack` would
be the stack owned by the `pulumi` organization.
- `--all` can be passed to `pulumi stack ls` to see *all* stacks you
have access to, not just stacks tied to the current project.
Long term, a stack name alone will not be sufficient to address a
stack. Introduce a new `backend.StackReference` interface that allows
each backend to give an opaque stack reference that can be used across
operations.
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation for the next
set of changes, which will stop having `pulumi init` be used for cloud stacks
as well.
Previously, `pulumi init` logically did two things:
1. It created the bookkeeping directory for local stacks; this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we believed the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.
2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com
The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.
However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somewhere.
For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.
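A minimal sketch of the root resolution (illustrative, not the exact CLI code):
```
package local

import (
	"os"
	"path/filepath"
	"strings"
)

// stateRoot extracts the optional root path from a
// local://<optional-root-path> URL, defaulting to $HOME when unset.
func stateRoot(url string) string {
	root := strings.TrimPrefix(url, "local://")
	if root == "" {
		root = os.Getenv("HOME") // the default root
	}
	return filepath.Join(root, ".pulumi") // bookkeeping folder lives here
}
```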
For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.
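A sketch of that naming scheme (the separator and extension here are assumptions):
```
package workspace

import (
	"crypto/sha1"
	"fmt"
)

// settingsFileName combines the project name with the SHA1 of the
// project file's path, so two projects with the same name in different
// locations get distinct workspace settings files.
func settingsFileName(projectName, projectFilePath string) string {
	sum := sha1.Sum([]byte(projectFilePath))
	return fmt.Sprintf("%s-%x-workspace.json", projectName, sum)
}
```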
This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).
With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
This field indicates the schema of the serialized deployment. This field
behaves identically to the `Version` field of
`PatchUpdateCheckpointRequest`.
This is part of pulumi/pulumi-service#1046
And make the deployment an opaque JSON message. The version field
indicates the schema of the deployment. A missing version field will
behave as if the version was set to `1`. A version of `1` indicates that
the serialized deployment has the `DeploymentV1` schema.
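A sketch of the envelope and the versioning rule (field tags are assumptions):
```
package apitype

import (
	"encoding/json"
	"fmt"
)

// UntypedDeployment is the versioned envelope; the deployment itself
// stays an opaque json.RawMessage.
type UntypedDeployment struct {
	Version    int             `json:"version,omitempty"`
	Deployment json.RawMessage `json:"deployment,omitempty"`
}

type DeploymentV1 struct{} // the version-1 deployment schema

// decodeDeployment shows the rule: a missing version (the zero value)
// behaves exactly like version 1.
func decodeDeployment(raw []byte) (*DeploymentV1, error) {
	var env UntypedDeployment
	if err := json.Unmarshal(raw, &env); err != nil {
		return nil, err
	}
	switch env.Version {
	case 0, 1:
		var d DeploymentV1
		if err := json.Unmarshal(env.Deployment, &d); err != nil {
			return nil, err
		}
		return &d, nil
	default:
		return nil, fmt.Errorf("unsupported deployment version %d", env.Version)
	}
}
```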
This is part of pulumi/pulumi-service#1046.
This makes two minor tweaks to the login prompt:
1. Change the text so that it hyperlinks in most terminals, including
iTerm, in a way that doesn't include excess characters.
2. Disable echoing of the token.
* Lift snapshot management out of the engine
This PR is a prerequisite for parallelism by addressing a major problem
that the engine has to deal with when performing parallel resource
construction: parallel mutation of the global snapshot. This PR adds
a `SnapshotManager` type that is responsible for maintaining and
persisting the current resource snapshot. It serializes all reads and
writes to the global snapshot and persists the snapshot to persistent
storage upon every write.
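Roughly, the interface has this shape (a sketch; BeginMutation and End are named later in this log, but the exact signatures here are assumptions):
```
package engine

// Step stands in for the engine's deployment step type.
type Step interface{}

// SnapshotManager serializes snapshot mutations and persists the
// snapshot to storage upon every write.
type SnapshotManager interface {
	// BeginMutation registers an intent to mutate the snapshot on
	// behalf of the given step, serialized against all other mutations.
	BeginMutation(step Step) (SnapshotMutation, error)
	// Close flushes and releases the manager once the plan completes.
	Close() error
}

// SnapshotMutation completes a mutation; the updated snapshot is
// persisted when End returns successfully.
type SnapshotMutation interface {
	End(step Step) error
}
```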
As a side-effect of this, the core engine no longer needs to know about
snapshot management at all; all snapshot operations can be handled as
callbacks on deployment events. This will greatly simplify the
parallelization of the core engine.
Worth noting is that the core engine will still need to be able to read
the current snapshot, since it is interested in the dependency graphs
contained within. The full implications of that are out of scope of this
PR.
Remove dead code; Steps no longer need a reference to the plan iterator that created them
Fixing various issues that arise when bringing up pulumi-aws
Line length broke the build
Code review: remove dead field, fix yaml name error
Rebase against master, provide implementation of StackPersister for cloud backend
Code review feedback: comments on MutationStatus, style in snapshot.go
Code review feedback: move SnapshotManager to pkg/backend, change engine to use an interface SnapshotManager
Code review feedback: use a channel for synchronization
Add a comment and a new test
* Maintain two checkpoints, an immutable base and a mutable delta, and
periodically merge the two to produce snapshots
* Add a lot of tests - covers all of the non-error paths of BeginMutation and End
* Fix a test resource provider
* Add a few tests, fix a few issues
* Rebase against master, fixed merge
We've seen failures in CI where DNS lookups fail which cause our
operations against the service to fail, as well as other sorts of
timeouts.
Add a set of helper methods in a new httputil package that helps us do
retries on these operations, and then update our client library to use
them when we are doing GET requests. We also provide a way for non-GET
requests to be retried, and use this when updating a lease (since it
is safe to retry multiple requests in this case).
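A minimal sketch of such a helper (the real policy and signature may differ):
```
package httputil

import (
	"net/http"
	"time"
)

// DoWithRetry issues req up to attempts times, retrying transient
// transport failures such as DNS lookup errors and timeouts.
func DoWithRetry(client *http.Client, req *http.Request, attempts int) (*http.Response, error) {
	var res *http.Response
	var err error
	for i := 0; i < attempts; i++ {
		if res, err = client.Do(req); err == nil {
			return res, nil
		}
		time.Sleep(time.Duration(i+1) * 500 * time.Millisecond) // simple backoff
	}
	return nil, err
}
```
GETs are retried by default because they are idempotent and carry no body; other requests must opt in (as lease renewal does), since replaying a request with a body requires reconstructing it.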
This change does three major things:
1. Removes the ability to be logged into multiple clouds at the same
time. Previously, we supported being logged into multiple clouds at
the same time and the CLI would fan out requests and join responses
when needed. In general, this was only useful for Pulumi employees
who wanted to run against multiple copies of the service (say, production
and staging), but overall it was very confusing: in the old world, a
stack with the same identity could appear twice (since it was in two
backends), which the CLI didn't handle very well.
2. Stops treating the "local" backend as a special thing, from the
point of view of the CLI. Previously we'd always connect to the local
backend and merge that data with whatever was in clouds we were
connected to. We had gestures like `--local` in `pulumi stack init`
that meant "use the local mode". Instead, to use the local mode now
you run `pulumi login --cloud-url local://` and then you are logged
into the local backend. Since you can only ever be logged into a single
backend, we can remove the `--local` and `--remote` flags from `pulumi
stack init`, it just now requires you to be logged in and creates a
stack in whatever backend you were logged into. When logging into the
local backend, you are not prompted for an access key.
3. Prompt for login in places where you have to log in, if you are not
already logged in.
As part of the new identity model, we're going to use tagging on
stacks to record metadata, let's create a bag for that, as well as a
few well-known tag names that map to metadata we know we'll want to set.
This change adds a Pulumi Cloud Console URL at the end of an update
that went through the Pulumi Cloud API. This is very basic, and is
meant to be just the beginning of adding more cross-linking to the
service. I'm still thinking through the phrasing to use here.
This endpoint is relatively expensive, as it returns a large amount of
data for each stack (essentially the stack's entire checkpoint). We
currently call this endpoint in a few places that really just need
information about a single stack. In these places we should simply
interact with that stack directly.
After these changes, there is only one call to the `stacks` endpoint on
the startup path: `upgradeConfigurationFiles` loops over all of the
stacks in each backend and attempts to upgrade each stack's
configuration. Once we no longer need to do this we will not be hitting
the `stacks` endpoint on each CLI invocation.
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):
* The primary change is to change the way the engine's core update
functionality works with respect to deploy.Source. This is the
way we can plug in new sources of resource information during
planning (and, soon, diffing). The way I intend to model refresh
is by having a new kind of source, deploy.RefreshSource, which
will let us do virtually everything about an update/diff the same
way with refreshes, which avoids otherwise duplicative effort.
This includes changing the planOptions (nee deployOptions) to
take a new SourceFunc callback, which is responsible for creating
a source specific to the kind of plan being requested.
Preview, Update, and Destroy now are primarily differentiated by
the kind of deploy.Source that they return, rather than sprinkling
things like `if Destroying` throughout. This tidies up some logic
and, more importantly, gives us precisely the refresh hook we need.
* Originally, we used the deploy.NullSource for Destroy operations.
This simply returns nothing, which is how Destroy works. For some
reason, we were no longer doing this, and instead had some
`if Destroying` cases sprinkled throughout the deploy.EvalSource.
I think this is a vestige of some old way we did configuration, at
least judging by a comment, which is apparently no longer relevant.
* Move diff and diff-printing logic within the engine into its own
pkg/engine/diff.go file, to prepare for upcoming work.
* I keep noticing benign diffs anytime I regenerate protobufs. I
suspect this is because we're also on different versions. I changed
generate.sh to also dump the version into grpc_version.txt. At
least we can understand where the diffs are coming from, decide
whether to take them (i.e., a newer version), and ensure that as
a team we are monotonically increasing, and not going backwards.
* I also tidied up some tiny things I noticed while in there, like
comments, incorrect types, lint suppressions, and so on.
We previously locked our dependency on google.golang.org/grpc to 1.7.2 due to issues we had seen on 1.8.x as noted in #701. However, this has prevented us from using some other dependencies which require newer grpc. Testing in this repo and in AWS showed no problems with the latest 1.10.x versions of the library.
We'll go ahead and remove this constraint and allow grpc to float forward. If we see issues again, we'll use that repro case to investigate an alternative fix in our code.
Resolves #701.
These changes add the API types and cloud backend code necessary to
interact with service-managed stacks (i.e. stacks that do not have
PPC-managed deployments). The bulk of these changes are unremarkable:
the API types are straightforward, as are most of the interactions with
the new APIs. The trickiest bits are token and log management.
During an update to a managed stack, the CLI must continually renew the
token used to authorize the operations on that stack that comprise the
update. Once a token has been renewed, the old token should be
discarded. The CLI supports this by running a goroutine that is
responsible for both periodically renewing the token for an update and
servicing requests for the token itself from the rest of the backend.
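A sketch of that pattern, with a single goroutine multiplexing renewal and requests over a channel (the names are illustrative):
```
package cloud

import (
	"context"
	"time"
)

// tokenSource periodically renews an update token and serves the
// freshest token to the rest of the backend.
func tokenSource(ctx context.Context, initial string,
	renew func(old string) (string, error),
	interval time.Duration) <-chan string {

	requests := make(chan string)
	go func() {
		defer close(requests)
		token := initial
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				if fresh, err := renew(token); err == nil {
					token = fresh // the old token is discarded
				}
			case requests <- token: // serve a request for the current token
			case <-ctx.Done():
				return
			}
		}
	}()
	return requests
}
```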
In addition to token renewal, log output must be captured and uploaded
to the service during an update to a managed stack. Implementing this in
a reasonable fashion required a bit of refactoring in order to reuse
what already exists for the local backend. Each event-specific `Display`
function was replaced with an equivalent `Render` function that returns
a string rather than writing to a stream. This approach was chosen
primarily to avoid dealing with sheared colorization tags, which would
otherwise require clients to fuse log lines before colorizing. We could
take that approach in the future.
These changes refactor direct interactions with the Pulumi API out of
the cloud backend and into a subpackage, `pkg/backend/cloud/client`.
This package exposes a slightly higher-level API that takes care of
calculating paths, performing HTTP calls, and occasionally wrapping
multiple physical calls into a single logical call (notably the creation
of an update and the upload of its program).
This is primarily intended as preparation for some of the changes
suggested in the feedback for #1067.
This adds a `pulumi new` command which makes it easy to quickly
create the handful of files needed to get started building
an empty Pulumi project.
Usage:
```
$ pulumi new typescript
```
Or you can leave off the template name, and it will ask you to choose
one:
```
$ pulumi new
Please choose a template:
> javascript
python
typescript
```
The engine now emits events with richer metadata during the
ResourceOutputs and ResourcePre callbacks. The CLI can then use this
information to decide if it should display the event or not and how
much of the event to display.
Options dealing with what to display and how to display it have moved
into the CLI and the engine now emits all information for each event.
The engine now unconditionally emits a new type of event, a
PreludeEvent, which contains the configuration for a stack as well as
an indication if the stack is being previewed or updated. The
responsibility for interpreting the --show-config flag on the command
line is now handled by the CLI, which uses this to decide if it should
print the configuration or not, and then writes the "Previewing
changes" or "Deploying changes" header.
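A sketch of the resulting event shapes (the payload fields are assumptions, not the engine's exact types):
```
package engine

type EventType string

const (
	PreludeEvent         EventType = "prelude"          // before preview/update work begins
	ResourcePreEvent     EventType = "resource-pre"     // before a step executes
	ResourceOutputsEvent EventType = "resource-outputs" // after a step's outputs settle
)

type Event struct {
	Type    EventType
	Payload interface{}
}

// PreludeEventPayload carries the stack's configuration and whether the
// operation is a preview; the CLI alone decides whether to display it
// (e.g., when --show-config is passed).
type PreludeEventPayload struct {
	Config    map[string]string
	IsPreview bool
}
```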
This API was introduced to aid the refactoring, but it isn't something
we want to support long term. Remove it and, in a few places, push
config.Key around more broadly instead of converting to the old type
eagerly.
config.Key has become a pair of namespace and name. Because the whole
world has not changed yet, there continues to be a way to convert
between a tokens.ModuleMember and config.Key; however, the
conversion from tokens.ModuleMember can now fail (when the module member
is not of the form `<package>:config:<name>`).
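A sketch of the fallible conversion (illustrative, not the exact code):
```
package config

import (
	"fmt"
	"strings"
)

// Key is a (namespace, name) pair, e.g. ("aws", "region").
type Key struct {
	Namespace string
	Name      string
}

// ParseKey converts a module-member-style token of the form
// <package>:config:<name> into a Key, failing on any other shape.
func ParseKey(member string) (Key, error) {
	parts := strings.Split(member, ":")
	if len(parts) != 3 || parts[1] != "config" {
		return Key{}, fmt.Errorf("%q is not of the form <package>:config:<name>", member)
	}
	return Key{Namespace: parts[0], Name: parts[2]}, nil
}
```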
Right now, config.Key is a type alias for tokens.ModuleMember. I did a
pass over the codebase such that we use config.Key everywhere it
looked like the value did not leak to some external process (e.g. a
resource provider or a langhost).
Doing this makes it a little clearer (hopefully) where code is
depending on a module member structure (e.g. <package>:config:<value>)
instead of just an opaque type.
By using untyped deployment structures via `json.RawMessage`, we can
support round-tripping between old CLI clients and newer servers, without
dropping possibly-important information on the floor. I hadn't realized
this design goal with the original system, and after talking to @pgavlin,
I better realized the intent and that we want to preserve this.
This change updates our configuration model to make it simpler to
understand by removing some features and changing how things are
persisted in files.
Notable changes:
- We've removed the notion of "workspace" vs "project"
config. Now, configuration is always stored in a file next to
`Pulumi.yaml` named `Pulumi.<stack-name>.yaml` (the same file we'd
use for any other stack-specific information we would need to persist
in the future).
- We've removed the notion of project wide configuration. Every new
stack gets a completely empty set of configuration and there's no
way to share common values across stacks; instead, the common value
has to be set on each stack.
We retain some of the old code for the configuration system so we can
support upgrading a project in place. That will happen with the next
change.
This change fixes some issues and allows us to close some
others (since they are no longer possible).
Fixes #866. Closes #872. Closes #731.
We are going to be changing the configuration model. To begin, let's
take most of the existing stuff and mark it as "deprecated" so we can
keep the existing behavior (to help transition newer code forward)
while making it clear what APIs should not be called in the
implementation of `pulumi` itself.
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc. These too had semi-random omissions.
This change merges all of these types into the apitype package. This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.
The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.
Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also. This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
1. Various idiomatic Go and TypeScript fixes
2. Add an integration test that end-to-end roundtrips dependency
information for a simple Pulumi program
3. Add an additional test assert that tests that dependency information
comes from the language host as expected
This commit does two things:
1. All dependencies of a resource, both implicit and explicit, are
communicated directly to the engine when registering a resource. The
engine keeps track of these dependencies and ultimately serializes
them out to the checkpoint file upon successful deployment.
2. Once a successful deployment is done, the new `pulumi stack
graph` command reads the checkpoint file and outputs the dependency
information within in the DOT format.
Keeping track of dependency information within the checkpoint file is
desirable for a number of reasons, most notably delete-before-create,
where we want to delete resources before we have created their
replacement when performing an update.
I was reminded of this yesterday with unprintable characters as I
debugged some things on Windows. Inspired by Yarn, this change adds
a new flag --emoji (-e for short) that can be used to control whether
we show emoji or ASCII-only characters in the console. On Mac, it
defaults to true, and on Windows and Linux, it defaults to false.
This also brings back the retro ASCII-friendly progress spinner
when --emoji is disabled.
Prior to this change, we had a flat list of files in the
~/.pulumi/plugins directory. This was simple but unfortunately
too naive, since we in fact have multi-file plugins already.
Dumping them in the same directory increases the risk of a
collision. Instead, let's put them in their own directories.
This means, for example, you'll see things like
~/.pulumi/plugins/
resource-aws-v0.11.0-dev-8-g57a0d62/
README.txt
pulumi-resource-aws
Notice that the binary name stays the same -- e.g., in this
case pulumi-resource-aws -- and does not include the version.
This makes it simple to add it to your $PATH in the usual ways
and have it loaded as a preferred location.
The API/REST logic auto-prepended "/api", which we don't want
for the release downloads. This change just alters callsites
to specify the full path (which I prefer being explicit anyway).
This change implements basic plugin management, but we do not yet
actually use the plugins for anything (that comes next).
Plugins are stored in `~/.pulumi/plugins`, and are expected to be
in the format `pulumi-<KIND>-<NAME>-v<VERSION>[.exe]`. The KIND is
one of `analyzer`, `language`, or `resource`, the NAME is a hyphen-
delimited name (e.g., `aws` or `foo-bar`), and VERSION is the
plugin's semantic version (e.g., `0.9.11`, `1.3.7-beta.a736cf`, etc).
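For instance, a sketch of rendering that file name:
```
package workspace

import "fmt"

// pluginBinaryName renders pulumi-<KIND>-<NAME>-v<VERSION>[.exe];
// e.g. ("resource", "aws", "0.9.11") -> "pulumi-resource-aws-v0.9.11".
func pluginBinaryName(kind, name, version string, windows bool) string {
	base := fmt.Sprintf("pulumi-%s-%s-v%s", kind, name, version)
	if windows {
		base += ".exe"
	}
	return base
}
```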
This commit includes four new CLI commands:
* `pulumi plugin` is the top-level plugin command. It does nothing
but show the help text for associated child commands.
* `pulumi plugin install` can be used to install plugins manually.
If run with no additional arguments, it will compute the set of
plugins used by the current project, and download them all. It
may be run to explicitly download a single plugin, however, by
invoking it as `pulumi plugin install KIND NAME VERSION`. For
example, `pulumi plugin install resource aws v0.9.11`. By default,
this command uses the cloud backend in the usual way to perform the
download, although a separate URL may be given with --cloud-url,
just like all other commands that interact with our backend service.
* `pulumi plugin ls` lists all plugins currently installed in the
plugin cache. It displays some useful statistics, like the size
of the plugin, when it was installed, when it was last used, and
so on. It sorts the display alphabetically by plugin name, and
for plugins with multiple versions, it shows the newest at the top.
The command also summarizes how much disk space is currently being
consumed by the plugin cache. There are no filtering capabilities yet.
* `pulumi plugin prune` will delete plugins from the cache. By
default, when run with no arguments, it will delete everything.
It may be run with additional arguments, KIND, NAME, and VERSION,
each one getting more specific about what it will delete. For
instance, `pulumi plugin prune resource aws` will delete all AWS
plugin versions, while `pulumi plugin prune resource aws <0.9`
will delete all AWS plugins before version 0.9. Unless --yes is
passed, the command will confirm the deletion with a count of how
many plugins will be affected by the command.
We do not yet actually download plugins on demand. That will
come in a subsequent change.
This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.
This includes:
* If `pulumi stack select` is run by itself, use an interactive
CLI menu to let the user select an existing stack, or choose to
create a new one. This looks as follows:
$ pulumi stack select
Please choose a stack, or choose to create a new one:
abcdef
babblabblabble
> currentlyselected
defcon
<create a new stack>
and is navigated in the usual way (key up, down, enter).
* If a stack name is passed that does not exist, prompt the user
to ask whether s/he wants to create one on-demand. This hooks
interesting moments in time, like `pulumi stack select foo`,
and cuts down on the need to run additional commands.
* If a current stack is required, but none is currently selected,
then pop the same interactive menu shown above to select one.
Depending on the command being run, we may or may not show the
option to create a new stack (e.g., that doesn't make much sense
when you're running `pulumi destroy`, but might when you're
running `pulumi stack`). This again lets you do with a single
command what would have otherwise entailed an error with multiple
commands to recover from it.
* If you run `pulumi stack init` without any additional arguments,
we interactively prompt for the stack name. Before, we would
error and you'd then need to run `pulumi stack init <name>`.
* Colorize some things nicely; for example, now all prompts will
by default become bright white.
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project". This has gotten more confusing over time, now
that we're doing real package management.
Also fixes pulumi/pulumi#426, while in here.
As it stands, we currently hammer the service's update logs endpoint in
a tight loop while waiting for a deployment to complete. This is not
necessary, and can indeed be deleterious to the user experience: it
appears that this may be exacerbating some mysterious 500 responses from
API gateway.
These changes add a brief sleep in the relevant loop that waits for 5
seconds if the last call produced new log entries or 15 seconds if it
did not.
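In sketch form, the delay policy is just:
```
package cloud

import "time"

// pollDelay returns how long to wait before the next logs request:
// poll sooner while new entries are arriving, back off when not.
func pollDelay(gotNewEntries bool) time.Duration {
	if gotNewEntries {
		return 5 * time.Second
	}
	return 15 * time.Second
}
```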
Fixes #844.
Today we don't send any version information with API requests to the service, so we cannot make breaking changes between versions of the backend API while preserving backwards compatibility.
This PR adds a `User-Agent` header with REST requests that sends a CLI version number of "1". If the service were to make a breaking change, it could use this header to determine which response handler to use. (e.g. return a different response for "" or "1" and another for "2".) Obviously we want to avoid being in this situation, but in the event that we need to make a breaking change, we'll need this value.
We send the Pulumi version as well, though the SDK will probably rev much more quickly than the backend API client version.
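A sketch of stamping the header (the exact format string is an assumption):
```
package client

import "net/http"

// addVersionHeader marks an outgoing API request so the service can
// choose a response handler per client version.
func addVersionHeader(req *http.Request, apiVersion, pulumiVersion string) {
	req.Header.Set("User-Agent",
		"pulumi-cli/"+apiVersion+" ("+pulumiVersion+")")
}
```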
Fixes #848
Previously, when uploading a project to the service, we would only
upload the folder rooted by the Pulumi.yaml for that project. This
worked well, but it meant that customers needed to structure their
code in a way such that Pulumi.yaml was always at the root of their
project, and if they wanted to share common files between two projects
there was no good solution for doing this.
This change introduces an optional piece of metadata, named context,
that can be added to Pulumi.yaml, which controls the root folder used
when computing the archive to upload. When it is set, it is combined
with the location of the Pulumi.yaml file for the project we are
uploading, and the resulting folder is used as the root of what we
upload to the service.
Fixes: #574
The existing logic would flow colorization information into the
engine, so depending on the settings in the CLI, the engine may or may
not have emitted colorized events. This coupling is not great and we
want to start moving to a world where the presentation happens
exclusively at the CLI level.
With this change, the engine will always produce strings that have the
colorization formatting directives (i.e. the directives that
reconquest/loreley understands) and the CLI will apply
colorization (which could mean either running loreley to turn the
directives into ANSI escape codes, or drop them or retain them, for
debugging purposes).
Fixes #742
This PR adds a new `pulumi history` command, which prints the update history for a stack.
The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.
`pkg/backend/updates.go` defines the data that is being persisted. The way the data is wired through the system is adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.
Fixes #636.
Previously, the `pulumi` tool did not show any indication of progress
when doing a deployment. Combined with the fact that we do not create
resources in parallel, it meant that sometimes `pulumi` would appear to
hang, when really it was just waiting on some resource to be created
in AWS. In addition, some AWS resources take a long time to create and
CI systems like Travis will kill the job if there is no output. This
causes us (and our customers) to have to do crazy dances where we
launch shell scripts that write a dot to the console every once in a
while so we don't get killed. While we plan to overhaul the output
logic (see #617), we take a first step towards interactivity by simply
having a nice little spinner (in the interactive case) and, when run
non-interactively, having `pulumi` print a message that it is still
working.
Fixes #794
Surprisingly `pulumi login -c https://google.com` would succeed. This was because we were too lax in our way of validating credentials. We take the provided cloud URL and call the "GetCurrentUserHandler" method. But we were only checking that it returned a successful response, not that it was actually valid JSON.
So in the "https://google.com" case, Google returned HTML describing a 404 error, but since the server response was 200, the Pulumi CLI assumed things were on the up and up.
We now parse the response as JSON, and confirm the response has a `name` property that is non-nil. This heuristic covers the majority of false-positive cases, but without us needing to move all of the service's API shape for users (which includes organizations, which include Clouds, etc.) into `pulumi`.
Fixes https://github.com/pulumi/pulumi-service/issues/457. As an added bonus, we now return a much more useful error message.
This PR surfaces the configuration options available to updates, previews, and destroys to the Pulumi Service. As part of this I refactored the options to unify them into a single `engine.UpdateOptions`, since they were all overlapping to various degrees.
With this PR we are adding several new flags to commands, e.g. `--summary` was not available on `pulumi destroy`.
There are also a few minor breaking changes.
- `pulumi destroy --preview` is now `pulumi destroy --dry-run` (to match the actual name of the field).
- The default behavior for "--color" is now `Always`. Previously it was `Always` or `Never` based on the value of a `--debug` flag. (You can specify `--color always` or `--color never` to get the exact behavior.)
Fixes #515, and cleans up the code making some other features slightly easier to add.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
These changes add the ability to export a stack's latest deployment or
import a new deployment to a stack via the Pulumi CLI. These
capabilities are exposed by two new verbs under `stack`:
- export, which writes the current stack's latest deployment to stdout
- import, which reads a new deployment from stdin and applies it to the
current stack.
In the local case, this simply involves reading/writing the stack's
latest checkpoint file. In the cloud case, this involves hitting two new
endpoints on the service to perform the export or import.
This change incorporates feedback on https://github.com/pulumi/pulumi/pull/764,
in addition to refactoring the retry logic to use our retry framework rather
than hand-rolling it in the REST API code. It's a minor improvement, but at
least lets us consolidate some of this logic which we'll undoubtedly use more
of over time.
We saw an issue where a user was mid-update, and got a networking
error stating `read: operation timed out`. We believe this was simply
a local client error, due to a flaky network. We should be resilient
to such things during updates, particularly when there's no way to
"reattach" to an in-progress update (see pulumi/pulumi#762).
This change accomplishes this by changing our retry logic in the
cloud backend's waitForUpdates function. Namely:
* We recognize three types of failure, and react differently:
- Expected HTTP errors. For instance, the 504 Gateway Timeouts
that we already retried in the face of. In these cases, we will
silently retry up to 10 times. After 10 times, we begin warning
the user just in case this is a persistent condition.
- Unexpected HTTP errors. The CLI will quit immediately and issue
an error to the user, in the usual ways. This covers
Unauthorized among other things. Over time, we may find that we
want to intentionally move some HTTP errors into the above.
- Anything else. This covers the transient networking errors case
that we have just seen. I'll admit, it's a wide net, but any
instance of this error issues a warning and it's up to the user
to ^C out of it. We also log the error so that we'll see it if
the user shares their logs with us.
* We implement backoff logic so that we retry very quickly (100ms)
on the first failure, and more slowly thereafter (1.5x, up to a max
of 5 seconds). This helps to avoid accidentally DoSing our service.
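The backoff step, in sketch form:
```
package cloud

import "time"

// nextDelay starts at 100ms and grows by 1.5x per failure, capped at
// 5 seconds, to avoid accidentally DoSing the service.
func nextDelay(prev time.Duration) time.Duration {
	if prev == 0 {
		return 100 * time.Millisecond
	}
	next := time.Duration(float64(prev) * 1.5)
	if next > 5*time.Second {
		next = 5 * time.Second
	}
	return next
}
```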
Use the new {en,de}crypt endpoints in the Pulumi.com API to secure
secret config values. The ciphertext for a secret config value is bound
to the stack to which it applies and cannot be shared with other stacks
(e.g. by copy/pasting it around in Pulumi.yaml). All secrets will need
to be encrypted once per target stack.
This change implements resource protection, as per pulumi/pulumi#689.
The overall idea is that a resource can be marked as "protect: true",
which will prevent deletion of that resource for any reason whatsoever
(straight deletion, replacement, etc). This is expressed in the
program. To "unprotect" a resource, one must perform an update setting
"protect: false", and then afterwards, they can delete the resource.
For example:
let res = new MyResource("precious", { .. }, { protect: true });
Afterwards, the resource will display in the CLI with a lock icon, and
any attempts to remove it will fail in the usual ways (in planning or,
worst case, during an actual update).
This was done by adding a new ResourceOptions bag parameter to the
base Resource types. This is unfortunately a breaking change, but now
is the right time to take this one. We had been adding new settings
one by one -- like parent and dependsOn -- and this new approach will
set us up to add any number of additional settings down the road,
without needing to worry about breaking anything ever again.
This is related to protected stacks, as described in
pulumi/pulumi-service#399. Most likely this will serve as a foundational
building block that enables the coarser grained policy management.
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw). The events from the service, however, are still
boolean-valued. This changes the message payload to carry the full values.
The prior behavior with cloud authentication was a bit confusing
when authenticating against anything but https://pulumi.com/. This
change fixes a few aspects of this:
* Improve error messages to differentiate between "authentication
failed" and "you haven't logged into the target cloud URL."
* Default to the cloud you're currently authenticated with, rather
than unconditionally selecting https://pulumi.com/. This ensures
$ pulumi login -c https://api.moolumi.io
$ pulumi stack ls
works, versus what was currently required
$ pulumi login -c https://api.moolumi.io
$ pulumi stack ls -c https://api.moolumi.io
with confusing error messages if you forgot the second -c.
* To do this, our default cloud logic changes to
1) Prefer the explicit -c if supplied;
2) Otherwise, pick the "currently authenticated" cloud; this is
the last cloud to have been targeted with pulumi login, or
otherwise the single cloud in the list if there is only one;
3) https://pulumi.com/ otherwise.
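That selection order, in sketch form (illustrative names):
```
package cloud

// defaultCloudURL picks the cloud to target: an explicit -c flag wins,
// then the currently authenticated cloud, then https://pulumi.com/.
func defaultCloudURL(explicit, current string, authenticated []string) string {
	if explicit != "" {
		return explicit // 1) explicit -c, if supplied
	}
	if current != "" {
		return current // 2) last cloud targeted with `pulumi login`
	}
	if len(authenticated) == 1 {
		return authenticated[0] // 2) or the single authenticated cloud
	}
	return "https://pulumi.com/" // 3) the default
}
```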
As articulated in #714, the way config defaults to workspace-local
configuration is a bit error prone, especially now with the cloud
workflow being the default. This change implements several improvements:
* First, --save defaults to true, so that configuration changes will
persist into your project file. If you want the old local workspace
behavior, you can specify --save=false.
* Second, the order in which we applied configuration was a little
strange, because workspace settings overwrote project settings.
The order is changed now so that we take most specific over least
specific configuration. Per-stack is considered more specific
than global and project settings are considered more specific
than workspace.
* We now warn anytime workspace local configuration is used. This
is a developer scenario and can have subtle effects. It is simply
not safe to use in a team environment. In fact, I lost an arm
this morning due to workspace config... and that's why you always
issue warnings for unsafe things.
If the service returns a 504, we happily keep looping around and
retrying until we get a valid update. Unfortunately, we missed the
else condition, which is what happens when this isn't a 504, leading
us to swallow real errors (500 and the like). Trivial fix.
Fixes pulumi/pulumi#712.
If a cloud you've previously authenticated with goes away -- as ours
sort of did, because the cloud endpoint in the CLI changed (to actually
be correct) -- then you can't logout without manually editing the
credentials file in your workspace. This is a little annoying. So,
rather than that, let's have a `pulumi logout --all` command that just
logs out of all clouds you are presently authenticated with.
- Fix #644, "Re-enable use of local dev stacks". Rather than trying to be smart about using "pulumi.com" and inferring "api.pulumi.com", we just use whatever cloud URL the user provides. (e.g. "http://localhost:8080") We can improve the user experience later by providing friendly names for these things upon login. So we can show "Pulumi" or "Contoso" instead of the URL -- but let's cross that bridge later.
- Fix #640, "Better error on `pulumi login`". We only provide the more friendly error about needing to log in IFF you are trying to use an authenticated Pulumi API without having creds.
As documented in issue #616, the inputs/defaults/outputs model we have
today has fundamental problems. The crux of the issue is that our
current design requires that defaults present in the old state of a
resource are applied to the new inputs for that resource.
Unfortunately, it is not possible for the engine to decide which
defaults remain applicable and which do not; only the provider has that
knowledge.
These changes take a more tactical approach to resolving this issue than
the one originally proposed in #616, avoiding breaking compatibility with
existing checkpoints. Rather than treating the Pulumi inputs as the
provider input properties for a resource, these inputs are first
translated by `Check`. In order to accommodate provider defaults that
were chosen for the old resource but should not change for the new,
`Check` now takes the old provider inputs as well as the new Pulumi
inputs. Rather than the Pulumi inputs and provider defaults, the
provider inputs returned by `Check` are recorded in the checkpoint file.
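A sketch of the revised contract (types simplified; the real signatures live in the plugin package):
```
package plugin

// PropertyMap stands in for resource.PropertyMap.
type PropertyMap map[string]interface{}

// CheckFailure reports a property the provider rejected.
type CheckFailure struct {
	Property string
	Reason   string
}

// Check now receives the old provider inputs (carrying previously
// chosen defaults) alongside the new Pulumi inputs, so the provider
// itself decides which defaults still apply; the returned inputs are
// what gets recorded in the checkpoint.
type Provider interface {
	Check(urn string, olds, news PropertyMap) (PropertyMap, []CheckFailure, error)
}
```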
Put simply, these changes remove defaults as a first-class concept
(except inasmuch as is required to retain the ability to read old
checkpoint files) and move the responsibility for managing and
merging defaults into the provider that supplies them.
Fixes #616.
This change implements some feedback from @ellismg.
* Make backend.Stack an interface and let backends implement it,
enabling dynamic type testing/casting to access information
specific to that backend. For instance, the cloud.Stack conveys
the cloud URL, org name, and PPC name, for each stack.
* Similarly expose specialized backend.Backend interfaces,
local.Backend and cloud.Backend, to convey specific information.
* Redo a bunch of the commands in terms of these.
* Keeping with this theme, turn the CreateStack options into an
opaque interface{}, and let the specific backends expose their
own structures with their own settings (like PPC name in cloud).
* Show both the org and PPC names in the cloud column printed in
the stack ls command, in addition to the Pulumi Cloud URL.
Unrelated, but useful:
* Special case the 401 HTTP response and make a friendly error,
to tell the developer they must use `pulumi login`. This is
better than tossing raw "401: Unauthorized" errors in their face.
* Change the "Updating stack '..' in the Pulumi Cloud" message to
use the correct action verb ("Previewing", "Destroying", etc).
This change does two things:
1) Adds progress reporting to our uploads.
2) Eliminates the sleeps that burned 7 seconds at the front of
any cloud update, needlessly. It's actually impressively
fast without these!
This improves the overall cloud CLI experience workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️ https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT CLOUD URL
my-cloud-stack 2017-12-01 ... 3 https://pulumi.com/
my-faf-stack n/a 0 n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface can be implemented to substitute
a new backend. This has operations to get and list stacks,
perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.