* Install missing plugins on startup
This commit addresses the problem of missing plugins by scanning the
snapshot and language host on startup for the list of required plugins
and, if there are any plugins that are required but not installed,
installs them. The mechanism by which plugins are installed is exactly
the same as 'pulumi plugin install'.
The installation of missing plugins is best-effort and, if it fails,
will not fail the update. (A sketch of this check follows the list
below.)
This commit addresses pulumi/pulumi-azure#200, where users using Pulumi
in CI often found themselves missing plugins.
* Add CHANGELOG
* Skip downloading plugins if no client provided
* Reduce excessive test output
* Update Gopkg.lock
* Update pkg/engine/destroy.go
Co-Authored-By: swgillespie <sean@pulumi.com>
* CR: make pluginSet a newtype
* CR: Assign loop induction var to local var
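A minimal sketch of the startup check described in the first item above; `pluginInfo`, `ensurePlugins`, and `installPlugin` are illustrative names, and the real install path shares its code with `pulumi plugin install`:
```go
package main

import "fmt"

// pluginInfo identifies a plugin by kind, name, and version.
type pluginInfo struct {
	Kind, Name, Version string
}

// ensurePlugins installs any required plugins that are missing.
// Installation is best-effort: failures are reported but do not fail
// the update.
func ensurePlugins(required, installed []pluginInfo) {
	have := make(map[pluginInfo]bool)
	for _, p := range installed {
		have[p] = true
	}
	for _, p := range required {
		if !have[p] {
			if err := installPlugin(p); err != nil {
				fmt.Printf("warning: failed to install %v: %v\n", p, err)
			}
		}
	}
}

// installPlugin stands in for the same download-and-unpack logic used
// by 'pulumi plugin install'.
func installPlugin(p pluginInfo) error {
	fmt.Printf("installing %s plugin %s v%s\n", p.Kind, p.Name, p.Version)
	return nil
}

func main() {
	required := []pluginInfo{{"resource", "aws", "0.16.2"}}
	ensurePlugins(required, nil)
}
```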
Various providers use properties that begin with "__" to represent
internal metadata that should not be presented to the user. These
changes look for such properties and elide them when displaying diffs.
This commit re-uses an error reporting mechanism previously used when
the plugin loader fails to locate a plugin that is compatible with the
requested plugin version. In addition to specifying what version we
attempted to load, it also outputs a command that will install the
missing plugin.
`edit.RenameStack` walks a Snapshot and rewrites all of the parts
where a stack name is present (URNs, the ID of the top level Stack
resource, providers)
These changes take advantage of the newly-added support for returning
inputs from Read to update a resource's inputs as part of a refresh.
As a consequence, the Pulumi engine will now properly detect drift
between the actual state of a resource and the desired state described
by the program and generate appropriate update or replace steps.
As part of these changes, a resource's old inputs are now passed to the
provider when performing a refresh. The provider can take advantage of
this to maintain the accuracy of any additional data or metadata in the
resource's inputs that may need to be updated during the refresh.
This is required for the complete implementation of
https://github.com/pulumi/pulumi-terraform/pull/349. Without access to
the old inputs for a resource, TF-based providers would lose all
information about default population during a refresh.
If a provider returns information about the top-level properties that
differ, use those keys to filter the diffs that are rendered to the
user.
Fixes #2453.
* Look for exact match when loading plugins
Pulumi's current behavior when loading plugins is surprising in that it
will attempt to load the "latest" provider binary instead of exactly the
version that was requested. Since provider binaries and provider
packages are tied together and versioned together, this is going to be
problematic if a provider makes a breaking change.
Although there are other issues in this area, this commit fixes the
arguably bug-like behavior of loading the latest plugin and instead opts
to load the plugin that exactly matches the requested semver range. Today, the
engine will never ask for anything other than an exact version match.
Since this is a breaking change, this commit also includes an
environment variable that allows users to return back to the "old"
plugin loading behavior if they are broken. The intention is that this
escape hatch can be removed in a future release once we are confident
that this change does not break people.
* CR feedback
* Use SelectCompatiblePlugin for HasPluginGTE check
* Decrease log level for HTTP requests and responses
Logging each HTTP request and response can get quite chatty, especially
when publishing a lot of events. This increases the verbosity level of
these logs so that they don't get emitted at level 9, which is the
general level that providers use when issuing verbose logs.
* Appease linter
We previously logged the number of replaces and changes returned from a call to Diff, but not the actual properties that were forcing replace. Several times we've had to debug issues with unexpected replaces being proposed, and this information is very useful to have access to.
Changes the verbose logging to include the property names for both replaces and changes instead of just the count.
* Enable delete parallelism for Python
* Add CHANGELOG.md entry
* Expand changelog message - upgrade to Python 3
* Rework stack rm test
The service now allows removing a stack if it just contains the top
level `pulumi:pulumi:Stack` resource, so we need to actually create
another resource before `stack rm` fails, telling you to pass
`--force`.
Fixes #2444
When `pulumi stack rm` is run against a stack with resources, the
service will respond with an error if `--force` is not
passed. Previously we would just dump the contents of this error and
it looked something like:
`error: [400] Bad Request: Stack still has resources.`
We now handle this case more gracefully, showing our usual "this stack
still has resources" error like we would for the local backend.
Fixes #2431
These changes add a new flag to the various `ResourceOptions` types that
indicates that a resource should be deleted before it is replaced, even
if the provider does not require this behavior. The usual
delete-before-replace cascade semantics apply.
Fixes #1620.
In general, a "delete" in Pulumi is destroying an actual physical
resource. In the case of a read resource, however, the delete is
merely removing the resource from the stack; the physical resource
is not affected. These changes attempt to clarify this situation by
using the term "discard" rather than "delete".
Fixes #2015.
Because of the change to include a stack's project as part of its
identity in the service, the names passed to StackReference now
require the project name as well.
Improve the error message when they do not include them.
- Add support for per-property dependencies to the Go SDK
- Add tests for first-class secret rejection in the checkpoint and RPC
layers and language SDKs
This implements the new algorithm for deciding which resources must be
deleted due to a delete-before-replace operation.
We need to compute the set of resources that may be replaced by a
change to the resource under consideration. We do this by taking the
complete set of transitive dependents on the resource under
consideration and removing any resources that would not be replaced by
changes to their dependencies. We determine whether or not a resource
may be replaced by substituting unknowns for input properties that may
change due to deletion of the resources their value depends on and
calling the resource provider's Diff method.
This is perhaps clearer when described by example. Consider the
following dependency graph:
  A
__|__
B   C
|  _|_
D  E  F
In this graph, all of B, C, D, E, and F transitively depend on A. It may
be the case, however, that the changes to a resource R's properties
that would occur if a resource on R's path to A were deleted and
recreated would not cause R to be replaced. For example, the edge from
B to A may be a simple dependsOn edge, such that a change to A does not
actually influence any of B's input properties. In that case, neither B
nor D would need to be deleted before A could be deleted.
In order to make the above algorithm a reality, the resource monitor
interface has been updated to include a map that associates an input
property key with the list of resources that input property depends on.
Older clients of the resource monitor will leave this map empty, in
which case all input properties will be treated as depending on all
dependencies of the resource. This is probably overly conservative, but
it is less conservative than what we currently implement, and is
certainly correct.
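A minimal sketch of that filtering pass; the `resource` type and the `wouldReplace` predicate are illustrative stand-ins for the real state types and the substitute-unknowns-and-call-Diff check described above:
```go
package main

import "fmt"

type resource struct {
	urn  string
	deps []*resource // direct dependencies of this resource
}

// dependentsToDelete walks the dependents of root and keeps only those
// that would actually be replaced by a change to one of their
// dependencies. If a resource would not be replaced, its dependents are
// not visited through it, so a pure dependsOn edge cuts off the subtree
// below it.
func dependentsToDelete(root *resource, all []*resource, wouldReplace func(*resource) bool) []*resource {
	// Index direct dependents.
	dependents := make(map[*resource][]*resource)
	for _, r := range all {
		for _, d := range r.deps {
			dependents[d] = append(dependents[d], r)
		}
	}
	var out []*resource
	seen := map[*resource]bool{root: true}
	var visit func(*resource)
	visit = func(r *resource) {
		for _, d := range dependents[r] {
			if seen[d] {
				continue
			}
			seen[d] = true
			// Substituting unknowns for the inputs that depend on r and
			// calling the provider's Diff happens here in the real code;
			// wouldReplace stands in for that check.
			if wouldReplace(d) {
				out = append(out, d)
				visit(d)
			}
		}
	}
	visit(root)
	return out
}

func main() {
	a := &resource{urn: "A"}
	b := &resource{urn: "B", deps: []*resource{a}} // pure dependsOn edge
	c := &resource{urn: "C", deps: []*resource{a}}
	d := &resource{urn: "D", deps: []*resource{b}}
	all := []*resource{a, b, c, d}
	// Pretend only C's inputs are actually affected by replacing A.
	del := dependentsToDelete(a, all, func(r *resource) bool { return r.urn == "C" })
	for _, r := range del {
		fmt.Println("delete before replace:", r.urn)
	}
}
```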
The service also does this filtering on requests, because we'll need
to support older clients, but it would be nice if the CLI itself also
cleaned things up.
This change starts to use a stack's project name as part of its
identity when talking to the cloud backend, which the Pulumi Service
now supports.
When displaying or parsing stack names for the cloud backend, we now
support the following schemes:
`<stack-name>`
`<owner-name>/<stack-name>`
`<owner-name>/<project-name>/<stack-name>`
When the owner is not specified, we assume the currently logged in
user (as we did before). When the project name is not specified, we
use the current project (and fail if we can't find a `Pulumi.yaml`)
Fixes #2039
Resources gain two new fields: `PropertyDependencies` and
`PendingReplacement`. The former maps an input property's name to the
dependencies that may affect the value of that property. The latter is
used to track resources that have been deleted as part of a
delete-before-replace operation but have not yet been recreated.
In addition to the new fields, resource properties may now contain
encrypted first-class secret values. These values are of type `SecretV1`,
where the `Sig` field is set to `resource.SecretSig`.
Finally, the deployment type gains a new field, `SecretsProviders`,
which contains any configuration necessary to handle secrets that may be
present in resource properties.
* Use both an in-proc and out-of-proc pipenv lock
Turns out that flock alone is not sufficient to guarantee exclusive
access to a resource within a single process. To remedy this, a new
FileMutex type wraps both an in-proc mutex and an out-of-proc
file-backed mutex to achieve the goal of exclusive access to a resource
in both in-proc and out-of-proc scenarios (a sketch follows this list).
This commit also uses this lock globally in the integration test
framework in order to globally serialize invocations of pipenv install.
* Remove merge markers
* Use a file lock for serializing pipenv installs
An in-process mutex is not sufficient for serializing pipenv installs
because 1) the go test runner occasionally will split test executions
into multiple processes and 2) each test gets an instance of a
programTester and we'd need to share the mutex globally if we wanted to
successfully serialize access to the pipenv install command.
* Please linter
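The shape of the combined lock described in the first item above, sketched assuming the gofrs/flock package for the file-backed half (the type and lock path here are illustrative):
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"

	"github.com/gofrs/flock"
)

// FileMutex guards a resource against concurrent use both by other
// goroutines in this process (via the in-proc mutex) and by other
// processes (via the file-backed lock).
type FileMutex struct {
	mu   sync.Mutex
	file *flock.Flock
}

func NewFileMutex(path string) *FileMutex {
	return &FileMutex{file: flock.New(path)}
}

func (fm *FileMutex) Lock() error {
	fm.mu.Lock() // serialize within this process first
	if err := fm.file.Lock(); err != nil { // then across processes
		fm.mu.Unlock()
		return err
	}
	return nil
}

func (fm *FileMutex) Unlock() error {
	if err := fm.file.Unlock(); err != nil {
		return err
	}
	fm.mu.Unlock()
	return nil
}

func main() {
	m := NewFileMutex(filepath.Join(os.TempDir(), "pipenv-install.lock"))
	if err := m.Lock(); err != nil {
		panic(err)
	}
	defer m.Unlock()
	fmt.Println("holding the pipenv lock")
}
```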
When constructing an Archive based off a directory path, we would
ignore any symlinks that we saw while walking the file system
collecting files to include in the archive.
A user reported an issue where they were unable to use the
[sharp](https://www.npmjs.com/package/sharp) library from NPM with a
lambda deployed via Pulumi. The problem was that the library includes
native components, and these native components include a bunch of
`*.so` files. As is common, there's a regular file with a name like
`foo.so.1.2.3` and then symlinks to that file with the names
`foo.so.1.2`, `foo.so.1` and `foo.so`. Consumers of these SOs will
try to load the shorter names, i.e. `foo.so` and expect the symlink to
resolve to the actual library.
When these links are not present, upstack code fails to load.
This change modifies our logic such that if we have a symlink and it
points to a regular file, we include it in the archive. At this time,
we don't try to add it to the archive as a symlink, instead we just
add it as another copy of the regular file. We could explore trying to
include these things as symlinks in archive formats that allow
them. (While zip does support this, I'm less sure it would be a great
idea: given the set of tricks we already play to ensure our zip files
are usable across many cloud vendors' serverless offerings, throwing
symlinks into the mix may end up causing even more downstream
weirdness.)
We continue to ignore symlinks which point at directories. In
practice, these seem fairly uncommon and doing so lets us not worry
about trying to deal with ensuring we don't end up chasing our tail in
cases where there are circular references.
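A sketch of that walk-time decision; the helper and its output are illustrative, not the actual archive code:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// visit decides whether a path encountered during the archive walk
// should be included.
func visit(path string, info os.FileInfo) error {
	if info.Mode()&os.ModeSymlink != 0 {
		// Resolve the link; include it only if it points at a regular
		// file. Symlinks to directories are still skipped so we never
		// chase circular references.
		target, err := os.Stat(path) // follows the link
		if err != nil {
			return err
		}
		if !target.Mode().IsRegular() {
			return nil
		}
		// Add the link as another copy of the regular file rather than
		// as a symlink entry in the archive.
		fmt.Println("adding (copy of link target):", path)
		return nil
	}
	if info.Mode().IsRegular() {
		fmt.Println("adding:", path)
	}
	return nil
}

func main() {
	_ = filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		return visit(path, info)
	})
}
```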
While this change is in pulumi/pulumi, the downstream resource
providers will need to update their vendored dependencies in order to
pick this up and have it work end to end.
Fixes #2077
setuptools's "develop" action is not safe to run concurrently when
targeting the same source tree. In order to work around this, this
commit explicitly serializes package installations.
- Ensure new projects have a project name in line with what we'd like
to enforce going forward
- Do more aggressive validation during the interactive prompts during
`pulumi new`
- Fix an issue where the interactive prompt rendered weirdly when
there was a validation error
Contributes to #1988. Fixes #1441
We continue to do so when `--debug` has been passed, similar to how
these events are elided from the local display when you are not in a
debug context.
* Initial stack history command
* Adding use of color pkg, adding background colors to color pkg, and removing extra stack output
* gofmt-ed colors file
* Fixing format and removing JSON output
* Fixing nits, changing output for environment, and adding some tests
* fixing failing history test
This value was never used before, but it had a shorter name. In other
API Types we are using `projectName` which we all prefer. Since we are
going to start using this value going forward, let's adopt the good
name now when it won't break anyone.
This will allow the service to include information about what project
a stack is associated with when listing all stacks a user has access
to.
This was not previously needed because the project did not play into
the stack identity, but it will shortly.
This test had been intermittently failing due to a race condition. Its
implementation of `plugin.Provider.Read` was intended to ensure that
the cancellation of a refresh operation occurred. As written, it was
only able to ensure that the cancellation was requested.
These changes ensure that cancellation has been acknowledged by the
engine by providing an implementation of `plugin.Provider.Cancel`
that closes a channel on which the implementation of `Read` waits.
We're changing /account to redirect to /account/profile instead of
/account/tokens as the user profile settings are a more natural place
to land when going into account settings.
This commit changes the CLI to link directly to /account/tokens,
avoiding having to click on "Access Tokens" to reach the tokens page
when coming from the URL printed by the CLI.
When returning immediately from the loop, we close the `done` channel early. This signals that we have finished processing every engine event, but that isn't true: some events may still be in-flight in `recordEngineEvent`. (This could potentially lead to a race condition in the `diag.Sink` passed to the API client used to record the call.)
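One possible shape for the fix, sketched with a WaitGroup so that `done` only closes after every in-flight `recordEngineEvent` call has finished (all names here are illustrative):
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	events := make(chan string)
	done := make(chan struct{})
	var inFlight sync.WaitGroup

	go func() {
		for e := range events {
			inFlight.Add(1)
			go func(e string) {
				defer inFlight.Done()
				recordEngineEvent(e)
			}(e)
		}
		// Wait for every in-flight recording before signaling
		// completion; closing done any earlier would race with them.
		inFlight.Wait()
		close(done)
	}()

	events <- "stdoutEvent"
	events <- "summaryEvent"
	close(events)
	<-done
}

func recordEngineEvent(e string) { fmt.Println("recorded:", e) }
```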
This ensures that the gRPC server is properly shut down. This fixes an
issue in which a resource plugin that is still configuring could report
log messages to the plugin host, which would in turn attempt to send
diagnostic packets over a closed channel, causing a panic.
Fixes #2170.
The event rendering goroutine in the remote backend was not properly
synchronizing with the goroutine that created it, and could continue
executing after its creator finished. I believe that this is the root
cause of #1850.
When reading values like access keys or secrets from the terminal, we
would use the `terminal.ReadPassword` function to ensure characters
the user typed were not echoed back to the console, as a convenience.
When standard input was not connected to a tty (which would happen in
some cases like in docker when -t was not passed, or in CI), this would
fail with an error about a bad ioctl. Update our logic such that
when standard input is not connected to a terminal, we just read input
normally.
While I was in the area, I unified the code for Windows and *NIX for
these functions.
Fixes #2017
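A sketch of the tty-aware read described above, using the golang.org/x/term package (the original code used `terminal.ReadPassword` from x/crypto; the function name here is illustrative):
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"golang.org/x/term"
)

// readConsoleNoEcho reads a line of input, suppressing echo only when
// standard input is actually a terminal. When stdin is a pipe (docker
// without -t, CI, ...) it falls back to a plain buffered read instead
// of failing with a bad ioctl.
func readConsoleNoEcho(prompt string) (string, error) {
	fmt.Print(prompt)
	if term.IsTerminal(int(os.Stdin.Fd())) {
		b, err := term.ReadPassword(int(os.Stdin.Fd()))
		fmt.Println()
		return string(b), err
	}
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	return strings.TrimRight(line, "\r\n"), err
}

func main() {
	token, err := readConsoleNoEcho("access token: ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("read", len(token), "characters")
}
```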
This option allows the user to override the file used to fetch and store
configuration information for a stack. It is available for the config,
destroy, logs, preview, refresh, and up commands.
Note that this option is not persistent: if it is not specified, the
stack's default configuration will be used. If an alternate config file
is used exclusively for a stack, it must be specified to all commands
that interact with that stack.
This option can be used to share plaintext configuration across multiple
stacks. It cannot be used to share secret configuration, as secrets are
associated with a particular stack and cannot be decrypted by other
stacks.
When the install fails, we end up printing the entire contents of
yarn's stdout to stdout. This output can often be quite long and will
cause Travis to fail in some cases.
The regular error output should be sufficient for us to diagnose any
issues we'll face.
Add a new property to ProgramTestOptions, `Overrides`, that allows a
test to request that a different version of a package be used instead
of what would be listed in the package.json file.
This will be used by our nightly automation to run everything "at head"
Semver allows you to attach "build metadata" to a version by appending
the version with `+` and then metadata. In #2216 we started to take
advantage of this as the place to put the git commit information,
instead of including it as part of the "version". This is more in line
with what Semver expects to be done, because git commit information
isn't orderable.
Because of this, we started to publish plugins with versions like
`v0.16.5-dev.1542649729+g07d8224`. However, our logic for discovering
plugins in the cache did an initial filtering based on folder names in
the cache and the regex did not allow a + in the "version" field.
This meant that from the point of view of the cache, the plugin was
not present. This would lead to very confusing behavior where
something like `pulumi plugin install resource azure
v0.16.5-dev.1542649729+g07d8224` would download the plugin, but
`pulumi plugin ls` would not see it and attempting to do an update
with it would fail with an error saying the plugin was not installed.
This change relaxes the regular expression to allow it to match these
sorts of paths. We still use the `semver` library to ensure that the
version we've extracted from the directory name is a valid semver.
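A sketch of the relaxed matching; the regex and directory layout here are illustrative, with the blang/semver library still validating whatever the regex extracts:
```go
package main

import (
	"fmt"
	"regexp"

	"github.com/blang/semver"
)

// pluginDirRe matches cache directory names such as
// "resource-azure-v0.16.5-dev.1542649729+g07d8224". The version
// portion is deliberately loose: it permits '+' so that build metadata
// survives the first filtering pass.
var pluginDirRe = regexp.MustCompile(`^(resource|language|analyzer)-([a-zA-Z0-9-]+)-v(.+)$`)

func parsePluginDir(name string) (kind, plugin string, ver semver.Version, err error) {
	m := pluginDirRe.FindStringSubmatch(name)
	if m == nil {
		return "", "", semver.Version{}, fmt.Errorf("not a plugin directory: %q", name)
	}
	// The semver library remains the arbiter of validity; the regex
	// only carves the name into its parts.
	v, err := semver.ParseTolerant(m[3])
	if err != nil {
		return "", "", semver.Version{}, err
	}
	return m[1], m[2], v, nil
}

func main() {
	kind, name, v, err := parsePluginDir("resource-azure-v0.16.5-dev.1542649729+g07d8224")
	fmt.Println(kind, name, v, err)
}
```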
After #2088, we began calling `Diff` on providers that are not configured
due to unknown configuration values. This hit an assertion intended to
detect exactly this scenario, which was previously unexpected.
These changes adjust `Diff` to indicate that a Diff is unavailable and
return an error message that describes why. The step generator then
interprets the diff as indicating a normal update and issues the error
message to the diagnostic stream.
Fixes #2223.
Configuration keys are simple namespace/name pairs, delimited by
":". For compatability, we also allow
"<namespace>:config:<name>", but we always record the "nice" name in
`Pulumi.<stack-name>.yaml`.
While `pulumi config` and friends would block setting a key like
`a:b:c` (where the "name" has a colon in it), it would allow
`a:config:b:c`. However, this would be recorded as `a:b:c` in
`Pulumi.<stack-name>.yaml`, which meant we'd error when parsing the
configuration file later.
To work around this, disallow ":" in the "name" part of a
configuration key. With this change the following all work:
```
keyName
my-project:keyName
my-project:config:keyName
```
However, both
`my-project:keyName:subKey`
`my-project:config:keyName:subKey`
are now disallowed.
I considered allowing colons in subkeys, but I think it adds more
confusion (due to the interaction with how we allow you to elide the
project name in the default case) than is worthwhile at this point.
Fixes #2171
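A sketch of the parsing rule that results; `parseKey` is an illustrative stand-in for the real parser:
```go
package main

import (
	"fmt"
	"strings"
)

// parseKey splits a configuration key into namespace and name. Bare
// names are allowed (the project namespace is filled in elsewhere), as
// is the legacy "<namespace>:config:<name>" form; any further colons
// in the name are rejected.
func parseKey(key string) (namespace, name string, err error) {
	parts := strings.Split(key, ":")
	switch {
	case len(parts) == 1:
		name = parts[0]
	case len(parts) == 2:
		namespace, name = parts[0], parts[1]
	case len(parts) == 3 && parts[1] == "config":
		namespace, name = parts[0], parts[2]
	default:
		err = fmt.Errorf("invalid configuration key %q", key)
	}
	return namespace, name, err
}

func main() {
	for _, k := range []string{
		"keyName",
		"my-project:keyName",
		"my-project:config:keyName",
		"my-project:keyName:subKey", // now disallowed
	} {
		ns, n, err := parseKey(k)
		fmt.Printf("%-28s => ns=%q name=%q err=%v\n", k, ns, n, err)
	}
}
```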
When launching plugins today, `pulumi` looks in two places:
1. It looks to see if the plugin is on the $PATH and if so, uses
it. This makes it easy to force a specific version of a resource
provider to be used and is what happens at development time (since
resource providers make their way onto $PATH via GOBIN).
2. If the above fails, it looks in the "plugin cache" in
`~/.pulumi/plugins`. This is the location that `pulumi plugin
install` places plugins.
Unlike resource provider plugins, we don't yet deliver language
plugins via `pulumi plugin install` so the language provider plugins
must be on the `$PATH` to be found. This is okay, because when we ship
the SDK, we include the executables next to `pulumi` itself.
However, if a user chooses to not put `pulumi` on their $PATH, or they
do but it is a symlink to the real `pulumi` binary installed
somewhere, we'd fail to find the language plugins, since they would
not be on the `$PATH`.
To address this, when probing for language plugins, also consider
binaries next to the currently running `pulumi` process.
Fixes #1956
While the lifecycle tests wrote a `.yarnrc` file to ensure that copies
of `yarn` did not race with one another, the more barebones testing
framework did not.
This should address some of the yarn issues we've been seeing in CI
recently
Under our old versioning system, when we started a new point release,
we'd tag the HEAD commit of master with a tag like `v0.16.6-dev` and
our scripts would use this to generate a new version number. This
required a great deal of gymnastics when producing a release and
caused us to litter these -dev tags everywhere.
To improve this, we change version number generation to the following
strategy:
1. If the commit we are building has a tag applied to it, use that tag
as the version (appending the dirty bit metadata to the version, if
needed).
2. If the commit we are building does not have a tag applied to it,
take the version from the next reachable tag, increment the patch
version and then append the `-dev` pre-release tag. As part of this,
we also make a slight tweak to our semver generation such that instead
of `-dev<TIMESTAMP>` we use `-dev.<TIMESTAMP>`, which is more in line
with what semver recommends.
The issue is related to this code:
https://github.com/pulumi/pulumi/blob/v0.16.4/pkg/workspace/plugins.go#L155-L195
Note that we use `defer` to ensure we close our handle to the file we
are unpacking when we encounter a file in the tarball. However, the
defers don't run until the containing function ends, so when we go to
do the rename, our process still has a bunch of open file handles, which
prevents the directory from being renamed because it is "in use".
By doing all of the work in an anonymous function, we ensure that the
defer statements run before we go to rename the directory.
Fixes #2217
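The shape of that fix, sketched: moving the per-entry work into its own function (equivalent to the anonymous function described above) makes each deferred Close run before the final rename:
```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// extractFile unpacks a single tarball entry. Because the deferred
// Close lives in this helper rather than the caller's loop, each file
// handle is released as soon as its entry is written.
func extractFile(tr *tar.Reader, hdr *tar.Header, dir string) error {
	dst, err := os.Create(filepath.Join(dir, hdr.Name))
	if err != nil {
		return err
	}
	defer dst.Close() // runs at the end of this entry, not the whole walk
	_, err = io.Copy(dst, tr)
	return err
}

// extractAll unpacks every entry and then renames the directory into
// place. All handles are closed by then, so the rename cannot fail
// with "in use".
func extractAll(tr *tar.Reader, tmp, final string) error {
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		if err := extractFile(tr, hdr, tmp); err != nil {
			return err
		}
	}
	return os.Rename(tmp, final)
}

func main() {
	// Build a tiny tarball in memory to drive the example.
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	tw.WriteHeader(&tar.Header{Name: "hello.txt", Mode: 0600, Size: 5})
	tw.Write([]byte("hello"))
	tw.Close()

	tmp, err := os.MkdirTemp("", "plugin-tmp")
	if err != nil {
		panic(err)
	}
	final := tmp + "-final"
	if err := extractAll(tar.NewReader(&buf), tmp, final); err != nil {
		panic(err)
	}
	fmt.Println("installed to", final)
}
```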
This is code that should have been part of #2211 but was accidentally
dropped during a rebase when responding to CR feedback.
When two installs for the same plugin are racing, the second one will
see the destination directory already exists and fail. We can safely
ignore this error.
These changes add a new resource to the Pulumi SDK,
`pulumi.StackReference`, that represents a reference to another stack.
This resource has an output property, `outputs`, that contains the
complete set of outputs for the referenced stack. The Pulumi account
performing the deployment that creates a `StackReference` must have
access to the referenced stack or the call will fail.
This resource is implemented by a builtin provider managed by the engine.
This provider will be used for any custom resources and invokes inside
the `pulumi:pulumi` module. Currently this provider supports only the
`pulumi:pulumi:StackReference` resource.
Fixes #109.
We run the same suite of changes that we did on gometalinter. This
ended up catching a few new issues, some of which were addressed and
some of which were baselined.
The lockfile is not super interesting when running the lifecycle
tests, since the test project is not a long lived artifact. There was
no real harm in writing it, except that in some cases `pipenv` gets
very confused when you have dependencies on pre-release
versions (which is the case right now as we are bringing up Python 3).
The packages are still installed correctly, even when we don't write a
lock file.
* Fix Python support in integration test framework
Update the integration test framework to use pipenv to bootstrap new
virtual environments for tests and use those virtual environments to run
pulumi update and pulumi preview.
Fixes pulumi/pulumi#2138
* Install packages via 'Dependencies' field
* Remove code for installing packages from Dependencies
In preparation for some workspace restructuring, I decided to scratch a
few itches of my own in the code:
* Change project's RuntimeInfo field to just Runtime, to match the
serialized name in JSON/YAML.
* Eliminate the no-longer-used Context and NoDefaultIgnores fields on
project, and all of the associated legacy PPC-related code.
* Eliminate the no-longer-used IgnoreFile constant.
* Remove a bunch of "// nolint: lll" annotations, and simply format
the structures with comments on dedicated lines, to avoid overly
lengthy lines and lint suppressions.
* Mark Dependencies and InitErrors as `omitempty` in the JSON
serialization directives for CheckpointV2 files. This was done for
the YAML directives, but (presumably accidentally) omitted for JSON.
Go 1.10 made some breaking changes to the headers in archive/tar [1] and
archive/zip [2], breaking the expected values in tests. In order to keep
tests passing with both, wherever a hardcoded hash is expect we switch
on `runtime.Version()` to select whether we want the Go 1.9 (currently
supported Go version) or later version of the hash.
Eventually these switches should be removed in favour of using the later
version only, so they are liberally commented to explain the reasoning.
[1]: https://golang.org/doc/go1.10#archive/tar
[2]: https://golang.org/doc/go1.10#archive/zip
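A sketch of what such a switch looks like; the hash values here are placeholders, not the real expected values:
```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// expectedHash returns the hardcoded hash for the current toolchain.
// Go 1.10 changed the default tar/zip headers, so archives built by
// newer toolchains hash differently. These switches are temporary and
// should be deleted once Go 1.9 support is dropped.
func expectedHash() string {
	if strings.HasPrefix(runtime.Version(), "go1.9") {
		return "86d3f3a95c324c9479bd8986968f4327" // placeholder Go 1.9 value
	}
	return "5d41402abc4b2a76b9719d911017c592" // placeholder Go 1.10+ value
}

func main() {
	fmt.Println("expecting:", expectedHash())
}
```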
Downlevel versions of the Pulumi Node SDK assumed that a parallelism
level of zero implied serial execution, which current CLIs use to signal
unbounded parallelism. This commit works around the downlevel issue by
using math.MaxInt32 to signal unbounded parallelism.
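A sketch of the workaround; `normalizeParallel` is an illustrative name:
```go
package main

import (
	"fmt"
	"math"
)

// normalizeParallel maps the CLI's "unbounded" signal (0) onto a value
// that downlevel Node SDKs interpret correctly: they treated 0 as
// serial execution, so we send math.MaxInt32 instead.
func normalizeParallel(parallel int) int {
	if parallel <= 0 {
		return math.MaxInt32
	}
	return parallel
}

func main() {
	fmt.Println(normalizeParallel(0))  // 2147483647
	fmt.Println(normalizeParallel(16)) // 16
}
```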
In the past, we had a mode where the CLI would upload the Pulumi
program, as well as its contents and do the execution remotely.
We've since stopped supporting that, but all the supporting code has
been left in the CLI.
This change removes the code we had to support the above case,
including the `pulumi archive` command, which was a debugging tool to
generate the archive we would have uploaded (which was helpful in the
past to understand why behavior differed between local execution and
remote execution.)
If you took the time to type out `--skip-preview`, we should have high
confidence that you don't want a preview and you're okay with the
operation just happening without a prompt.
Fixes #1448
* Remove TODO for issue since fixed in PPCs.
* Update issue reference to source
* Update comment wording
* Remove --ppc arg of stack init
* Remove PPC references in int. testing fx
* Remove vestigial PPC API types
* Introduce new metadata keys `vcs.repo`, `vcs.kind` and `vcs.owner` to keep the keys generic for any vcs. Expanded the git SSH regex to account for bitbucket's .org domain.
* Introduce new stack tags keys with the same theme of detecting the vcs.
* Enable gzip compression on the wire
This change allows the Pulumi API client to gzip requests sent to the
Pulumi service if requested using the 'GzipCompress' http option.
This change also sets the Accept-Encoding: gzip header for all requests
originating from the CLI, indicating to the service that it is free to
gzip responses. The 'readBody' function is used in the API client to
read a response's body, regardless of how it is encoded (a sketch
follows this list).
Finally, this change sets GzipCompress: true on the
'PatchUpdateCheckpoint' API call, since JSON payloads in that call tend
to be large and it has become a performance bottleneck.
* spelling
* CR feedback:
1. Clarify and edit comments
2. Close the gzip.Reader when reading bodies
3. Log the payload size when logging compression ratios
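A sketch of the response-reading half described in the first item above, with illustrative names; note that the client sets Accept-Encoding itself (as the CLI does), which disables Go's automatic decompression:
```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// readBody reads a response body regardless of how it is encoded,
// un-gzipping transparently when the service compressed the response.
func readBody(resp *http.Response) ([]byte, error) {
	defer resp.Body.Close()
	var r io.Reader = resp.Body
	if strings.EqualFold(resp.Header.Get("Content-Encoding"), "gzip") {
		gz, err := gzip.NewReader(resp.Body)
		if err != nil {
			return nil, err
		}
		defer gz.Close() // per the CR feedback: close the gzip.Reader too
		r = gz
	}
	return io.ReadAll(r)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		gz.Write([]byte(`{"ok":true}`))
		gz.Close()
	}))
	defer srv.Close()

	// Setting Accept-Encoding ourselves disables Go's automatic
	// decompression, so readBody must handle the gzip itself.
	client := &http.Client{Transport: &http.Transport{DisableCompression: true}}
	req, _ := http.NewRequest("GET", srv.URL, nil)
	req.Header.Set("Accept-Encoding", "gzip")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	body, err := readBody(resp)
	fmt.Println(string(body), err)
}
```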
Providers with unknown properties are currently considered to require
replacement. This was intended to indicate that we could not be sure
whether or not replacement was required. Unfortunately, this was not a
good user experience, as replacement would never be required at runtime.
This caused quite a bit of confusion--never proposing replacement seems
to be the better option.
The provider registry was checking for a `nil` provider instance before
checking for a non-nil error. This caused the CLI to fail to report
important errors during the plugin load process (e.g. invalid checkpoint
errors) and instead report a failure to find a matching plugin.
Some providers (namely Kubernetes) require unbounded parallelism in
order to function correctly. This commit enables the engine to operate
in a mode with unbounded parallelism and switches to that mode by
default.
* Add new 'pulumi state' command for editing state
This commit adds 'pulumi state unprotect' and 'pulumi state delete', two
commands that can be used to unprotect and delete resources from a
stack's state, respectively.
* Simplify LocateResource
* CR: Print yellow 'warning' before editing state
* Lots of CR feedback
* CR: Only delete protected resources when asked with --force
Last-minute events coming through the engine could cause the goroutine
iterating over engineEvents to write to displayEvents after it has
already been closed by the main goroutine.
Whenever an update fails partially, it gives the engine back some state
bag of outputs that should be persisted to the snapshot. When saving
this state, we shouldn't save the inputs that triggered the update that
failed, since the resource that exists will never have been updated
successfully with those inputs.
Instead of saving the new inputs on partial failed updates, this commit
saves the old inputs and the new outputs. The new outputs will likely
need to be refreshed to be useful, but the old inputs will be correct
from the perspective of the Pulumi program that generated the last
successful update.
Fixes pulumi/pulumi#2011
This adds an option, --suppress-outputs, to many commands, to
avoid printing stack outputs for stacks that might contain sensitive
information in their outputs (like key material, and whatnot).
Fixes pulumi/pulumi#2028.
This change adds GitLab CI support, by sniffing out the right
variables (equivalent to what we already do for Travis).
I've also restructured the code to share more logic with our
existing CI detection code, now moved to the pkg/util/ciutil
package, and will be fleshing this out more in the days to come.
There is a seldom-used capability in our CLI, the ability to pass
-m to specify an update message, which we will then show prominently.
At the same time, we already scrape some interesting information from
the Git repo from which an update is performed, like the SHA hash,
committer, and author information. We explicitly didn't want to scrape
the entire message just in case someone put sensitive info inside of it.
It seems safe -- indeed, appealing -- to use just the title portion
as the default update message when no other has been provided (the
majority case). We'll work on displaying it in a better way, but this
strengthens our GitOps/CI/CD story.
Fixes pulumi/pulumi#2008.
It's not good practice to dirty up the stdout stream with messages
like this. In fact, I questioned whether we should be emitting
*anything* here, but given that this is often used in unattended
environments, coupled with the fact that it's easy to accidentally set
this and then wonder why `pulumi login` is silently returning, led
me to keep it in (for now, at least).
The preview will proceed as if the operations had not been issued (i.e.
we will not speculate on a new state for the stack). This is consistent
with our behavior prior to the changes that added pending operations to
the checkpoint.
This change stops attempting to pop a web browser in non-interactive
sessions. Instead, the PULUMI_ACCESS_TOKEN environment variable must
be set. Otherwise, any attempt to use the CLI will yield
$ pulumi --non-interactive preview
error: PULUMI_ACCESS_TOKEN must be set for login during non-interactive CLI sessions
This is the behavior we want for Docker-based invocations of the CLI,
and so is part of pulumi/pulumi#1991.
The diff display code was not expecting that it would be possible for
resource properties to transition from being an archive to being an
asset, or the other way around. This commit prints out a reasonable diff
if this situation occurs instead of crashing.
* Process deletions conservatively in parallel
This commit allows the engine to conservatively delete resources in
parallel when it is sure that it is legal to do so. In the absence of a
true data-flow oriented step scheduler, this approach provides a
significant improvement over the existing serial deletion mechanism.
Instead of processing deletes serially, this commit will partition the
set of condemned resources into sets of resources that are known to be
legally deletable in parallel. The step executor will then execute those
independent lists of steps one-by-one until all steps are complete.
* CR: Make ResourceSet a normal map
* Only use the dependency graph if we can trust it
* Reverse polarity of pendingDeletesAreReplaces
* CR: un-export a few types
* CR: simplify control flow in step generator when scheduling
* CR: parents are dependencies, fix loop index
* CR: Remove ParentOf, add new test for parent dependencies
API calls against the Pulumi service may start setting a new header,
`X-Pulumi-Warning`. The value of this header should be presented to
the user as a warning.
The Service will use this to provide additional information to the
user without having the CLI have to know about every specific warning
path.
Since I was digging around over the weekend after the change to move
away from light black, and the impact it had on less important
information showing more prominently than it used to, I took a step
back and did a deeper tidying up of things. Another side goal of this
exercise was to be a little more respectful of terminal width; when
we could say things with fewer words, I did so.
* Stylize the preview/update summary differently, so that it stands
out as a section. Also highlight the total changes with bold -- it
turns out this has a similar effect to the bright white colorization,
just without the negative effects on e.g. white terminals.
* Eliminate some verbosity in the phrasing of change summaries.
* Make all heading sections stylized consistently. This includes
the color (bright magenta) and the vertical spacing (always a newline
separating headings). We were previously inconsistent on this (e.g.,
outputs were under "---outputs---"). Now the headings are:
Previewing (etc), Diagnostics, Outputs, Resources, Duration, and Permalink.
* Fix an issue where we'd parent things to "global" until the stack
object later showed up. Now we'll simply mock up a stack resource.
* Don't show messages like "no change" or "unchanged". Prior to the
light black removal, these faded into the background of the terminal.
Now they just clutter up the display. Similar to the elision of "*"
for OpSames in a prior commit, just leave these out. Now anything
that's written is actually a meaningful status for the user to note.
* Don't show the "3 info messages," etc. summaries in the Info column
while an update is ongoing. Instead, just show the latest line. This
is more respectful of width -- I often find that the important
messages scroll off the right of my screen before this change.
For discussion:
- I actually wonder if we should eliminate the summary
altogether and always just show the latest line. Or even
blank it out. The summary feels better suited for the
Diagnostics section, and the Status concisely tells us
how a resource's update ended up (failed, succeeded, etc).
- Similarly, I question the idea of showing only the "worst"
message. I'd vote for always showing the latest, and again
leaving it to the Status column for concisely telling the
user about the final state a resource ended up in.
* Stop prepending "info: " to every stdout/stderr message. It adds
no value, clutters up the display, and worsens horizontal usage.
* Lessen the verbosity of update headline messages, so we now instead
of e.g. "Previewing update of stack 'x':", we just say
"Previewing update (x):".
* Eliminate vertical whitespace in the Diagnostics section. Every
independent console.out previously was separated by an entire newline,
which made the section look cluttered to my eyes. These are just
streams of logs, there's no reason for the extra newlines.
* Colorize the resource headers in the Diagnostic section light blue.
Note that this will change various test baselines, which I will
update next. I didn't want those in the same commit.
Now that we're showing SpecUnimportant as regular text, the extra
"Previewing"/"Updating" line that we show really stands out as being
superfluous. For example, we previously said:
Updating stack 'docker-images'
Performing changes:
...
This change eliminates that second line, so we just have:
Updating stack 'docker-images':
...
Recently, we eliminated bright black text, which IMHO makes the
"same" lines really stand out more than we want them to. This is
partly just due to the heavyweight nature of the "*" character,
which we precede every line with. This has the effect of making it
toughter to scan the update to see what's going to happen. The goal
of SpecUnimportant (bright black) was that we wanted to draw less
attention to certain elements of the CLI text -- and have them fade
into the background (apparently it was too successful at this ;-))
So, this change eliminates the "*" prefix for same operations
altogether. It reads better to my eyes and keeps the original intent.
- Attempting to read an archive with an unknown type now returns an
error
- Attempting to read a path archive that is neither a known archive
format or a directory now returns an error
Fixes #1529.
Fixes #1953.
Add two command-line options to the test framework, "-dirs" and
"list-dirs". The former accepts a regex that is used to select which
integration tests to run. The latter lists available integration tests
without running them.
* Protobuf changes
* Move management of root resource state to engine
This commit fixes a persistent side-by-side issue in the NodeJS SDK by
moving the management of root resource state to the engine. Doing so
adds two new endpoints to the Engine gRPC service: 1) GetRootResource
and 2) SetRootResource, which get and set the root resource
respectively.
* Rebase against master, regenerate proto
* Have backend.ListStacks return a new StackSummary interface
* Update filestate backend to use new type
* Update httpstate backend to use new type
* Update commands to use new type
* lint
* Address PR feedback
* Lint
In order to present a message with a link to a user's access tokens in
the service, we have to convert the API server's URL to the URL for
the console. We understand how to do this for servers that look like
our servers (i.e. their host name is `api.<whatever>`), but in general
we can't do this.
Rewrite the code such that we only print a message about how to find
your access tokens (and offer browser-based login) in cases where we
are able to construct a URL to the console.
Fixes #1930
* Revert "Don't show stack outputs when update fails (#1916)"
This reverts commit e3f89e82aa.
* Be more precise about printing outputs
This commit prints outputs only if they are known to be complete. This
avoids massive red diffs during previews and when component resources
fail to call registerResourceOutputs.
* CR: Clean up large boolean expression and comment
* CR: boolean compromise
* Retire pending deletions at start of plan
Instead of letting pending deletions pile up to be retired at the end of
a plan, this commit eagerly disposes of any pending deletions that were
pending at the end of the previous plan. This is a nice usability win
and also reclaims an invariant that at most one resource with a given
URN is live and at most one is pending deletion at any point in time.
* Rebase against master
* Fix a test issue arising from shared snapshots
* CR feedback
* plan -> replacement
* Use ephemeral statuses to communicate deletions
* Close cancellation source before closing events
The cancellation source logs cancellation messages to the engine event
channel, so we must first close the cancellation source before closing
the channel.
* CR: Fix race in shutdown of signal goroutine
* Don't show stack outputs when update fails
It is basically guaranteed that stack outputs are going to be
meaningless if a plan fails, so this commit doesn't display them if an
error has occurred.
* CR: expand on comment
We signal provider cancellation by hanging a goroutine off of the plan
executor's parent context. To ensure clean shutdown, this goroutine also
listens on a channel that closes once the plan has finished executing.
Unfortunately, we were closing this channel too early, and the close was
racing with the cancellation signal. These changes ensure that the
channel closes after the plan has fully completed.
Fixes #1906.
Fixes pulumi/pulumi-kubernetes#185.
* Validate type tokens before using them
When registering or reading a resource, we take the type token given to
us from the language host and assume that it's valid, which resulted in
assertion failures in various places in the engine. This commit
validates the format of type tokens given to us from the language host
and issues an appropriate error if it's not valid.
Along the way, this commit also improves the way that fatal exceptions
are rendered in the Node language host.
* Pre-allocate an exception for ReadResource
* Fix integration test
* CR Feedback
This commit is a lower-impact change that fixes the bugs associated with
invalid types on component resources and only checks that a type is
valid on custom resources.
* CR Take 2: Fix up IsProviderType instead of fixing call sites
* Please gometalinter
It is possible for a resource to be a "same" even if the contents of
its input bags differ, because the resource's provider may deem the
physical change to be semantically irrelevant.
Previously `new` was operating under the assumption that it was always
going to be creating a new project/stack, and would always prompt for
these values. However, we want to be able to use `new` to pull down the
source for an existing stack. This change adds a `--stack` flag to `new`
that can be used to specify an existing stack. If the specified stack
already exists, we won't prompt for the project name/description, and
instead just use the existing stack's values. If `--stack` is specified,
but doesn't already exist, it will just use that as the stack name
(instead of prompting) when creating the stack. `new` also now handles
configuration like `up <url>`: if the stack is a preconfigured empty
stack (i.e. it was created/configured in the Pulumi Console via Pulumi
Button or New Project), we will use the existing stack's config without
prompting. Otherwise we will prompt for config, and just like `up
<url>`, we'll use the existing stack's config values as defaults when
prompting, or if the stack didn't exist, use the defaults from the
template.
Previously `up <url>`'s handling of the project name/description wasn't
correct: it would always automatically use the values from the template
without prompting. Now, just like `new`:
- When updating an existing stack, it will not prompt, and just use the
existing stack's values.
- When creating a new stack, it will prompt for the project
name/description, using the defaults from the template.
This PR consolidates some of the `new`/`up` implementation so it shares
code for this functionality. There's definitely opportunities for a lot
more code reuse, but that cleanup can happen down the line if/when we
have the cycles.
* Don't close eventChannel when panicking
The state of the system is completely unknown when panicking and in
general it's not safe to infer whether or not it is safe to close a
channel when in this state.
* CR feedback
* Spelling
* Introduce Result type to engine
The Result type can be used to signal the failure of a computation due
to both internal and non-internal reasons. If a computation failed due
to an internal error, the Result type carries that error with it and
provides it when the 'Error' method on a Result is called. If a
computation failed gracefully but wished to bail instead of continuing
a doomed plan, the 'Error' method returns nil. (A sketch follows this
list.)
* CR feedback
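A sketch of the Result type's surface as described above (illustrative, not the exact engine code):
```go
package main

import (
	"errors"
	"fmt"
)

// Result signals that a computation failed, either because of an
// internal error or because diagnostics were already issued and the
// plan should simply bail.
type Result struct {
	err error // nil for a graceful bail
}

// Bail produces a Result for a computation that failed gracefully:
// the user has already been told why, so no further error is carried.
func Bail() *Result { return &Result{} }

// FromError produces a Result carrying an internal error.
func FromError(err error) *Result { return &Result{err: err} }

// Error returns the internal error, or nil if the computation bailed.
func (r *Result) Error() error { return r.err }

func main() {
	results := []*Result{Bail(), FromError(errors.New("snapshot integrity failure"))}
	for _, r := range results {
		if err := r.Error(); err != nil {
			fmt.Println("internal error:", err)
		} else {
			fmt.Println("bail: diagnostics already issued")
		}
	}
}
```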
We generally want examples and apps to be authored such that they are
clonable/deployable as-is without using new/up (and want to
encourage this). That means no longer using the ${PROJECT} and
${DESCRIPTION} replacement strings in Pulumi.yaml and other text files.
Instead, good default project names and descriptions should be specified
in Pulumi.yaml and elsewhere.
We'll use the specified values as defaults when prompting the user, and
then directly serialize/save the values to Pulumi.yaml when configuring
the user's project. This does mean that name in package.json (for nodejs
projects) won't be updated if it isn't using ${PROJECT}, but that's OK.
Our templates in the pulumi/templates repo will still use
${PROJECT}/${DESCRIPTION} for now, to continue to work well with v0.15
of the CLI. After that version is no longer in use, we can update the
templates to no longer use the replacement strings and delete the code
in the CLI that deals with it.
This change implements the same preview behavior we have for
cloud stacks, in pkg/backend/httpstate, for local stacks, in
pkg/backend/filestate. This mostly required just refactoring bits
and pieces so that we can share more of the code, although it
does still entail quite a bit of redundancy. In particular, the
apply functions for both backends are now so close to being
unified, but still require enough custom logic that it warrants
keeping them separate (for now...)
This simply refactors all the display logic out of the
pkg/backend/filestate package. This helps to gear us up to better unify
this logic between the filestate and httpstate backends.
Furthermore, this really ought to be in its own non-backend,
CLI-specific package, but I'm taking one step at a time here.
This change alters the login prompt slightly, so that it is more
obvious that alternative methods exist.
Before this change, we would say:
$ pulumi login
We need your Pulumi account to identify you.
Enter your access token from https://app.pulumi.com/account
or hit <ENTER> to log in using your browser :
After this change, we say this instead:
$ pulumi login
Manage your Pulumi stacks by logging in.
Run `pulumi login --help` for alternative login options.
Enter your access token from https://app.pulumi.com/account
or hit <ENTER> to log in using your browser :
Also updated the help text to advertise this a bit more prominently.
This renames the backend packages to more closely align with the
new direction for them. Namely, pkg/backend/cloud becomes
pkg/backend/httpstate and pkg/backend/local becomes
pkg/backend/filestate. This also helps to clarify that these are meant
to be around state management and so the upcoming refactoring required
to split out (e.g.) the display logic (amongst other things) will make
more sense, and we'll need better package names for those too.
As part of making the local backend more prominent, this changes a few
aspects of how you use it:
* Simplify how you log into a specific cloud; rather than
`pulumi login --cloud-url <url>`, just say `pulumi login <url>`.
* Use a proper URL scheme to denote local backend usage. We have chosen
file://, since the REST API backend is of course always https://.
This means that you can say `pulumi login file://~` to use the local
backend, with state files stored in your home directory. Similarly,
we support `pulumi login file://.` for the current directory.
* Add a --local flag to the login command, to make local logins a
bit easier in the common case of using your home directory. Just say
`pulumi login --local` and it is sugar for `pulumi login file://~`.
* Print the URL for the backend after logging in; for the cloud,
this is just the user's stacks page, and for the local backend,
this is the path to the user's stacks directory on disk.
* Tidy up the documentation for login a bit to be clearer about this.
This is part of pulumi/pulumi#1818.
The existing message put the URL to visit and some explanation text on
the same line, which makes it a little harder to copy only the URL
into a browser. If this extra text ends up being copied as well as the
URL it can lead to failures later, when we try to decode the query
string as part of the OAuth flow.
It's easy enough to fix by just putting the URL on its own line, split
off from the text itself.
Fixes #1832
This changes two things:
1) Per PR feedback, change the text to "N changes found during refresh."
2) Colorize the parenthetical "No resources will be modified" message;
it looked a little odd being plain colored.
This commit reverts most of #1853 and replaces it with functionally
identical logic, using the notion of status message-specific sinks.
In other words, where the original commit implemented ephemeral status
messages by adding an `isStatus` parameter to most of the logging
methods in pulumi/pulumi, this implements ephemeral status messages as a
parallel logging sink, which emits _only_ ephemeral status messages.
The original commit message in that PR was:
> Allow log events to be marked "status" events
>
> This commit will introduce a field, IsStatus to LogRequest. A "status"
> logging event will be displayed in the Info column of the main
> display, but will not be printed out at the end, when resource
> operations complete.
>
> For example, for complex resource initialization, we'd like to display
> a series of intermediate results: [1/4] Service object created, for
> example. We'd like these to appear in the Info column, but not at the
> end, where they are not helpful to the user.
Previously, we only supported config keys that included a ':' delimiter
in config keys specified in the template manifest and in `-c` flags to
`new` and `up`. This prevented the use of project keys in the template
manifest and made it more difficult to pass such keys with `-c`,
effectively preventing the use of `new pulumi.Config()` in project code.
This change fixes this by allowing config keys that don't have a
delimiter in the template manifest and `-c` flags. In such cases, the
project name is automatically prepended behind the scenes, the same as
what `pulumi config set` does.
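A sketch of the prepending rule; `qualifyKey` is an illustrative name:
```go
package main

import (
	"fmt"
	"strings"
)

// qualifyKey prepends the project name to a bare config key, matching
// what `pulumi config set` does, so template manifests and -c flags
// can use `keyName` as shorthand for `<project>:keyName`.
func qualifyKey(key, project string) string {
	if !strings.Contains(key, ":") {
		return project + ":" + key
	}
	return key
}

func main() {
	fmt.Println(qualifyKey("dbPassword", "my-project")) // my-project:dbPassword
	fmt.Println(qualifyKey("aws:region", "my-project")) // aws:region
}
```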
The pattern we are using here is generally prone to error, so I hope we find a way to move away from this more generally - but for now we need to be able to configure more of these in places we are using `With`.
Also remove some `+""` that are tripping up the linter for me locally.
- Create all refresh steps before issuing any. This is important as the
state update loop expects all steps to exist.
- Check for cancellation later in the refresher.
This also fixes races in the SnapshotManager and the test journal that
could cause panics during cancellation.
The wording for refresh doesn't accurately convey that the operations
aren't actually mutating your resources, but instead are simply changing
your checkpoint state. This change (hopefully) helps in two ways:
First, put text just before the prompt:
Do you want to perform this refresh?
No resources will be modified as part of this refresh; just your stack's state will be.
Second, alter the summary ever-so-slightly, from:
info: 2 changes performed:
~ 2 resources updated
3 resources unchanged
to:
info: 2 changes refreshed:
~ 2 resources updated
3 resources unchanged
This reads just slightly better, and removes any sense of panic I might
have otherwise had that my refresh just did something wrong.
As I was in here, since I had to pass UpdateKind information to new
places, I cleaned up the situation where we had three mostly-similar
enums (but which actually diverged) and several areas where we were
using untyped strings for this same information. Now there's just one.
This fixes pulumi/pulumi#1551.
This commit will greatly improve the experience of dealing with partial
failures by simply re-trying to initialize the relevant resources on
every subsequent `pulumi up`, instead of printing a list of reasons the
resource had previously failed to initialize.
As motivation, consider our behavior in the following common, painful
scenario:
* The user creates a `Service` and a `Deployment`.
* The `Pod`s in the `Deployment` fail to become live. This causes the
`Service` to fail, since it does not target any live `Pod`s.
* The user fixes the `Deployment`. A run of `pulumi up` sees the
`Pod`s successfully initialize.
* Users will expect that the `Service` is now in a state of success,
as the `Pod`s it targets are alive. But, because we don't update the
`Service` by default, it perpetually exists in a state of error.
* The user is now required to change some trivial feature of the
`Service` just to trigger an update, so that we can see it succeed.
There are many situations like this. Another very common one is waiting
for test `Pod`s that are meant to successfully complete when some object
becomes live.
By triggering an empty update step for all resources that have any
initialization errors, we avoid all problems like this.
This commit will implement this empty-update semantics for partial
failures, as well as fix the display UX to correctly render the diff in
these cases.
Often when the test fails, you'll want to use the web console to
investigate the stack, looking at detailed log messages, for
example. But since we always delete the stack today, you often can't.
Stop running `pulumi stack rm` when the test fails, however do run a
`pulumi destroy` so we at least clean up any resources the stack may
have allocated during testing.
Fixes #1722
Replace the Source-based implementation of refresh with a phase that
runs as the first part of plan execution and rewrites the snapshot in-memory.
In order to fit neatly within the existing framework for resource operations,
these changes introduce a new kind of step, RefreshStep, to represent
refreshes. RefreshSteps operate similar to ReadSteps but do not imply that
the resource being read is not managed by Pulumi.
In addition to the refresh reimplementation, these changes incorporate those
from #1394 to run refresh in the integration test framework.
Fixes #1598.
Fixes pulumi/pulumi-terraform#165.
Contributes to #1449.
The fact that we include the `:config:` portion of a configuration
name when sending to the service is an artifact of history. It was
needed back when we had to support deploying using a new CLI
against a PPC that had older packages embedded in it.
We don't have that sort of problem anymore, so we can stop sending
this data.
* Show a better error message when decrypting fails
It is most often the case that failing to decrypt a secret implies that
the secret was transferred from one stack to another via copying the
configuration. This commit introduces a better error message for this
case and instructs users to explicitly re-encrypt their encrypted keys
in the context of the new stack.
* Spelling
* CR: Grammar fixes
These changes simplify a couple aspects of plan execution in the hopes of
clarifying some responsibilities and preparing the code for changes to the
implementation of refresh.
1. All aspects of plan execution are now managed by the plan executor,
which is no longer exported. Instead, it is abstracted behind
`Plan.Execute`.
2. The plan executor's error-handling and reporting have been unified
and simplified somewhat.
* Log errors coming from the language host
Similar to pulumi/pulumi#1762, fixes pulumi/pulumi#1775. The language
host can fail without issuing any diagnostics and it is very unclear
what happens if the engine does not log the error.
* CR feedback
A checkpoint write is unnecessary if it does not change the semantics of
the data currently stored in the checkpoint. We currently perform
unnecessary checkpoint writes in two cases:
- Same steps where no aspect of the resource's state has changed
- Replace steps, which exist solely for display purposes
The former case is particularly bothersome, as it is rather common to
run updates--especially in CI--that consist largely/entirely of these
same steps.
These changes eliminate the checkpoint writes we perform in these two
cases. Some care is needed to ensure that we continue to write the
checkpoint in the case of same steps that do represent meaningful
changes (e.g. changes to a resource's output properties or
dependencies).
Fixes #1769.
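A sketch of the kind of check involved; the state type and `sameRequiresWrite` are illustrative stand-ins for the real snapshot types:
```go
package main

import (
	"fmt"
	"reflect"
)

type resourceState struct {
	Outputs      map[string]interface{}
	Dependencies []string
}

// sameRequiresWrite reports whether a "same" step still changed
// anything that the checkpoint must record (outputs, dependencies).
// If not, the write can be skipped entirely, which is the common case
// for CI runs that consist largely of unchanged resources.
func sameRequiresWrite(old, updated *resourceState) bool {
	return !reflect.DeepEqual(old.Outputs, updated.Outputs) ||
		!reflect.DeepEqual(old.Dependencies, updated.Dependencies)
}

func main() {
	old := &resourceState{Outputs: map[string]interface{}{"id": "i-123"}}
	unchanged := &resourceState{Outputs: map[string]interface{}{"id": "i-123"}}
	changed := &resourceState{Outputs: map[string]interface{}{"id": "i-456"}}
	fmt.Println(sameRequiresWrite(old, unchanged)) // false: skip the write
	fmt.Println(sameRequiresWrite(old, changed))   // true: persist
}
```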
The glog package forces the use of golang's underlying flag package,
which Cobra does not use. To work around this, we had a complicated
dance around defining flags in multiple places, calling flag.Parse
explicitly and then stomping values in the flag package with values we
got from Cobra.
Because we ended up parsing parts of the command line twice, each with
a different set of semantics, we ended up with bad UX in some
cases. For example:
`$ pulumi -v=10 --logflow update`
Would fail with an error message that looked nothing like normal CLI
errors, whereas:
`$ pulumi -v=10 update --logflow`
Would behave as you expect. To address this, we now do two things:
- We never call flag.Parse() anymore. Whacking the flags with values we
  got from Cobra is sufficient for what we care about.
- We use a forked copy of glog which does not complain when
flag.Parse() is not called before logging.
Fixes #301. Fixes #710. Fixes #968.
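A minimal sketch of the "whacking" approach: copy the values Cobra parsed into the flags glog registered with the standard flag package, so flag.Parse never needs to run. The function name is illustrative; the flag names ("v", "logtostderr") are the ones glog registers:
```go
package cmd

import (
	"flag"
	"strconv"
)

// syncGlogFlags propagates values parsed by Cobra into the flags that glog
// registered with the standard flag package. flag.Set updates a flag's
// value directly, so we never have to call flag.Parse.
func syncGlogFlags(verbose int, logToStderr bool) {
	_ = flag.Set("v", strconv.Itoa(verbose))
	_ = flag.Set("logtostderr", strconv.FormatBool(logToStderr))
}
```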
1. 'readID' was never assigned to and was always the default value,
leading the refresh source to believe a resource was deleted
2. The refresh source could hang when a resource is deleted.
The plan executor assumed that the step generator was responsible for
logging its own diagnostics, which it sort of does, but it doesn't log the
majority of the diagnostics that come out of it. This commit logs all
errors coming out of step generation so that we don't unintentionally
drop errors.
* Add a list of in-flight operations to the deployment
This commit augments 'DeploymentV2' with a list of operations that are
currently in flight. This information is used by the engine to keep
track of whether or not a particular deployment is in a valid state.
The SnapshotManager is responsible for inserting and removing operations
from the in-flight operation list. When the engine registers an intent
to perform an operation, SnapshotManager inserts an Operation into this
list and saves it to the snapshot. When an operation completes, the
SnapshotManager removes it from the snapshot. From this, the engine can
infer that if it ever sees a deployment with pending operations, the
Pulumi CLI must have crashed or otherwise abnormally terminated before
seeing whether or not an operation completed successfully.
To remedy this state, this commit also adds code to 'pulumi stack
import' that clears all pending operations from a deployment, as well as
code to plan generation that will reject any deployments that have
pending operations present.
At the CLI level, if we see that we are in a state where pending
operations were in-flight when the engine died, we'll issue a
human-friendly error message that indicates which resources are in a bad
state and how to recover their stack.
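A sketch of the shape a pending operation might take in the serialized deployment; the field and constant names here are illustrative, not the exact apitype:
```go
package apitype

// OperationType describes the kind of operation that was in flight.
type OperationType string

const (
	OperationCreating OperationType = "creating"
	OperationUpdating OperationType = "updating"
	OperationDeleting OperationType = "deleting"
	OperationReading  OperationType = "reading"
)

// ResourceStub stands in for the full serialized resource.
type ResourceStub struct {
	URN string `json:"urn"`
}

// Operation pairs the resource being operated on with the operation that
// had begun, but not completed, when the checkpoint was last written.
type Operation struct {
	Resource ResourceStub  `json:"resource"`
	Type     OperationType `json:"type"`
}
```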
* CR: Multi-line string literals, renaming in-flight -> pending
* CR: Add enum to apitype for operation type, also name status -> type for clarity
* Fix the yaml type
* Fix missed renames
* Add implementation for lifecycle_test.go
* Rebase against master
* Initial support for passing URLs to `new` and `up`
This PR adds initial support for `pulumi new` using Git under the covers
to manage Pulumi templates, providing the same experience as before.
You can now also optionally pass a URL to a Git repository, e.g.
`pulumi new [<url>]`, including subdirectories within the repository,
and arbitrary branches, tags, or commits.
The following commands result in the same behavior from the user's
perspective:
- `pulumi new javascript`
- `pulumi new https://github.com/pulumi/templates/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates/javascript`
To specify an arbitrary branch, tag, or commit:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates/javascript`
Branches and tags can include '/' separators, and `pulumi` will still
find the right subdirectory.
URLs to Gists are also supported, e.g.:
`pulumi new https://gist.github.com/justinvp/6673959ceb9d2ac5a14c6d536cb871a6`
If the specified subdirectory in the repository does not contain a
`Pulumi.yaml`, it will look for subdirectories within containing
`Pulumi.yaml` files, and prompt the user to choose a template, along the
lines of how `pulumi new` behaves when no template is specified.
The following commands result in the CLI prompting to choose a template:
- `pulumi new`
- `pulumi new https://github.com/pulumi/templates/templates`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates`
Of course, arbitrary branches, tags, or commits can be specified as well:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates`
This PR also includes initial support for passing URLs to `pulumi up`,
providing a streamlined way to deploy installable cloud applications
with Pulumi, without having to manage source code locally before doing
a deployment.
For example, `pulumi up https://github.com/justinvp/aws` can be used to
deploy a sample AWS app. The stack can be updated with different
versions, e.g.
`pulumi up https://github.com/justinvp/aws/tree/v2 -s <stack-to-update>`
Config values can optionally be passed via command line flags, e.g.
`pulumi up https://github.com/justinvp/aws -c aws:region=us-west-2 -c foo:bar=blah`
Gists can also be used, e.g.
`pulumi up https://gist.github.com/justinvp/62fde0463f243fcb49f5a7222e51bc76`
* Fix panic when hitting ^C from "choose template" prompt
* Add description to templates
When running `pulumi new` without specifying a template, include the template description along with the name in the "choose template" display.
```
$ pulumi new
Please choose a template:
aws-go A minimal AWS Go program
aws-javascript A minimal AWS JavaScript program
aws-python A minimal AWS Python program
aws-typescript A minimal AWS TypeScript program
> go A minimal Go program
hello-aws-javascript A simple AWS serverless JavaScript program
javascript A minimal JavaScript program
python A minimal Python program
typescript A minimal TypeScript program
```
* React to changes to the pulumi/templates repo.
We restructured the `pulumi/templates` repo to have all the templates in the root instead of in a `templates` subdirectory, so make the change here to no longer look for templates in `templates`.
This also fixes an issue around using `Depth: 1` that I found while testing this. When a named template is used, we attempt to clone or pull from the `pulumi/templates` repo to `~/.pulumi/templates`. Having it go in this well-known directory allows us to maintain previous behavior around allowing offline use of templates.
If we use `Depth: 1` for the initial clone, it will fail when attempting to pull when there are updates to the remote repository. Unfortunately, there's no built-in `--unshallow` support in `go-git`, and setting a larger `Depth` doesn't appear to help. There may be a workaround, but for now, if we're cloning the pulumi templates directory to `~/.pulumi/templates`, we won't use `Depth: 1`. For template URLs, we will continue to use `Depth: 1`, as we clone those to a temp directory (which gets deleted) that we'll never try to update.
* List available templates in help text
* Address PR Feedback
* Don't show "Installing dependencies" message for `up`
* Fix secrets handling
When prompting for config, if the existing stack value is a secret, keep it a secret and mask the prompt. If the template says it should be secret, make it a secret.
* Fix ${PROJECT} and ${DESCRIPTION} handling for `up`
Templates used with `up` should already have a filled-in project name and description, but if it's a `new`-style template, that has `${PROJECT}` and/or `${DESCRIPTION}`, be helpful and just replace these with better values.
* Fix stack handling
Add a bool `setCurrent` param to `requireStack` to control whether the current stack should be saved in workspace settings. For the `up <url>` case, we don't want to save. Also, split the `up` code into two separate functions: one for the `up <url>` case and another for the normal `up` case where you have workspace in your current directory. While we may be able to combine them back into a single function, right now it's a bit cleaner being separate, even with some small amount of duplication.
* Fix panic due to nil crypter
Lazily get the crypter only if needed inside `promptForConfig`.
* Embellish comment
* Harden isPreconfiguredEmptyStack check
Fix the code to check to make sure the URL specified on the command line matches the URL stored in the `pulumi:template` config value, and that the rest of the config from the stack satisfies the config requirements of the template.
Some time ago, we introduced the concept of the initialization error to
Pulumi (i.e., an error where the resource was successfully created but
failed to fully initialize). This was originally implemented in `Create`
and `Update` methods of the resource provider interface; when we
detected an initialization failure, we'd pack the live version of the
object into the error, and return that to the engine.
Omitted from this initial implementation was a similar semantics for
`Read`. There are many implications of this, but one of them is that a
`pulumi refresh` will erase any initialization errors that had
previously been observed, even if the initialization errors still exist
in the resource.
This commit will introduce the initialization error semantics to `Read`,
fixing this issue.
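A sketch of the semantics: when Read detects a created-but-unhealthy resource, it returns the live state alongside the failure instead of swallowing it. The error type and the health probe here are illustrative, not the engine's own:
```go
package provider

import (
	"fmt"
	"strings"
)

// State is a simplified stand-in for a resource's live state.
type State struct {
	ID      string
	Outputs map[string]interface{}
}

// InitError carries the live state of a resource that exists but failed
// to fully initialize (illustrative; the engine defines its own type).
type InitError struct {
	Reasons []string
	Live    *State
}

func (e *InitError) Error() string {
	return fmt.Sprintf("resource failed to initialize: %s",
		strings.Join(e.Reasons, "; "))
}

// read returns the live state even when initialization failed, so that a
// refresh preserves the recorded init errors instead of erasing them.
// checkHealth is a hypothetical provider-specific probe.
func read(id string, checkHealth func(*State) []string) (*State, error) {
	live := &State{ID: id, Outputs: map[string]interface{}{}}
	if reasons := checkHealth(live); len(reasons) > 0 {
		return live, &InitError{Reasons: reasons, Live: live}
	}
	return live, nil
}
```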
The belief is that this hides some complexity that we shouldn't be
exposing in the default case.
In order to filter these events from both the diff/progress display
and the resource change summary, we perform this filtering in
`pkg/engine`.
Fixes #1733.
* Emit reads for external resources when refreshing
Fixes pulumi/pulumi#1744. This commit educates the refresh source about
external resources. If a refresh source encounters a resource with the
External bit set, it'll send a Read event to the engine and the engine
will process it accordingly.
* CR: save last event channel instead of last event, style fixes
Our detection logic was ported from an existing NPM package. However,
our port was incorrect, which is unsurprising given the style of the
source code.
I have rewritten the logic to be more idiomatic Go, while keeping
the overall detection logic intact.
Our detection is still based on the presence of environment
variables.
Fixes: #1646
* Serialize SourceEvents coming from the refresh source
The engine requires that a source event coming from a source be "ready
to execute" at the moment that it is sent to the engine. Since the
refresh source sent all goal states eagerly through its source iterator,
the engine assumed that it was legal to execute them all in parallel and
did so. This is a problem for the snapshot, since the snapshot expects
to be in an order that is a legal topological ordering of the dependency
DAG.
This PR fixes the issue by sending refresh source events one-at-a-time
through the refresh source iterator, only unblocking to send the next
step as soon as the previous step completes.
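A sketch of the one-at-a-time handoff, with simplified stand-in types: each event carries a channel that the engine closes once the corresponding step has been committed, and the iterator blocks on it before producing the next goal.
```go
package deploy

import "context"

// refreshEvent carries one goal state plus a channel the engine closes
// once the corresponding step has been committed.
type refreshEvent struct {
	goal string
	done chan struct{}
}

type refreshIterator struct {
	goals  []string
	events chan *refreshEvent
}

// run sends events strictly one at a time: the next goal is not produced
// until the engine signals that the previous step completed, which keeps
// the snapshot in a legal topological order.
func (it *refreshIterator) run(ctx context.Context) {
	defer close(it.events)
	for _, goal := range it.goals {
		ev := &refreshEvent{goal: goal, done: make(chan struct{})}
		select {
		case it.events <- ev:
		case <-ctx.Done():
			return
		}
		select {
		case <-ev.done: // previous step committed; safe to continue
		case <-ctx.Done():
			return
		}
	}
}
```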
* Fix deadlock in refresh test
* Fix an issue where the engine "completed" steps too early
By signalling that a step is done before committing the step's results
to the snapshot, the engine was left with a race where dependent
resources could find themselves completely executed and committed before
a resource that they depend on has been committed.
Fixes pulumi/pulumi#1726
* Fix an issue with Replace steps at the end of a plan
If the last step that was executed successfully was a Replace, we could
end up in a situation where we unintentionally left the snapshot
invalid.
* Add a test
* CR: pass context.Context as first parameter to Iterate
* CR: null->nil
This is consistent with the behavior prior to the introduction of Read
steps. In order to avoid a breaking change we must do this check in the
engine itself, which causes a bit of a layering violation: because IDs
are marshaled as raw strings rather than PropertyValues, the engine must
check against the marshaled form of an unknown directly (i.e.
`plugin.UnknownStringValue`).
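A minimal sketch of the check. `plugin.UnknownStringValue` is the engine's marshaled sentinel for unknown strings; the wrapper function is illustrative:
```go
package engine

import "github.com/pulumi/pulumi/pkg/resource/plugin"

// normalizeReadID treats the marshaled unknown sentinel as "no concrete
// ID yet" during preview, mirroring the behavior that predates Read steps.
func normalizeReadID(id string) (string, bool) {
	if id == plugin.UnknownStringValue {
		return "", false
	}
	return id, true
}
```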
When calculating deletes, we will only issue a single delete step for a
particular URN. This is incorrect in the presence of pending deletes
that share URNs with a live resource if the pending deletes follow the
live resource in the checkpoint: instead of issuing a delete for
every resource with a particular URN, we will only issue deletes for
the pending deletes.
Before first-class providers, this was mostly benign: any remaining
resources could be deleted by re-running the destroy. With the
first-class provider changes, however, the provider for the undeleted
resources will be deleted, leaving the checkpoint in an invalid state.
These changes fix this issue by allowing the step generator to issue
multiple deletes for a single URN and add a test for this scenario.
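A sketch of the corrected selection, with a simplified stand-in for the checkpointed state:
```go
package deploy

// State is a simplified stand-in for a checkpointed resource.
type State struct {
	URN    string
	Delete bool // true for a pending delete from an earlier update
}

// deletesFor returns every checkpoint entry sharing the given URN, so the
// step generator can emit one delete step per matching resource rather
// than stopping at the first match.
func deletesFor(urn string, olds []*State) []*State {
	var matches []*State
	for _, s := range olds {
		if s.URN == urn {
			matches = append(matches, s)
		}
	}
	return matches
}
```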
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
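A sketch of the adjusted high-level interface; the types below are simplified stand-ins for the engine's richer property and result types:
```go
package plugin

// PropertyMap is a simplified stand-in for the engine's typed property bag.
type PropertyMap map[string]interface{}

// CheckFailure and DiffResult are simplified stand-ins as well.
type CheckFailure struct{ Property, Reason string }
type DiffResult struct{ Replace bool }

// Provider sketches the adjusted high-level provider plugin interface.
type Provider interface {
	// CheckConfig validates the configuration for this provider.
	CheckConfig(olds, news PropertyMap) (PropertyMap, []CheckFailure, error)
	// DiffConfig diffs old and new configuration for this provider.
	DiffConfig(olds, news PropertyMap) (DiffResult, error)
	// Configure now accepts a full PropertyMap instead of a string map.
	Configure(inputs PropertyMap) error
}
```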
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
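A sketch of the reference shape: a provider reference joins the provider's URN and ID, and during a preview the ID portion may be an unknown placeholder, which is what lets resources name providers that have not been physically created yet. The rendering and parsing below are illustrative:
```go
package providers

import (
	"fmt"
	"strings"
)

// Reference names a provider instance as "<urn>::<id>".
type Reference struct {
	URN string
	ID  string
}

func (r Reference) String() string { return r.URN + "::" + r.ID }

// ParseReference splits a provider reference into its URN and ID parts.
func ParseReference(s string) (Reference, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return Reference{}, fmt.Errorf("illegal provider reference %q", s)
	}
	return Reference{URN: s[:i], ID: s[i+2:]}, nil
}
```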
### Engine Changes
First-class provider support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.js SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
* Execute chains of steps in parallel
Fixes pulumi/pulumi#1624. Since register resource steps are known to be
ready to execute the moment the engine sees them, we can effectively
parallelize all incoming step chains. This commit adds the machinery
necessary to do so - namely a step executor and a plan executor.
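A sketch of the fan-out shape: independent step chains run in parallel while steps within a chain stay strictly sequential. This only shows the concurrency skeleton (using a mutex for the first error rather than the atomics mentioned in the CR notes); the real executor also handles diagnostics and chains arriving incrementally:
```go
package engine

import (
	"context"
	"sync"
)

// Step is the minimal unit of work for this sketch.
type Step interface {
	Apply(ctx context.Context) error
}

// executeChains runs independent step chains in parallel, bounding the
// degree of parallelism with a semaphore channel.
func executeChains(ctx context.Context, chains [][]Step, workers int) error {
	sem := make(chan struct{}, workers)
	var wg sync.WaitGroup
	var mu sync.Mutex
	var firstErr error

	for _, chain := range chains {
		wg.Add(1)
		sem <- struct{}{}
		go func(chain []Step) {
			defer wg.Done()
			defer func() { <-sem }()
			for _, s := range chain {
				if ctx.Err() != nil {
					return // cancelled
				}
				if err := s.Apply(ctx); err != nil {
					mu.Lock()
					if firstErr == nil {
						firstErr = err
					}
					mu.Unlock()
					return
				}
			}
		}(chain)
	}
	wg.Wait()
	if firstErr != nil {
		return firstErr
	}
	return ctx.Err()
}
```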
* Remove dead code
* CR: use atomic.Value to be explicit about what values are atomically loaded and stored
* CR: Initialize atomics to 'false'
* Add locks around data structures in event callbacks
* CR: Add DegreeOfParallelism method on Options and add comment on select in Execute
* CR: Use context.Context for cancellation instead of cancel.Source
* CR: improve cancellation
* Rebase against master: execute read steps in parallel
* Please gometalinter
* CR: Inline a few methods in stepExecutor
* CR: Feedback and bug fixes
1. Simplify step_executor.go by 'bubbling' up errors as far as possible
and reporting diagnostics and cancellation in one place
2. Fix a bug where the CLI claimed that a plan was cancelled even if it
wasn't (it just has an error)
* Comments
* CR: Add comment around problematic select, move workers.Add outside of goroutine, return instead of break
We retain a few tests on the RunBuild plan, with `typescript` set to
false in the runtime options, but for the general case, we remove the
build steps and custom entry points for our programs.
This change lets us set runtime-specific options in Pulumi.yaml, which
will flow as arguments to the language hosts. We then teach the nodejs
host that when `typescript` is set to `true`, it should load ts-node
before calling into user code. This allows using TypeScript natively,
without an explicit compile step outside of Pulumi.
This works even when a tsconfig.json file is not present in the
application and should provide a nicer inner loop for folks writing
TypeScript. (I'm pretty sure everyone has run into the "but I fixed
that bug! Why isn't it getting picked up? Oh, I forgot to run tsc"
problem.)
Fixes #958
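For reference, a Pulumi.yaml along these lines enables the behavior; treat the exact schema of the options block as a best-effort illustration:
```yaml
name: my-project
description: A minimal TypeScript program
runtime:
  name: nodejs
  options:
    typescript: true
```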
The `range` operator returns array indexes that are 0-based. That's fine in code, but since we print these errors for humans, the numbering looks strange.
e.g.
```
error: 7 errors occurred:
0) resource 'urn:pulumi:timer-ex::timer::pulumi:pulumi:Stack::timer-timer-ex' is from a different stack (timer-ex != timer-import)
1) resource 'urn:pulumi:timer-ex::timer:☁️timer:Timer::test-timer' is from a different stack (timer-ex != timer-import)
...
```
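The fix amounts to offsetting the loop index when rendering; a small runnable sketch:
```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	errs := []error{
		errors.New("resource 'timer-timer-ex' is from a different stack"),
		errors.New("resource 'test-timer' is from a different stack"),
	}
	fmt.Printf("error: %d errors occurred:\n", len(errs))
	for i, err := range errs {
		// range indexes are 0-based; humans expect numbering from 1.
		fmt.Printf("    %d) %v\n", i+1, err)
	}
}
```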
* Protobuf changes to record dependencies for read resources
* Add a number of tests for read resources, especially around replacement
* Place read resources in the snapshot with "external" bit set
Fixes pulumi/pulumi#1521. This commit introduces two new step ops: Read
and ReadReplacement. The engine generates Read and ReadReplacement steps
when servicing ReadResource RPC calls from the language host.
* Fix an omission of OpReadReplace from the step list
* Rebase against master
* Transition to use V2 Resources by default
* Add a semantic "relinquish" operation to the engine
If the engine observes that a resource is read and also that the
resource exists in the snapshot as a non-external resource, it will not
delete the resource if the IDs of the old and new resources match.
* Typo fix
* CR: add missing comments, DeserializeDeployment -> DeserializeDeploymentV2, ID check