These changes simplify a couple of aspects of plan execution in the hopes of
clarifying some responsibilities and preparing the code for changes to the
implementation of refresh.
1. All aspects of plan execution are now managed by the plan executor,
which is no longer exported. Instead, it is abstracted behind
`Plan.Execute`.
2. The plan executor's error-handling and reporting have been unified
and simplified somewhat.
* Log errors coming from the language host
Similar to pulumi/pulumi#1762; fixes pulumi/pulumi#1775. The language
host can fail without issuing any diagnostics, and if the engine does
not log the error, it is very unclear to the user what went wrong.
* CR feedback
The plugin host can ask the language host to provide a list of
resource plugins that it thinks will be necessary for use at
deployment time, so they can be eagerly loaded.
In Node.js (the only language host that implements this RPC), this works
by walking the directory tree rooted at the CWD of the project, looking
for package.json files, parsing them, and checking whether they have a
marker property set. If they do, we record information about them, which
we return at the end of the walk.
If there is *any* error, the entire operation fails. We've seen a
bunch of cases where this happens:
- Broken symlinks written by some editors as part of autosave.
- Access denied errors when part of the tree is unwalkable (Eric ran
into this on Windows when he had a Pulumi program at the root of his
file system).
- Recursive symlinks leading to errors when trying to walk down the
infinite chain. (See #1634 for one such example).
The very frustrating thing about this is that when you hit an error,
it's not clear what is going on, and fixing it can be non-trivial. Even
worse, in the normal case, all of these plugins are already installed
and could be loaded by the host (in the common case, plugins are
installed as a post install step when you run `npm install`) so if we
simply didn't do this check at all, things would work great.
This change does two things:
1. It no longer stops at the first error we hit when discovering
plugins; instead, we record the error and continue (see the sketch
below).
2. It no longer fails the overall operation if there was an error.
Instead, we return to the host what we have, which may be an incomplete
view of the world, and we glog the errors we did discover in case we
ever need them for diagnostics.
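Below is a minimal sketch of the error-tolerant walk in Go, assuming a
hypothetical `pluginInfo` type and a simplified `package.json` shape with a
`pulumi` marker property; the real language host's types and marker check
differ.
```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// pluginInfo is a stand-in for the plugin description the language host returns.
type pluginInfo struct {
	Name    string
	Version string
}

// getPluginsFromDir walks root looking for package.json files that carry the
// Pulumi marker property. Errors are recorded and skipped rather than
// aborting the walk, so a broken symlink or an unreadable subtree no longer
// fails the whole operation.
func getPluginsFromDir(root string) ([]pluginInfo, []error) {
	var plugins []pluginInfo
	var errs []error
	_ = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			// Record the failure (broken symlink, access denied, ...) and keep going.
			errs = append(errs, fmt.Errorf("failed to walk %s: %w", path, err))
			return nil
		}
		if info.IsDir() || info.Name() != "package.json" {
			return nil
		}
		b, rerr := os.ReadFile(path)
		if rerr != nil {
			errs = append(errs, rerr)
			return nil
		}
		var pkg struct {
			Name    string          `json:"name"`
			Version string          `json:"version"`
			Pulumi  json.RawMessage `json:"pulumi"` // simplified marker property
		}
		if jerr := json.Unmarshal(b, &pkg); jerr != nil {
			errs = append(errs, jerr)
			return nil
		}
		if pkg.Pulumi != nil {
			plugins = append(plugins, pluginInfo{Name: pkg.Name, Version: pkg.Version})
		}
		return nil
	})
	return plugins, errs
}

func main() {
	plugins, errs := getPluginsFromDir(".")
	fmt.Printf("found %d plugin(s), hit %d error(s)\n", len(plugins), len(errs))
}
```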
I believe that most of this code gets deleted in the long term anyway.
I expect we will move to a model where the engine faults in the plugin
(downloading it if needed) when a request for the plugin arrives. But
for now, we shouldn't block normal operations just because we couldn't
answer a question with full fidelity.
Fixes #1478
A checkpoint write is unnecessary if it does not change the semantics of
the data currently stored in the checkpoint. We currently perform
unnecessary checkpoint writes in two cases:
- Same steps where no aspect of the resource's state has changed
- Replace steps, which exist solely for display purposes
The former case is particularly bothersome, as it is rather common to
run updates, especially in CI, that consist largely or entirely of these
same steps.
These changes eliminate the checkpoint writes we perform in these two
cases. Some care is needed to ensure that we continue to write the
checkpoint in the case of same steps that do represent meaningful
changes (e.g. changes to a resource's output properties or
dependencies).
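A minimal sketch of the "is this Same step meaningful?" check, assuming a
simplified resource state; the engine's actual state type carries more fields
than outputs, dependencies, and protect.
```go
package main

import (
	"fmt"
	"reflect"
)

// resourceState is a simplified stand-in for the engine's resource state.
type resourceState struct {
	Outputs      map[string]interface{}
	Dependencies []string
	Protect      bool
}

// sameStepChangedState reports whether a Same step represents a meaningful
// change to the checkpoint. If nothing the checkpoint records has changed,
// the write can be skipped entirely.
func sameStepChangedState(old, updated *resourceState) bool {
	if !reflect.DeepEqual(old.Outputs, updated.Outputs) {
		return true
	}
	if !reflect.DeepEqual(old.Dependencies, updated.Dependencies) {
		return true
	}
	return old.Protect != updated.Protect
}

func main() {
	a := &resourceState{Outputs: map[string]interface{}{"url": "https://example.com"}}
	b := &resourceState{Outputs: map[string]interface{}{"url": "https://example.com"}}
	fmt.Println(sameStepChangedState(a, b)) // false: skip the checkpoint write
}
```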
Fixes #1769.
The glog package forces the use of Go's underlying flag package, which
Cobra does not use. To work around this, we had a complicated dance:
defining flags in multiple places, calling flag.Parse explicitly, and
then stomping values in the flag package with values we got from Cobra.
Because we ended up parsing parts of the command line twice, each with
a different set of semantics, we ended up with bad UX in some
cases. For example:
`$ pulumi -v=10 --logflow update`
Would fail with an error message that looked nothing like normal CLI
errors, whereas:
`$ pulumi -v=10 update --logflow`
Would behave as you expect. To address this, we now do two things:
- We never call flag.Parse() anymore. Whacking the flags with values we
got from Cobra is sufficient for what we care about (see the sketch
below).
- We use a forked copy of glog which does not complain when
flag.Parse() is not called before logging.
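A hedged sketch of the new wiring, assuming a hypothetical `initLogging`
helper; the real CLI wires more flags and uses the forked glog, but the
essential move is calling `flag.Set` with the values Cobra parsed for us
rather than ever calling `flag.Parse`.
```go
package main

import (
	"flag"
	"strconv"

	"github.com/golang/glog"
	"github.com/spf13/cobra"
)

// initLogging copies the verbosity that Cobra parsed into the flags glog
// registered with the standard flag package. We never call flag.Parse();
// setting the flag values directly is enough for glog to pick them up
// (the forked glog no longer complains that flag.Parse was not called).
func initLogging(verbose int, logToStderr bool) {
	if logToStderr {
		_ = flag.Set("logtostderr", "true")
	}
	if verbose > 0 {
		_ = flag.Set("v", strconv.Itoa(verbose))
	}
}

func main() {
	var verbose int
	var logToStderr bool
	cmd := &cobra.Command{
		Use: "pulumi",
		PersistentPreRun: func(cmd *cobra.Command, args []string) {
			initLogging(verbose, logToStderr)
		},
		Run: func(cmd *cobra.Command, args []string) {
			glog.V(3).Infof("verbose logging enabled")
		},
	}
	cmd.PersistentFlags().IntVarP(&verbose, "verbose", "v", 0, "Enable verbose logging, e.g. -v=9")
	cmd.PersistentFlags().BoolVar(&logToStderr, "logtostderr", false, "Log to stderr instead of files")
	_ = cmd.Execute()
}
```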
Fixes #301. Fixes #710. Fixes #968.
1. 'readID' was never assigned and so was always the default value,
leading the refresh source to believe a resource had been deleted.
2. The refresh source could hang when a resource is deleted.
The plan executor assumed that the step generator was responsible for
logging its own diagnostics, which it sort of is, but it does not log
the majority of the diagnostics that it produces. This commit logs all
errors coming out of step generation so that we don't unintentionally
drop errors.
* Add a list of in-flight operations to the deployment
This commit augments 'DeploymentV2' with a list of operations that are
currently in flight. This information is used by the engine to keep
track of whether or not a particular deployment is in a valid state.
The SnapshotManager is responsible for inserting and removing operations
from the in-flight operation list. When the engine registers an intent
to perform an operation, SnapshotManager inserts an Operation into this
list and saves it to the snapshot. When an operation completes, the
SnapshotManager removes it from the snapshot. From this, the engine can
infer that if it ever sees a deployment with pending operations, the
Pulumi CLI must have crashed or otherwise abnormally terminated before
seeing whether or not an operation completed successfully.
To remedy this state, this commit also adds code to 'pulumi stack
import' that clears all pending operations from a deployment, as well as
code to plan generation that will reject any deployments that have
pending operations present.
At the CLI level, if we see that we are in a state where pending
operations were in-flight when the engine died, we'll issue a
human-friendly error message that indicates which resources are in a bad
state and how to recover their stack.
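A simplified sketch of the pending-operations bookkeeping, with hypothetical
`snapshotManager`, `Operation`, and `save` shapes standing in for the real
SnapshotManager and apitype definitions.
```go
package main

import (
	"fmt"
	"sync"
)

// OperationType records the kind of operation that was in flight.
type OperationType string

const (
	OperationCreating OperationType = "creating"
	OperationUpdating OperationType = "updating"
	OperationDeleting OperationType = "deleting"
)

// Operation pairs a resource URN with the operation we began against it.
type Operation struct {
	URN  string
	Type OperationType
}

// snapshotManager tracks pending operations and persists the snapshot around
// every mutation, so a crash leaves evidence of what was in flight.
type snapshotManager struct {
	mu      sync.Mutex
	pending []Operation
	save    func(pending []Operation) error // persists the checkpoint
}

// BeginMutation registers the intent to perform op and writes the snapshot
// before the operation starts.
func (sm *snapshotManager) BeginMutation(op Operation) error {
	sm.mu.Lock()
	defer sm.mu.Unlock()
	sm.pending = append(sm.pending, op)
	return sm.save(sm.pending)
}

// EndMutation removes op once it completes and writes the snapshot again.
// Anything still pending after a crash indicates the CLI died mid-operation.
func (sm *snapshotManager) EndMutation(op Operation) error {
	sm.mu.Lock()
	defer sm.mu.Unlock()
	for i, p := range sm.pending {
		if p == op {
			sm.pending = append(sm.pending[:i], sm.pending[i+1:]...)
			break
		}
	}
	return sm.save(sm.pending)
}

func main() {
	sm := &snapshotManager{save: func(pending []Operation) error {
		fmt.Printf("checkpoint written with %d pending operation(s)\n", len(pending))
		return nil
	}}
	op := Operation{URN: "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::my-bucket", Type: OperationCreating}
	_ = sm.BeginMutation(op) // before calling the provider
	_ = sm.EndMutation(op)   // once the provider call returns
}
```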
* CR: Multi-line string literals, renaming in-flight -> pending
* CR: Add enum to apitype for operation type, also rename status -> type for clarity
* Fix the yaml type
* Fix missed renames
* Add implementation for lifecycle_test.go
* Rebase against master
* Initial support for passing URLs to `new` and `up`
This PR adds initial support for `pulumi new` using Git under the covers
to manage Pulumi templates, providing the same experience as before.
You can now also optionally pass a URL to a Git repository, e.g.
`pulumi new [<url>]`, including subdirectories within the repository,
and arbitrary branches, tags, or commits.
The following commands result in the same behavior from the user's
perspective:
- `pulumi new javascript`
- `pulumi new https://github.com/pulumi/templates/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates/javascript`
To specify an arbitrary branch, tag, or commit:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates/javascript`
Branches and tags can include '/' separators, and `pulumi` will still
find the right subdirectory.
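One possible way to disambiguate a ref that contains '/' from the template
subdirectory, assuming the set of remote ref names is available (e.g. from a
ls-remote listing); `splitRefAndPath` is a hypothetical helper, not the CLI's
actual implementation.
```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// splitRefAndPath splits the URL segments following ".../tree/" into a ref
// and a template subdirectory. Because branch and tag names may themselves
// contain '/', we try the longest candidate ref first and fall back to
// shorter prefixes until one matches a known remote ref.
func splitRefAndPath(segments []string, knownRefs map[string]bool) (ref, subdir string, ok bool) {
	for i := len(segments); i > 0; i-- {
		candidate := strings.Join(segments[:i], "/")
		if knownRefs[candidate] {
			return candidate, path.Join(segments[i:]...), true
		}
	}
	return "", "", false
}

func main() {
	refs := map[string]bool{"master": true, "release/1.0": true} // e.g. from `git ls-remote`
	ref, subdir, _ := splitRefAndPath([]string{"release", "1.0", "templates", "javascript"}, refs)
	fmt.Println(ref, subdir) // release/1.0 templates/javascript
}
```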
URLs to Gists are also supported, e.g.:
`pulumi new https://gist.github.com/justinvp/6673959ceb9d2ac5a14c6d536cb871a6`
If the specified subdirectory in the repository does not contain a
`Pulumi.yaml`, it will look for subdirectories within it containing
`Pulumi.yaml` files, and prompt the user to choose a template, along the
lines of how `pulumi new` behaves when no template is specified.
The following commands result in the CLI prompting to choose a template:
- `pulumi new`
- `pulumi new https://github.com/pulumi/templates/templates`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates`
Of course, arbitrary branches, tags, or commits can be specified as well:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates`
This PR also includes initial support for passing URLs to `pulumi up`,
providing a streamlined way to deploy installable cloud applications
with Pulumi, without having to manage source code locally before doing
a deployment.
For example, `pulumi up https://github.com/justinvp/aws` can be used to
deploy a sample AWS app. The stack can be updated with different
versions, e.g.
`pulumi up https://github.com/justinvp/aws/tree/v2 -s <stack-to-update>`
Config values can optionally be passed via command line flags, e.g.
`pulumi up https://github.com/justinvp/aws -c aws:region=us-west-2 -c foo:bar=blah`
Gists can also be used, e.g.
`pulumi up https://gist.github.com/justinvp/62fde0463f243fcb49f5a7222e51bc76`
* Fix panic when hitting ^C from "choose template" prompt
* Add description to templates
When running `pulumi new` without specifying a template, include the template description along with the name in the "choose template" display.
```
$ pulumi new
Please choose a template:
aws-go A minimal AWS Go program
aws-javascript A minimal AWS JavaScript program
aws-python A minimal AWS Python program
aws-typescript A minimal AWS TypeScript program
> go A minimal Go program
hello-aws-javascript A simple AWS serverless JavaScript program
javascript A minimal JavaScript program
python A minimal Python program
typescript A minimal TypeScript program
```
* React to changes to the pulumi/templates repo.
We restructured the `pulumi/templates` repo to have all the templates in the root instead of in a `templates` subdirectory, so make the change here to no longer look for templates in `templates`.
This also fixes an issue around using `Depth: 1` that I found while testing this. When a named template is used, we attempt to clone or pull from the `pulumi/templates` repo to `~/.pulumi/templates`. Having it go in this well-known directory allows us to maintain previous behavior around allowing offline use of templates. If we use `Depth: 1` for the initial clone, it will fail when attempting to pull when there are updates to the remote repository. Unfortunately, there's no built-in `--unshallow` support in `go-git` and setting a larger `Depth` doesn't appear to help. There may be a workaround, but for now, if we're cloning the pulumi templates directory to `~/.pulumi/templates`, we won't use `Depth: 1`. For template URLs, we will continue to use `Depth: 1` as we clone those to a temp directory (which gets deleted) that we'll never try to update.
* List available templates in help text
* Address PR Feedback
* Don't show "Installing dependencies" message for `up`
* Fix secrets handling
When prompting for config, if the existing stack value is a secret, keep it a secret and mask the prompt. If the template says it should be secret, make it a secret.
* Fix ${PROJECT} and ${DESCRIPTION} handling for `up`
Templates used with `up` should already have a filled-in project name and description, but if it's a `new`-style template that has `${PROJECT}` and/or `${DESCRIPTION}`, be helpful and just replace these with better values.
* Fix stack handling
Add a bool `setCurrent` param to `requireStack` to control whether the current stack should be saved in workspace settings. For the `up <url>` case, we don't want to save. Also, split the `up` code into two separate functions: one for the `up <url>` case and another for the normal `up` case where you have a workspace in your current directory. While we may be able to combine them back into a single function, right now it's a bit cleaner being separate, even with some small amount of duplication.
* Fix panic due to nil crypter
Lazily get the crypter only if needed inside `promptForConfig`.
* Embellish comment
* Harden isPreconfiguredEmptyStack check
Fix the code to check to make sure the URL specified on the command line matches the URL stored in the `pulumi:template` config value, and that the rest of the config from the stack satisfies the config requirements of the template.
If a resource's options bag does not specify `protect` or `provider`,
pull a default value from the resource's parent.
In order to allow a parent resource to specify providers for multiple
resource types, component resources now accept an optional map from
package name to provider instance. When a custom resource needs a
default provider from its parent, it checks its parent provider bag for
an entry under its package. If a component resource does not have a
provider bag, it pulls a default from its parent.
These changes also add a `parent` field to `InvokeOptions` s.t. calls to
invoke can use the same behavior as resource creation w.r.t. providers.
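An illustrative Go sketch of the lookup order described above; the real logic
lives in the language SDKs, and `resourceOptions`, `componentResource`, and
`providerResource` here are hypothetical stand-ins.
```go
package main

import "fmt"

// resourceOptions is an illustrative stand-in for the SDK's options bag.
type resourceOptions struct {
	Parent   *componentResource
	Protect  *bool
	Provider *providerResource
}

type componentResource struct {
	protect   bool
	providers map[string]*providerResource // keyed by package name
	parent    *componentResource
}

type providerResource struct{ pkg string }

// defaultOptions fills in protect and provider from the parent when the
// caller did not specify them explicitly.
func defaultOptions(opts *resourceOptions, pkg string) {
	if opts.Parent == nil {
		return
	}
	if opts.Protect == nil {
		p := opts.Parent.protect
		opts.Protect = &p
	}
	if opts.Provider == nil {
		opts.Provider = opts.Parent.provider(pkg)
	}
}

// provider looks up a provider for pkg; a component with no provider bag of
// its own pulls a default from its parent.
func (c *componentResource) provider(pkg string) *providerResource {
	if c == nil {
		return nil
	}
	if c.providers != nil {
		return c.providers[pkg]
	}
	return c.parent.provider(pkg)
}

func main() {
	awsProv := &providerResource{pkg: "aws"}
	parent := &componentResource{protect: true, providers: map[string]*providerResource{"aws": awsProv}}
	opts := &resourceOptions{Parent: parent}
	defaultOptions(opts, "aws")
	fmt.Println(*opts.Protect, opts.Provider == awsProv) // true true
}
```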
Fixes #1735 and #1736.
Some time ago, we introduced the concept of the initialization error to
Pulumi (i.e., an error where the resource was successfully created but
failed to fully initialize). This was originally implemented in `Create`
and `Update` methods of the resource provider interface; when we
detected an initialization failure, we'd pack the live version of the
object into the error, and return that to the engine.
Omitted from this initial implementation were similar semantics for
`Read`. There are many implications of this, but one of them is that a
`pulumi refresh` will erase any initialization errors that had
previously been observed, even if the initialization errors still exist
in the resource.
This commit will introduce the initialization error semantics to `Read`,
fixing this issue.
Many non-secrets are actually pretty high entropy, at least according
to `zxcvbn`. For example: "Hello, Pulumi Timers!" would actually cause
us to say: "this looks like a secret", much in the same way that
"correct horse battery staple" is high entropy according to that
package.
In addition to considering the entropy of the value, gosec (the linter
we copied this logic from) also considers the name of the value that
is being assigned to.
In that spirit, let's only do this check when the config key name
actually looks like it is something we'd want to warn the user about.
We use the same regular expression as `gosec`.
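A rough sketch of the gated check, where the key-name pattern approximates
the one gosec uses and `looksLikeSecret` is a placeholder for the
zxcvbn-based entropy heuristic.
```go
package main

import (
	"fmt"
	"regexp"
)

// keyLooksSecret matches config key names that suggest a credential; this
// approximates the pattern gosec uses for its hard-coded-credentials rule.
var keyLooksSecret = regexp.MustCompile(`(?i)passwd|pass|password|pwd|secret|token`)

// looksLikeSecret stands in for the entropy heuristic (zxcvbn in the CLI).
func looksLikeSecret(value string) bool {
	return len(value) >= 16 // placeholder; the real check scores entropy
}

// shouldWarnAboutPlaintext only consults the entropy heuristic when the key
// name itself looks credential-like, so an ordinary high-entropy value such
// as "Hello, Pulumi Timers!" no longer triggers the warning.
func shouldWarnAboutPlaintext(key, value string) bool {
	if !keyLooksSecret.MatchString(key) {
		return false
	}
	return looksLikeSecret(value)
}

func main() {
	fmt.Println(shouldWarnAboutPlaintext("greeting", "Hello, Pulumi Timers!"))    // false
	fmt.Println(shouldWarnAboutPlaintext("dbPassword", "hunter2hunter2hunter2")) // true
}
```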
Fixes #1732
The belief is that this hides some complexity that we shouldn't be
exposing in the default case.
In order to filter these events from both the diff/progress display
and the resource change summary, we perform this filtering in
`pkg/engine`.
Fixes #1733.
* Emit reads for external resources when refreshing
Fixes pulumi/pulumi#1744. This commit educates the refresh source about
external resources. If a refresh source encounters a resource with the
External bit set, it'll send a Read event to the engine and the engine
will process it accordingly.
* CR: save last event channel instead of last event, style fixes
Our existing logic was ported from an existing NPM package. However,
our port was incorrect, which is unsurprising given the style of the
source code.
I have rewritten the logic to be more idiomatic Go, while keeping
the overall detection logic intact.
Our detection is still based on the presence of environment
variables.
Fixes: #1646
* Serialize SourceEvents coming from the refresh source
The engine requires that a source event coming from a source be "ready
to execute" at the moment that it is sent to the engine. Since the
refresh source sent all goal states eagerly through its source iterator,
the engine assumed that it was legal to execute them all in parallel and
did so. This is a problem for the snapshot, since the snapshot expects
its resources to be in an order that is a legal topological ordering of
the dependency DAG.
This PR fixes the issue by sending refresh source events one at a time
through the refresh source iterator, only sending the next event once
the previous step has completed.
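An illustrative sketch of the one-at-a-time hand-off, using a hypothetical
`refreshIterator` rather than the engine's actual SourceIterator interface:
Next blocks until the previously returned event has been marked done.
```go
package main

import "fmt"

// refreshEvent is one goal state produced by the refresh source, paired with
// a channel the engine closes once the corresponding step has completed.
type refreshEvent struct {
	goal string // stand-in for the real goal state
	done chan struct{}
}

// Done is called once the step for this event has been committed.
func (ev *refreshEvent) Done() { close(ev.done) }

// refreshIterator hands out events one at a time: Next blocks until the
// previously returned event has been marked done, so the engine never sees
// two refresh steps as executable in parallel.
type refreshIterator struct {
	goals []string
	last  chan struct{}
}

func (it *refreshIterator) Next() *refreshEvent {
	if it.last != nil {
		<-it.last // wait for the previous step to complete
	}
	if len(it.goals) == 0 {
		return nil
	}
	ev := &refreshEvent{goal: it.goals[0], done: make(chan struct{})}
	it.goals = it.goals[1:]
	it.last = ev.done
	return ev
}

func main() {
	it := &refreshIterator{goals: []string{"res-a", "res-b", "res-c"}}
	for ev := it.Next(); ev != nil; ev = it.Next() {
		fmt.Println("executing refresh step for", ev.goal)
		ev.Done() // the engine marks the step done once it is committed
	}
}
```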
* Fix deadlock in refresh test
* Fix an issue where the engine "completed" steps too early
By signalling that a step is done before committing the step's results
to the snapshot, the engine was left with a race where dependent
resources could find themselves completely executed and committed before
a resource that they depend on has been committed.
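A simplified sketch of the commit-before-complete ordering, using
hypothetical `step` and `snapshotter` types rather than the engine's actual
interfaces.
```go
package main

import "fmt"

// step is a stand-in for a plan step; Apply performs the provider operation.
type step struct{ urn string }

func (s step) Apply() error { return nil }

// snapshotter is a stand-in for the snapshot manager's commit operation.
type snapshotter struct{}

func (snapshotter) Commit(s step) error {
	fmt.Println("checkpoint committed for", s.urn)
	return nil
}

// executeStep commits the step's results to the snapshot *before* signalling
// completion, so dependent resources cannot observe this step as done until
// the snapshot reflects it.
func executeStep(s step, snap snapshotter, done chan<- step) error {
	if err := s.Apply(); err != nil {
		return err
	}
	if err := snap.Commit(s); err != nil {
		return err
	}
	done <- s // only after the commit do we unblock dependents
	return nil
}

func main() {
	done := make(chan step, 1)
	_ = executeStep(step{urn: "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::site"}, snapshotter{}, done)
	fmt.Println("step completed:", (<-done).urn)
}
```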
Fixes pulumi/pulumi#1726
* Fix an issue with Replace steps at the end of a plan
If the last step that was executed successfully was a Replace, we could
end up in a situation where we unintentionally left the snapshot
invalid.
* Add a test
* CR: pass context.Context as first parameter to Iterate
* CR: null->nil
* Added dist target for make, will help with Homebrew
* Try to install go dependencies before building
* Make sure dep ensure is called before trying to build SDKs
* Removed dep ensure from dist initial step
This is consistent with the behavior prior to the introduction of Read
steps. In order to avoid a breaking change we must do this check in the
engine itself, which causes a bit of a layering violation: because IDs
are marshaled as raw strings rather than PropertyValues, the engine must
check against the marshaled form of an unknown directly (i.e.
`plugin.UnknownStringValue`).
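A tiny sketch of the check, assuming the engine-internal plugin package and
its UnknownStringValue sentinel (the import path reflects the repository
layout at the time).
```go
package main

import (
	"fmt"

	"github.com/pulumi/pulumi/pkg/resource/plugin"
)

// idIsUnknown reports whether a marshaled resource ID is actually the
// sentinel the engine uses for unknown values during preview. Because IDs
// cross the RPC boundary as raw strings, the check must compare against the
// marshaled sentinel directly.
func idIsUnknown(id string) bool {
	return id == plugin.UnknownStringValue
}

func main() {
	fmt.Println(idIsUnknown(plugin.UnknownStringValue)) // true: skip the provider call
	fmt.Println(idIsUnknown("i-0abc123"))               // false: read the live resource
}
```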
In #1341 we promoted a class of errors in fetching git metadata from
glog messages to warnings printed by the CLI, on the assumption that
when we got warnings here they would be actionable.
The major impact here is that when you are working in a repository
which does not have a remote set to GitHub (common if you have just
`git init`'d a repository for a new project), or you don't call your
remote `origin`, or you use some other code provider, we end up
printing a warning during every update.
This change does two things:
- Restructure the way we detect metadata so that we make progress where
we can. We bias towards returning some metadata even when we can't
determine the complete set of metadata.
- Use a multierror to track all the underlying failures from our
metadata probing and move it back to a glog message.
Overall, this feels like the right balance to me. We are retaining the
rich diagnostics information for when things go wrong, but we aren't
warning about common cases.
We could, of course, try to tighten our heuristics (e.g. don't warn if
we can't find a GitHub remote but do warn if we can't compute whether
the worktree is dirty), but it feels like that will be a game of
whack-a-mole over time, and when warnings do fire it's unlikely they
will be actionable.
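A sketch of the probing-with-multierror approach, where the `probes` slice
and `gitMetadata` struct are hypothetical; the point is that every probe
failure accumulates into a single error that goes to glog, while whatever
metadata we did manage to compute is still returned.
```go
package main

import (
	"errors"
	"fmt"

	"github.com/golang/glog"
	"github.com/hashicorp/go-multierror"
)

// gitMetadata is a stand-in for the update metadata we try to compute.
type gitMetadata struct {
	Commit string
	Remote string
	Dirty  bool
}

// detectGitMetadata fills in whatever it can and accumulates every probe
// failure into one multierror, which is logged via glog rather than surfaced
// as a CLI warning.
func detectGitMetadata(probes []func(*gitMetadata) error) gitMetadata {
	var md gitMetadata
	var result *multierror.Error
	for _, probe := range probes {
		if err := probe(&md); err != nil {
			result = multierror.Append(result, err)
		}
	}
	if err := result.ErrorOrNil(); err != nil {
		glog.V(3).Infof("errors detecting git metadata: %v", err)
	}
	return md
}

func main() {
	md := detectGitMetadata([]func(*gitMetadata) error{
		func(md *gitMetadata) error { md.Commit = "abc123"; return nil },
		func(md *gitMetadata) error { return errors.New("no GitHub remote named origin") },
	})
	fmt.Printf("%+v\n", md) // metadata we could compute, despite the probe failure
}
```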
Fixes #1443