Because of the change to include a stack's project as part of its
identity in the service, the names passed to StackReference now
require the project name as well.
Improve the error message shown when they do not include it.
This implements the new algorithm for deciding which resources must be
deleted due to a delete-before-replace operation.
We need to compute the set of resources that may be replaced by a
change to the resource under consideration. We do this by taking the
complete set of transitive dependents on the resource under
consideration and removing any resources that would not be replaced by
changes to their dependencies. We determine whether or not a resource
may be replaced by substituting unknowns for the input properties that may
change due to the deletion of the resources their values depend on, then
calling the resource provider's Diff method.
This is perhaps clearer when described by example. Consider the
following dependency graph:
A
__|__
B C
| _|_
D E F
In this graph, all of B, C, D, E, and F transitively depend on A. It may
be the case, however, that the changes any of those resources R would see
to its input properties if a resource on its path to A were deleted and
recreated would not actually cause R to be replaced. For example, the
edge from B to A may be a simple dependsOn edge, such that a change to
A does not actually influence any of B's input properties. In that case,
neither B nor D would need to be deleted before A could be deleted.
In order to make the above algorithm a reality, the resource monitor
interface has been updated to include a map that associates an input
property key with the list of resources that input property depends on.
Older clients of the resource monitor will leave this map empty, in
which case all input properties will be treated as depending on all
dependencies of the resource. This is probably overly conservative, but
it is less conservative than what we currently implement, and is
certainly correct.
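To make the pruning step concrete, here is a minimal sketch of the per-resource check under assumed, simplified types (the engine's real graph, property, and Diff types differ): every transitive dependent stays in the delete list only if this check reports that it would be replaced.
```
package example

// unknown is a stand-in for the engine's unknown property sentinel.
type unknown struct{}

// resourceNode is a simplified stand-in for a resource in the dependency graph.
type resourceNode struct {
	urn    string
	inputs map[string]interface{}
	// propertyDependencies associates each input property with the URNs of
	// the resources that property's value depends on (the new monitor map).
	propertyDependencies map[string][]string
}

// wouldBeReplaced reports whether deleting and recreating "deleted" could
// force this dependent to be replaced: we substitute unknowns for every input
// whose value depends on the deleted resource and ask the provider's Diff
// (represented here by the diffRequiresReplace callback) what it would do.
func wouldBeReplaced(dep *resourceNode, deleted string,
	diffRequiresReplace func(urn string, inputs map[string]interface{}) bool) bool {

	inputs := make(map[string]interface{}, len(dep.inputs))
	for key, value := range dep.inputs {
		inputs[key] = value
		for _, source := range dep.propertyDependencies[key] {
			if source == deleted {
				inputs[key] = unknown{}
				break
			}
		}
	}
	return diffRequiresReplace(dep.urn, inputs)
}
```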
The service also does this filtering on requests, because we'll need
to support older clients, but it would be nice if the CLI itself also
cleaned things up.
This change starts to use a stack's project name as part of its
identity when talking to the cloud backend, which the Pulumi Service
now supports.
When displaying or parsing stack names for the cloud backend, we now
support the following schemes:
`<stack-name>`
`<owner-name>/<stack-name>`
`<owner-name>/<project-name>/<stack-name>`
When the owner is not specified, we assume the currently logged-in
user (as we did before). When the project name is not specified, we
use the current project (and fail if we can't find a `Pulumi.yaml`).
Fixes #2039
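As an illustration only, a rough sketch of how these three forms might be resolved against the defaults described above (the function and its defaults are hypothetical, not the CLI's actual parsing code):
```
package example

import (
	"fmt"
	"strings"
)

// parseStackName resolves the three supported forms against the logged-in
// user and the current project; both defaults are supplied by the caller.
func parseStackName(name, currentUser, currentProject string) (owner, project, stack string, err error) {
	switch parts := strings.Split(name, "/"); len(parts) {
	case 1: // <stack-name>
		return currentUser, currentProject, parts[0], nil
	case 2: // <owner-name>/<stack-name>
		return parts[0], currentProject, parts[1], nil
	case 3: // <owner-name>/<project-name>/<stack-name>
		return parts[0], parts[1], parts[2], nil
	default:
		return "", "", "", fmt.Errorf("could not parse stack name '%s'", name)
	}
}
```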
We continue to do so when `--debug` has been passed, similar to how
these events are elided from the local display when you are not in a
debug context.
We're changing /account to redirect to /account/profile instead of
/account/tokens as the user profile settings are a more natural place
to land when going into account settings.
This commit changes the CLI to link
directly to /account/tokens, so that users following the URL printed
by the CLI no longer have to click "Access Tokens" to reach the
tokens page and get an access token.
When returning immediately from the loop, we are closing the `done` channel early. This signals that we have finished processing every engine event, but that isn't true, since some events may still be in flight in `recordEngineEvent`. (This could potentially lead to a race condition in the `diag.Sink` passed to the API client used to record the call.)
The event rendering goroutine in the remote backend was not properly
synchronizing with the goroutine that created it, and could continue
executing after its creator finished. I believe that this is the root
cause of #1850.
This option allows the user to override the file used to fetch and store
configuration information for a stack. It is available for the config,
destroy, logs, preview, refresh, and up commands.
Note that this option is not persistent: if it is not specified, the
stack's default configuration will be used. If an alternate config file
is used exclusively for a stack, it must be specified to all commands
that interact with that stack.
This option can be used to share plaintext configuration across multiple
stacks. It cannot be used to share secret configuration, as secrets are
associated with a particular stack and cannot be decrypted by other
stacks.
These changes add a new resource to the Pulumi SDK,
`pulumi.StackReference`, that represents a reference to another stack.
This resource has an output property, `outputs`, that contains the
complete set of outputs for the referenced stack. The Pulumi account
performing the deployment that creates a `StackReference` must have
access to the referenced stack or the call will fail.
This resource is implemented by a builtin provider managed by the engine.
This provider will be used for any custom resources and invokes inside
the `pulumi:pulumi` module. Currently this provider supports only the
`pulumi:pulumi:StackReference` resource.
Fixes #109.
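For illustration, a minimal program consuming a stack reference (shown with the Go SDK's equivalent `StackReference` API; the stack name and output key are made up):
```
package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Reference another stack by its fully qualified name. The account
		// running this deployment must have access to that stack.
		ref, err := pulumi.NewStackReference(ctx, "acmecorp/networking/prod", nil)
		if err != nil {
			return err
		}
		// Pull a single value out of the referenced stack's outputs and
		// re-export it from this stack.
		ctx.Export("upstreamVpcId", ref.GetOutput(pulumi.String("vpcId")))
		return nil
	})
}
```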
We run the same suite of changes that we did on gometalinter. This
ended up catching a few new issues, some of which were addressed and
some of which were baselined.
In preparation for some workspace restructuring, I decided to scratch a
few itches of my own in the code:
* Change project's RuntimeInfo field to just Runtime, to match the
serialized name in JSON/YAML.
* Eliminate the no-longer-used Context and NoDefaultIgnores fields on
project, and all of the associated legacy PPC-related code.
* Eliminate the no-longer-used IgnoreFile constant.
* Remove a bunch of "// nolint: lll" annotations, and simply format
the structures with comments on dedicated lines, to avoid overly
lengthy lines and lint suppressions.
* Mark Dependencies and InitErrors as `omitempty` in the JSON
serialization directives for CheckpointV2 files. This was done for
the YAML directives, but (presumably accidentally) omitted for JSON.
In the past, we had a mode where the CLI would upload the Pulumi
program, along with its contents, and the execution would happen remotely.
We've since stopped supporting that, but all the supporting code has
been left in the CLI.
This change removes the code we had to support the above case,
including the `pulumi archive` command, which was a debugging tool to
generate the archive we would have uploaded (which was helpful in the
past to understand why behavior differed between local execution and
remote execution.)
If you took the time to type out `--skip-preview`, we should have high
confidence that you don't want a preview and you're okay with the
operation just happening without a prompt.
Fixes #1448
* Remove TODO for issue since fixed in PPCs.
* Update issue reference to source
* Update comment wording
* Remove --ppc arg of stack init
* Remove PPC references in int. testing fx
* Remove vestigial PPC API types
* Introduce new metadata keys `vcs.repo`, `vcs.kind` and `vcs.owner` to keep the keys generic for any vcs. Expanded the git SSH regex to account for bitbucket's .org domain.
* Introduce new stack tags keys with the same theme of detecting the vcs.
* Enable gzip compression on the wire
This change allows the Pulumi API client to gzip requests sent to the
Pulumi service if requested using the 'GzipCompress' http option.
This change also sets the Accept-Encoding: gzip header for all requests
originating from the CLI, indicating to the service that it is free to
gzip responses. The 'readBody' function is used in the API client to
read a response's body, regardless of how it is encoded (sketched below).
Finally, this change sets GzipCompress: true on the
'PatchUpdateCheckpoint' API call, since JSON payloads in that call tend
to be large and it has become a performance bottleneck.
* spelling
* CR feedback:
1. Clarify and edit comments
2. Close the gzip.Reader when reading bodies
3. Log the payload size when logging compression ratios
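To illustrate the response handling described under "Enable gzip compression on the wire", here is a minimal sketch of a body reader that copes with either encoding (illustrative only, not the actual `readBody` implementation):
```
package example

import (
	"compress/gzip"
	"io"
	"io/ioutil"
	"net/http"
	"strings"
)

// readBody reads a response body whether or not the service chose to gzip it
// in response to our Accept-Encoding: gzip header.
func readBody(resp *http.Response) ([]byte, error) {
	defer resp.Body.Close()
	var reader io.Reader = resp.Body
	if strings.EqualFold(resp.Header.Get("Content-Encoding"), "gzip") {
		gz, err := gzip.NewReader(resp.Body)
		if err != nil {
			return nil, err
		}
		// Per the CR feedback above, the gzip reader is closed when we're done.
		defer gz.Close()
		reader = gz
	}
	return ioutil.ReadAll(reader)
}
```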
Last-minute events coming through the engine could cause the goroutine
iterating over engineEvents to write to displayEvents after it has
already been closed by the main goroutine.
This adds an option, --suppress-outputs, to many commands, to
avoid printing stack outputs for stacks that might contain sensitive
information in their outputs (like key material, and whatnot).
Fixes pulumi/pulumi#2028.
There is a seldom-used capability in our CLI, the ability to pass
-m to specify an update message, which we will then show prominently.
At the same time, we already scrape some interesting information from
the Git repo from which an update is performed, like the SHA hash,
committer, and author information. We explicitly didn't want to scrape
the entire message just in case someone put sensitive info inside of it.
It seems safe -- indeed, appealing -- to use just the title portion
as the default update message when no other has been provided (the
majority case). We'll work on displaying it in a better way, but this
strengthens our GitOps/CI/CD story.
Fixes pulumi/pulumi#2008.
It's not good practice to dirty up the stdout stream with messages
like this. In fact, I questioned whether we should be emitting
*anything* here, but the fact that this is often used in unattended
environments, coupled with the fact that it's easy to accidentally set
this and then wonder why `pulumi login` is silently returning, led
me to keep it in (for now, at least).
This change stops attempting to pop a web browser in non-interactive
sessions. Instead, the PULUMI_ACCESS_TOKEN environment variable must
be set. Otherwise, any attempt to use the CLI will yield
$ pulumi --non-interactive preview
error: PULUMI_ACCESS_TOKEN must be set for login during non-interactive CLI sessions
This is the behavior we want for Docker-based invocations of the CLI,
and so is part of pulumi/pulumi#1991.
API calls against the Pulumi service may start setting a new header,
`X-Pulumi-Warning`. The value of this header should be presented to
the user as a warning.
The Service will use this to provide additional information to the
user without the CLI having to know about every specific warning
path.
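A small, hypothetical sketch of the client-side handling this implies; the helper name and how the warning is rendered are assumptions, only the header name comes from the description above:
```
package example

import "net/http"

// warnFromResponse surfaces any service-provided warning to the user. How
// the warning is rendered is left to the caller-supplied warnf function.
func warnFromResponse(resp *http.Response, warnf func(format string, args ...interface{})) {
	if msg := resp.Header.Get("X-Pulumi-Warning"); msg != "" {
		warnf("%s", msg)
	}
}
```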
Since I was digging around over the weekend after the change to move
away from light black, and the impact it had on less important
information showing more prominently than it used to, I took a step
back and did a deeper tidying up of things. Another side goal of this
exercise was to be a little more respectful of terminal width; when
we could say things with fewer words, I did so.
* Stylize the preview/update summary differently, so that it stands
out as a section. Also highlight the total changes with bold -- it
turns out this has a similar effect to the bright white colorization,
just without the negative effects on e.g. white terminals.
* Eliminate some verbosity in the phrasing of change summaries.
* Make all heading sections stylized consistently. This includes
the color (bright magenta) and the vertical spacing (always a newline
separating headings). We were previously inconsistent on this (e.g.,
outputs were under "---outputs---"). Now the headings are:
Previewing (etc), Diagnostics, Outputs, Resources, Duration, and Permalink.
* Fix an issue where we'd parent things to "global" until the stack
object later showed up. Now we'll simply mock up a stack resource.
* Don't show messages like "no change" or "unchanged". Prior to the
light black removal, these faded into the background of the terminal.
Now they just clutter up the display. Similar to the elision of "*"
for OpSames in a prior commit, just leave these out. Now anything
that's written is actually a meaningful status for the user to note.
* Don't show the "3 info messages," etc. summaries in the Info column
while an update is ongoing. Instead, just show the latest line. This
is more respectful of width -- I often find that the important
messages scroll off the right of my screen before this change.
For discussion:
- I actually wonder if we should eliminate the summary
altogether and always just show the latest line. Or even
blank it out. The summary feels better suited for the
Diagnostics section, and the Status concisely tells us
how a resource's update ended up (failed, succeeded, etc).
- Similarly, I question the idea of showing only the "worst"
message. I'd vote for always showing the latest, and again
leaving it to the Status column for concisely telling the
user about the final state a resource ended up in.
* Stop prepending "info: " to every stdout/stderr message. It adds
no value, clutters up the display, and worsens horizontal usage.
* Lessen the verbosity of update headline messages, so that instead
of e.g. "Previewing update of stack 'x':", we now just say
"Previewing update (x):".
* Eliminate vertical whitespace in the Diagnostics section. Every
independent console.out previously was separated by an entire newline,
which made the section look cluttered to my eyes. These are just
streams of logs, there's no reason for the extra newlines.
* Colorize the resource headers in the Diagnostic section light blue.
Note that this will change various test baselines, which I will
update next. I didn't want those in the same commit.
Now that we're showing SpecUnimportant as regular text, the extra
"Previewing"/"Updating" line that we show really stands out as being
superfluous. For example, we previously said:
Updating stack 'docker-images'
Performing changes:
...
This change eliminates that second line, so we just have:
Updating stack 'docker-images':
...
* Have backend.ListStacks return a new StackSummary interface (sketched below)
* Update filestate backend to use new type
* Update httpstate backend to use new type
* Update commands to use new type
* lint
* Address PR feedback
* Lint
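A rough sketch of the shape such a summary interface might take; the method set here is an assumption for illustration, not copied from the backend package:
```
package example

import "time"

// StackSummary is a cheap, display-oriented view of a stack, returned by
// ListStacks instead of fully instantiated Stack objects.
type StackSummary interface {
	Name() string
	// LastUpdate returns nil when the stack has never been updated.
	LastUpdate() *time.Time
	// ResourceCount returns nil when the backend does not track a count.
	ResourceCount() *int
}
```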
In order to present a message with a link to a user's access tokens in
the service, we have to convert the API server's URL to the URL for
the console. We understand how to do this for servers that look like
our servers (i.e. their host name is `api.<whatever>`), but in general
we can't do this.
Rewrite the code such that we only print a message about how to find
your access tokens (and offer browser-based login) in cases where
we are able to construct a URL to the console.
Fixes #1930
* Revert "Don't show stack outputs when update fails (#1916)"
This reverts commit e3f89e82aa.
* Be more precise about printing outputs
This commit prints outputs only if they are known to be complete. This
avoids massive red diffs during previews and when component resources
fail to call registerResourceOutputs.
* CR: Clean up large boolean expression and comment
* CR: boolean compromise
* Retire pending deletions at start of plan
Instead of letting pending deletions pile up to be retired at the end of
a plan, this commit eagerly disposes of any pending deletions that were
pending at the end of the previous plan. This is a nice usability win
and also reclaims an invariant that at most one resource with a given
URN is live and at most one is pending deletion at any point in time.
* Rebase against master
* Fix a test issue arising from shared snapshots
* CR feedback
* plan -> replacement
* Use ephemeral statuses to communicate deletions
* Close cancellation source before closing events
The cancellation source logs cancellation messages to the engine event
channel, so we must first close the cancellation source before closing
the channel.
* CR: Fix race in shutdown of signal goroutine
* Don't show stack outputs when update fails
It is basically guaranteed that stack outputs are going to be
meaningless if a plan fails, so this commit doesn't display them if an
error has occurred.
* CR: expand on comment
It is possible for the inputs of a "same" resource to have changed:
even if the contents of the input bags are different, the resource's
provider may deem the physical change to be semantically irrelevant.
Previously `new` was operating under the assumption that it was always
going to be creating a new project/stack, and would always prompt for
these values. However, we want to be able to use `new` to pull down the
source for an existing stack. This change adds a `--stack` flag to `new`
that can be used to specify an existing stack. If the specified stack
already exists, we won't prompt for the project name/description, and
instead just use the existing stack's values. If `--stack` is specified,
but doesn't already exist, it will just use that as the stack name
(instead of prompting) when creating the stack. `new` also now handles
configuration like `up <url>`: if the stack is a preconfigured empty
stack (i.e. it was created/configured in the Pulumi Console via Pulumi
Button or New Project), we will use the existing stack's config without
prompting. Otherwise we will prompt for config, and just like `up
<url>`, we'll use the existing stack's config values as defaults when
prompting, or if the stack didn't exist, use the defaults from the
template.
Previously `up <url>`'s handling of the project name/description wasn't
correct: it would always automatically use the values from the template
without prompting. Now, just like `new`:
- When updating an existing stack, it will not prompt, and just use the
existing stack's values.
- When creating a new stack, it will prompt for the project
name/description, using the defaults from the template.
This PR consolidates some of the `new`/`up` implementation so it shares
code for this functionality. There's definitely opportunities for a lot
more code reuse, but that cleanup can happen down the line if/when we
have the cycles.
* Don't close eventChannel when panicking
The state of the system is completely unknown when panicking and in
general it's not safe to infer whether or not it is safe to close a
channel when in this state.
* CR feedback
* Spelling
This change implements the same preview behavior we have for
cloud stacks, in pkg/backend/httpbe, for local stacks, in
pkg/backend/filebe. This mostly required just refactoring bits
and pieces so that we can share more of the code, although it
does still entail quite a bit of redundancy. In particular, the
apply functions for both backends are now very close to being
unified, but still require enough custom logic to warrant
keeping them separate (for now...)
This simply refactors all the display logic out of the
pkg/backend/filestate package. This helps to gear us up to better unify
this logic between the filestate and httpstate backends.
Furthermore, this really ought to be in its own non-backend,
CLI-specific package, but I'm taking one step at a time here.
This change alters the login prompt slightly, so that it is more
obvious that alternative methods exist.
Before this change, we would say:
$ pulumi login
We need your Pulumi account to identify you.
Enter your access token from https://app.pulumi.com/account
or hit <ENTER> to log in using your browser :
After this change, we say this instead:
$ pulumi login
Manage your Pulumi stacks by logging in.
Run `pulumi login --help` for alternative login options.
Enter your access token from https://app.pulumi.com/account
or hit <ENTER> to log in using your browser :
Also updated the help text to advertise this a bit more prominently.
This renames the backend packages to more closely align with the
new direction for them. Namely, pkg/backend/cloud becomes
pkg/backend/httpstate and pkg/backend/local becomes
pkg/backend/filestate. This also helps to clarify that these are meant
to be around state management and so the upcoming refactoring required
to split out (e.g.) the display logic (amongst other things) will make
more sense, and we'll need better package names for those too.
As part of making the local backend more prominent, this changes a few
aspects of how you use it:
* Simplify how you log into a specific cloud; rather than
`pulumi login --cloud-url <url>`, just say `pulumi login <url>`.
* Use a proper URL scheme to denote local backend usage. We have chosen
file://, since the REST API backend is of course always https://.
This means that you can say `pulumi login file://~` to use the local
backend, with state files stored in your home directory. Similarly,
we support `pulumi login file://.` for the current directory.
* Add a --local flag to the login command, to make local logins a
bit easier in the common case of using your home directory. Just say
`pulumi login --local` and it is sugar for `pulumi login file://~`.
* Print the URL for the backend after logging in; for the cloud,
this is just the user's stacks page, and for the local backend,
this is the path to the user's stacks directory on disk.
* Tidy up the documentation for login a bit to be clearer about this.
This is part of pulumi/pulumi#1818.
The existing message put the URL to visit and some explanation text on
the same line, which makes it a little harder to copy only the URL
into a browser. If this extra text ends up being copied as well as the
URL it can lead to failures later, when we try to decode the query
string as part of the OAuth flow.
It's easy enough to fix by just putting the URL on its own line, split
off from the text itself.
Fixes #1832
This changes two things:
1) Per PR feedback, change the text to "N changes found during refresh."
2) Colorize the parenthetical "No resources will be modified" message;
it looked a little odd being plain colored.
This commit reverts most of #1853 and replaces it with functionally
identical logic, using the notion of status message-specific sinks.
In other words, where the original commit implemented ephemeral status
messages by adding an `isStatus` parameter to most of the logging
methods in pulumi/pulumi, this implements ephemeral status messages as a
parallel logging sink, which emits _only_ ephemeral status messages.
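A hedged sketch of the parallel-sink idea; the interface and field names are stand-ins rather than pulumi/pulumi's actual diag types:
```
package example

// Sink is a minimal stand-in for a diagnostics sink.
type Sink interface {
	Infof(format string, args ...interface{})
}

// engineSinks keeps persistent diagnostics and ephemeral status messages on
// separate sinks, instead of threading an isStatus flag through every call.
type engineSinks struct {
	diag   Sink // printed in the Diagnostics section at the end
	status Sink // shown only in the Info column while the update is running
}

// logStatus emits an ephemeral status message that will never appear in the
// final diagnostics output.
func (s engineSinks) logStatus(format string, args ...interface{}) {
	s.status.Infof(format, args...)
}
```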
The original commit message in that PR was:
> Allow log events to be marked "status" events
>
> This commit will introduce a field, IsStatus to LogRequest. A "status"
> logging event will be displayed in the Info column of the main
> display, but will not be printed out at the end, when resource
> operations complete.
>
> For example, for complex resource initialization, we'd like to display
> a series of intermediate results: [1/4] Service object created, for
> example. We'd like these to appear in the Info column, but not at the
> end, where they are not helpful to the user.
- Create all refresh steps before issuing any. This is important as the
state update loop expects all steps to exist.
- Check for cancellation later in the refresher.
This also fixes races in the SnapshotManager and the test journal that
could cause panics during cancellation.
The wording for refresh doesn't accurately convey that the operations
aren't actually mutating your resources, but instead are simply changing
your checkpoint state. This change (hopefully) helps in two ways:
First, put text just before the prompt:
Do you want to perform this refresh?
No resources will be modified as part of this refresh; just your stack's state will be.
Second, alter the summary ever-so-slightly, from:
info: 2 changes performed:
~ 2 resources updated
3 resources unchanged
to:
info: 2 changes refreshed:
~ 2 resources updated
3 resources unchanged
This reads just slightly better, and removes any sense of panic I might
have otherwise had that my refresh just did something wrong.
As I was in here, since I had to pass UpdateKind information to new
places, I cleaned up the situation where we had three mostly-similar
enums (but which actually diverged) and several areas where we were
using untyped strings for this same information. Now there's just one.
This fixes pulumi/pulumi#1551.
Replace the Source-based implementation of refresh with a phase that
runs as the first part of plan execution and rewrites the snapshot in-memory.
In order to fit neatly within the existing framework for resource operations,
these changes introduce a new kind of step, RefreshStep, to represent
refreshes. RefreshSteps operate similar to ReadSteps but do not imply that
the resource being read is not managed by Pulumi.
In addition to the refresh reimplementation, these changes incorporate those
from #1394 to run refresh in the integration test framework.
Fixes #1598.
Fixes pulumi/pulumi-terraform#165.
Contributes to #1449.
The fact that we include the `:config:` portion of a configuration
name when sending to the service is an artifact of history. It was
needed back when we had to support deploying using a new CLI
against a PPC that had older packages embedded in it.
We don't have that sort of problem anymore, so we can stop sending
this data.
A checkpoint write is unnecessary if it does not change the semantics of
the data currently stored in the checkpoint. We currently perform
unnecessary checkpoint writes in two cases:
- Same steps where no aspect of the resource's state has changed
- Replace steps, which exist solely for display purposes
The former case is particularly bothersome, as it is rather common to
run updates--especially in CI--that consist largely/entirely of these
same steps.
These changes eliminate the checkpoint writes we perform in these two
cases. Some care is needed to ensure that we continue to write the
checkpoint in the case of same steps that do represent meaningful
changes (e.g. changes to a resource's output properties or
dependencies).
Fixes #1769.
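A minimal, self-contained sketch of the predicate this implies, using stand-in types rather than the engine's real step and resource state types:
```
package example

import "reflect"

type stepOp string

const (
	opSame    stepOp = "same"
	opReplace stepOp = "replace"
)

// resourceState is a stand-in for the pieces of resource state whose changes
// are meaningful enough to require a checkpoint write.
type resourceState struct {
	Outputs      map[string]interface{}
	Dependencies []string
}

// mutates reports whether recording a step would change the semantics of the
// checkpoint: no-op same steps and display-only replace steps would not.
func mutates(op stepOp, olds, news *resourceState) bool {
	switch op {
	case opSame:
		// Only write if some meaningful aspect (outputs, dependencies, ...)
		// of the resource's state actually changed.
		return !reflect.DeepEqual(olds, news)
	case opReplace:
		// Replace steps exist solely for display purposes.
		return false
	default:
		return true
	}
}
```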
* Add a list of in-flight operations to the deployment
This commit augments 'DeploymentV2' with a list of operations that are
currently in flight (sketched below). This information is used by the engine to keep
track of whether or not a particular deployment is in a valid state.
The SnapshotManager is responsible for inserting and removing operations
from the in-flight operation list. When the engine registers an intent
to perform an operation, SnapshotManager inserts an Operation into this
list and saves it to the snapshot. When an operation completes, the
SnapshotManager removes it from the snapshot. From this, the engine can
infer that if it ever sees a deployment with pending operations, the
Pulumi CLI must have crashed or otherwise abnormally terminated before
seeing whether or not an operation completed successfully.
To remedy this state, this commit also adds code to 'pulumi stack
import' that clears all pending operations from a deployment, as well as
code to plan generation that will reject any deployments that have
pending operations present.
At the CLI level, if we see that we are in a state where pending
operations were in-flight when the engine died, we'll issue a
human-friendly error message that indicates which resources are in a bad
state and how to recover their stack.
* CR: Multi-line string literals, renaming in-flight -> pending
* CR: Add enum to apitype for operation type, also name status -> type for clarity
* Fix the yaml type
* Fix missed renames
* Add implementation for lifecycle_test.go
* Rebase against master
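A hedged sketch of what such an in-flight operation record might look like; the field and constant names are assumptions, not the actual apitype definitions:
```
package example

// OperationType records the kind of operation that was in flight.
type OperationType string

const (
	OperationTypeCreating OperationType = "creating"
	OperationTypeUpdating OperationType = "updating"
	OperationTypeDeleting OperationType = "deleting"
	OperationTypeReading  OperationType = "reading"
)

// PendingOperation pairs the resource being operated on with the operation
// type; a deployment that still contains any of these was interrupted.
type PendingOperation struct {
	ResourceURN string        `json:"urn"`
	Type        OperationType `json:"type"`
}
```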
* Initial support for passing URLs to `new` and `up`
This PR adds initial support for `pulumi new` using Git under the covers
to manage Pulumi templates, providing the same experience as before.
You can now also optionally pass a URL to a Git repository, e.g.
`pulumi new [<url>]`, including subdirectories within the repository,
and arbitrary branches, tags, or commits.
The following commands result in the same behavior from the user's
perspective:
- `pulumi new javascript`
- `pulumi new https://github.com/pulumi/templates/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates/javascript`
To specify an arbitrary branch, tag, or commit:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates/javascript`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates/javascript`
Branches and tags can include '/' separators, and `pulumi` will still
find the right subdirectory.
URLs to Gists are also supported, e.g.:
`pulumi new https://gist.github.com/justinvp/6673959ceb9d2ac5a14c6d536cb871a6`
If the specified subdirectory in the repository does not contain a
`Pulumi.yaml`, it will look for subdirectories within containing
`Pulumi.yaml` files, and prompt the user to choose a template, along the
lines of how `pulumi new` behaves when no template is specified.
The following commands result in the CLI prompting to choose a template:
- `pulumi new`
- `pulumi new https://github.com/pulumi/templates/templates`
- `pulumi new https://github.com/pulumi/templates/tree/master/templates`
- `pulumi new https://github.com/pulumi/templates/tree/HEAD/templates`
Of course, arbitrary branches, tags, or commits can be specified as well:
- `pulumi new https://github.com/pulumi/templates/tree/<branch>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<tag>/templates`
- `pulumi new https://github.com/pulumi/templates/tree/<commit>/templates`
This PR also includes initial support for passing URLs to `pulumi up`,
providing a streamlined way to deploy installable cloud applications
with Pulumi, without having to manage source code locally before doing
a deployment.
For example, `pulumi up https://github.com/justinvp/aws` can be used to
deploy a sample AWS app. The stack can be updated with different
versions, e.g.
`pulumi up https://github.com/justinvp/aws/tree/v2 -s <stack-to-update>`
Config values can optionally be passed via command line flags, e.g.
`pulumi up https://github.com/justinvp/aws -c aws:region=us-west-2 -c foo:bar=blah`
Gists can also be used, e.g.
`pulumi up https://gist.github.com/justinvp/62fde0463f243fcb49f5a7222e51bc76`
* Fix panic when hitting ^C from "choose template" prompt
* Add description to templates
When running `pulumi new` without specifying a template, include the template description along with the name in the "choose template" display.
```
$ pulumi new
Please choose a template:
aws-go A minimal AWS Go program
aws-javascript A minimal AWS JavaScript program
aws-python A minimal AWS Python program
aws-typescript A minimal AWS TypeScript program
> go A minimal Go program
hello-aws-javascript A simple AWS serverless JavaScript program
javascript A minimal JavaScript program
python A minimal Python program
typescript A minimal TypeScript program
```
* React to changes to the pulumi/templates repo.
We restructured the `pulumi/templates` repo to have all the templates in the root instead of in a `templates` subdirectory, so make the change here to no longer look for templates in `templates`.
This also fixes an issue around using `Depth: 1` that I found while testing this. When a named template is used, we attempt to clone or pull from the `pulumi/templates` repo to `~/.pulumi/templates`. Having it go in this well-known directory allows us to maintain previous behavior around allowing offline use of templates. If we use `Depth: 1` for the initial clone, it will fail when attempting to pull when there are updates to the remote repository. Unfortunately, there's no built-in `--unshallow` support in `go-git` and setting a larger `Depth` doesn't appear to help. There may be a workaround, but for now, if we're cloning the pulumi templates directory to `~/.pulumi/templates`, we won't use `Depth: 1`. For template URLs, we will continue to use `Depth: 1` as we clone those to a temp directory (which gets deleted) that we'll never try to update.
* List available templates in help text
* Address PR Feedback
* Don't show "Installing dependencies" message for `up`
* Fix secrets handling
When prompting for config, if the existing stack value is a secret, keep it a secret and mask the prompt. If the template says it should be secret, make it a secret.
* Fix ${PROJECT} and ${DESCRIPTION} handling for `up`
Templates used with `up` should already have a filled-in project name and description, but if it's a `new`-style template, that has `${PROJECT}` and/or `${DESCRIPTION}`, be helpful and just replace these with better values.
* Fix stack handling
Add a bool `setCurrent` param to `requireStack` to control whether the current stack should be saved in workspace settings. For the `up <url>` case, we don't want to save. Also, split the `up` code into two separate functions: one for the `up <url>` case and another for the normal `up` case where you have workspace in your current directory. While we may be able to combine them back into a single function, right now it's a bit cleaner being separate, even with some small amount of duplication.
* Fix panic due to nil crypter
Lazily get the crypter only if needed inside `promptForConfig`.
* Embellish comment
* Harden isPreconfiguredEmptyStack check
Fix the code to check to make sure the URL specified on the command line matches the URL stored in the `pulumi:template` config value, and that the rest of the config from the stack satisfies the config requirements of the template.
* Serialize SourceEvents coming from the refresh source
The engine requires that a source event coming from a source be "ready
to execute" at the moment that it is sent to the engine. Since the
refresh source sent all goal states eagerly through its source iterator,
the engine assumed that it was legal to execute them all in parallel and
did so. This is a problem for the snapshot, since the snapshot expects
to be in an order that is a legal topological ordering of the dependency
DAG.
This PR fixes the issue by sending refresh source events one at a time
through the refresh source iterator, unblocking to send the next event
only once the previous step completes (sketched below).
* Fix deadlock in refresh test
* Fix an issue where the engine "completed" steps too early
By signalling that a step is done before committing the step's results
to the snapshot, the engine was left with a race where dependent
resources could find themselves completely executed and committed before
a resource that they depend on has been committed.
Fixes pulumi/pulumi#1726
* Fix an issue with Replace steps at the end of a plan
If the last step that was executed successfully was a Replace, we could
end up in a situation where we unintentionally left the snapshot
invalid.
* Add a test
* CR: pass context.Context as first parameter to Iterate
* CR: null->nil
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.
### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.
These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
or unknown properties. This is necessary because existing plugins
only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
values are known or "must replace" if any configuration value is
unknown. The justification for this behavior is given
[here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
configures the provider plugin if all config values are known. If any
config value is unknown, the underlying plugin is not configured and
the provider may only perform `Check`, `Read`, and `Invoke`, all of
which return empty results. We justify this behavior because it is
only possible during a preview and provides the best experience we
can manage with the existing gRPC interface.
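A rough sketch of the adjusted high-level interface described above, using placeholder types; the real plugin package's signatures differ in detail:
```
package example

// PropertyMap, CheckFailure, and DiffResult stand in for the engine's real
// property and diff types.
type (
	PropertyMap  map[string]interface{}
	CheckFailure struct{ Property, Reason string }
	DiffResult   struct{ ReplaceKeys []string }
)

// Provider is the high-level provider plugin interface with the two new
// configuration methods added by these changes.
type Provider interface {
	// CheckConfig validates the provider's configuration.
	CheckConfig(olds, news PropertyMap) (PropertyMap, []CheckFailure, error)
	// DiffConfig reports whether a configuration change requires replacing
	// the provider (and therefore the resources it manages).
	DiffConfig(olds, news PropertyMap) (DiffResult, error)
	// Configure now accepts a PropertyMap rather than a flat string map.
	Configure(inputs PropertyMap) error
}
```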
### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.
All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.
Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
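A hedged sketch of a provider reference as described above; the `<urn>::<id>` textual form used here is an assumption for illustration:
```
package example

import (
	"fmt"
	"strings"
)

// ProviderReference identifies a particular provider resource instance by the
// combination of its URN and its ID. During a preview the ID may still be an
// unknown placeholder, since the provider has not yet been physically created.
type ProviderReference struct {
	URN string
	ID  string
}

func (r ProviderReference) String() string {
	return r.URN + "::" + r.ID
}

// parseProviderReference splits a reference back into its URN and ID parts.
func parseProviderReference(s string) (ProviderReference, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return ProviderReference{}, fmt.Errorf("malformed provider reference %q", s)
	}
	return ProviderReference{URN: s[:i], ID: s[i+2:]}, nil
}
```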
### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
provider plugins. It must be possible to add providers from a stack's
checkpoint to this map and to register new/updated providers during
the execution of a plan in response to CRUD operations on provider
resources.
- In order to support updating existing stacks using existing Pulumi
programs that may not explicitly instantiate providers, the engine
must be able to manage the "default" providers for each package
referenced by a checkpoint or Pulumi program. The configuration for
a "default" provider is taken from the stack's configuration data.
The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").
The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.
During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.
While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.
### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
a particular provider instance to manage a `CustomResource`'s CRUD
operations.
- A new type, `InvokeOptions`, can be used to specify options that
control the behavior of a call to `pulumi.runtime.invoke`. This type
includes a `provider` field that is analogous to
`ResourceOptions.provider`.
This change lets us set runtime-specific options in Pulumi.yaml, which
will flow as arguments to the language hosts. We then teach the nodejs
host that when `typescript` is set to `true`, it should load
ts-node before calling into user code. This allows using typescript
natively without an explicit compile step outside of Pulumi.
This works even when a tsconfig.json file is not present in the
application and should provide a nicer inner loop for folks writing
typescript (I'm pretty sure everyone has run into the "but I fixed
that bug! Why isn't it getting picked up? Oh, I forgot to run tsc"
problem.)
Fixes #958
* Protobuf changes to record dependencies for read resources
* Add a number of tests for read resources, especially around replacement
* Place read resources in the snapshot with "external" bit set
Fixes pulumi/pulumi#1521. This commit introduces two new step ops: Read
and ReadReplacement. The engine generates Read and ReadReplacement steps
when servicing ReadResource RPC calls from the language host.
* Fix an omission of OpReadReplace from the step list
* Rebase against master
* Transition to use V2 Resources by default
* Add a semantic "relinquish" operation to the engine
If the engine observes that a resource is read and also that the
resource exists in the snapshot as a non-external resource, it will not
delete the resource if the IDs of the old and new resources match.
* Typo fix
* CR: add missing comments, DeserializeDeployment -> DeserializeDeploymentV2, ID check