This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.
This includes:
* If `pulumi stack select` is run by itself, use an interactive
CLI menu to let the user select an existing stack, or choose to
create a new one. This looks as follows:
$ pulumi stack select
Please choose a stack, or choose to create a new one:
abcdef
babblabblabble
> currentlyselected
defcon
<create a new stack>
and is navigated in the usual way (arrow keys to move, enter to
select); a sketch of the prompt follows this list.
* If a stack name is passed that does not exist, prompt the user
to ask whether they want to create one on demand. This hooks
interesting moments in time, like `pulumi stack select foo`,
and cuts down on the need to run additional commands.
* If a current stack is required, but none is currently selected,
then pop the same interactive menu shown above to select one.
Depending on the command being run, we may or may not show the
option to create a new stack (e.g., that doesn't make much sense
when you're running `pulumi destroy`, but might when you're
running `pulumi stack`). This again lets you do with a single
command what would otherwise have entailed an error and multiple
commands to recover from it.
* If you run `pulumi stack init` without any additional arguments,
we interactively prompt for the stack name. Before, we would
error and you'd then need to run `pulumi stack init <name>`.
* Colorize some things nicely; for example, now all prompts will
by default become bright white.
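A minimal sketch of the interactive prompt from the first bullet,
assuming a prompt library like AlecAivazis/survey (the helper name and
the dependency are illustrative, not the actual implementation):

    package main

    import (
        "fmt"

        survey "gopkg.in/AlecAivazis/survey.v1"
    )

    // chooseStack prompts for an existing stack, or the option to
    // create a new one. Illustrative only; names are not the real code.
    func chooseStack(stacks []string) (string, error) {
        const newStackOption = "<create a new stack>"
        prompt := &survey.Select{
            Message: "Please choose a stack, or create a new one:",
            Options: append(stacks, newStackOption),
        }
        var choice string
        if err := survey.AskOne(prompt, &choice, nil); err != nil {
            return "", err
        }
        if choice == newStackOption {
            // fall through to the `pulumi stack init` flow
            fmt.Println("Creating a new stack...")
        }
        return choice, nil
    }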
When reading a configuration value from standard input and standard
input is not connected to a terminal, read until EOF and then trim a
trailing newline (if present) to get the value.
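A sketch of that non-terminal path (the helper name is illustrative):

    package main

    import (
        "io"
        "os"
        "strings"
    )

    // readConfigValueFromStdin reads until EOF and trims a single
    // trailing newline, so `echo secret | pulumi config set key`
    // does not capture the newline echo appends.
    func readConfigValueFromStdin() (string, error) {
        b, err := io.ReadAll(os.Stdin)
        if err != nil {
            return "", err
        }
        value := strings.TrimSuffix(string(b), "\n")
        value = strings.TrimSuffix(value, "\r") // CRLF too (an assumption)
        return value, nil
    }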
Fixes #822
Previously, when uploading a project to the service, we would only
upload the folder rooted by the Pulumi.yaml for that project. This
worked well, but it meant that customers needed to structure their
code such that Pulumi.yaml was always at the root of their project,
and if they wanted to share common files between two projects there
was no good way to do so.
This change introduces an optional piece of metadata, named context,
that can be added to Pulumi.yaml, which controls the root folder to
archive from. When it is set, it is combined with the location of the
Pulumi.yaml file for the project we are uploading, and the resulting
folder is used as the root of what we upload to the service.
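For example, a project that needs to share files from its parent
directory might look like this (a sketch; only the context field comes
from this change, the other fields are the usual project metadata):

    name: my-project
    runtime: nodejs
    # Archive starting from the parent of this Pulumi.yaml, so that
    # sibling folders like ../shared/ are included in the upload.
    context: ../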
Fixes: #574
Previously, the `pulumi` tool did not show any indication of progress
when doing a deployment. Combined with the fact that we do not create
resources in parallel, it meant that sometimes `pulumi` would appear to
hang, when really it was just waiting on some resource to be created
in AWS. In addition, some AWS resources take a long time to create, and
CI systems like Travis will kill the job if there is no output. This
causes us (and our customers) to have to do crazy dances where we
launch shell scripts that write a dot to the console every once in a
while so we don't get killed. While we plan to overhaul the output
logic (see #617), we take a first step by showing a nice little
spinner in the interactive case and, when run non-interactively,
having `pulumi` print a message that it is still working.
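A sketch of the two modes (the helper name, frames, and intervals are
illustrative, not the actual implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // startProgress prints a spinner when interactive, or a periodic
    // keep-alive message otherwise, until done is closed.
    func startProgress(interactive bool, done <-chan struct{}) {
        frames := []string{"|", "/", "-", `\`}
        interval := 100 * time.Millisecond
        if !interactive {
            interval = 30 * time.Second // keep CI systems like Travis alive
        }
        go func() {
            ticker := time.NewTicker(interval)
            defer ticker.Stop()
            for i := 0; ; i++ {
                select {
                case <-done:
                    return
                case <-ticker.C:
                    if interactive {
                        fmt.Printf("\r%s", frames[i%len(frames)])
                    } else {
                        fmt.Println("Still working...")
                    }
                }
            }
        }()
    }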
Fixes #794
In Travis, we've seen cases where writes to our standard streams
result in an error like: `/dev/stderr: resource temporarily
unavailable`, which causes the tests to panic.
Now, in a perfect world, writes to /dev/stderr would not fail in this
way, but we do not live in a perfect world. Other processes on the
machine may make stderr/stdout non-blocking. We are now seeing this
failure in Travis more often, and it is masking real Pulumi failures
we want to debug.
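A hedged sketch of one way to tolerate this, retrying writes that fail
with EAGAIN (the actual fix may be shaped differently):

    package main

    import (
        "errors"
        "io"
        "syscall"
        "time"
    )

    // retryingWrite retries a write that fails with EAGAIN, which can
    // happen when another process has made stderr non-blocking.
    func retryingWrite(w io.Writer, b []byte) (int, error) {
        written := 0
        for written < len(b) {
            n, err := w.Write(b[written:])
            written += n
            if err != nil {
                if errors.Is(err, syscall.EAGAIN) {
                    time.Sleep(10 * time.Millisecond) // back off, try again
                    continue
                }
                return written, err
            }
        }
        return written, nil
    }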
This change incorporates feedback on https://github.com/pulumi/pulumi/pull/764,
in addition to refactoring the retry logic to use our retry framework rather
than hand-rolling it in the REST API code. It's a minor improvement, but at
least lets us consolidate some of this logic which we'll undoubtedly use more
of over time.
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw). The events from the service, however, are still
boolean-valued. This changes the message payload to carry the full values.
Part of the work to make it easier to write tests of diff output.
Specifically, we now allow users to pass `--color=<option>` to several
pulumi commands, where option is one of 'always', 'never', 'raw', or
'auto' (the default).
The meaning of these flags is:
1. auto: colorize normally, unless running with --debug
2. always: always colorize, no matter what
3. never: never colorize, no matter what
4. raw: colorize, but preserve the original "<{%%}>"-style control
codes rather than the translated platform-specific codes. This is for
testing purposes and ensures we can test this across platforms.
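A sketch of how the option might be modeled as a pflag-style flag
value (names illustrative):

    package main

    import "fmt"

    // Colorization mirrors the four --color settings.
    type Colorization string

    const (
        ColorAuto   Colorization = "auto"
        ColorAlways Colorization = "always"
        ColorNever  Colorization = "never"
        ColorRaw    Colorization = "raw"
    )

    // String, Set, and Type satisfy the pflag.Value interface.
    func (c *Colorization) String() string { return string(*c) }
    func (c *Colorization) Type() string   { return "string" }

    // Set validates and stores the flag value.
    func (c *Colorization) Set(value string) error {
        switch Colorization(value) {
        case ColorAuto, ColorAlways, ColorNever, ColorRaw:
            *c = Colorization(value)
            return nil
        default:
            return fmt.Errorf("unknown color option %q", value)
        }
    }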
The two-phase output properties change broke the ability to recover
from a failed replacement that yields pending deletes in the checkpoint.
The issue here is simply that we should remember pending registrations
only for logical operations that *also* have a "new" state (create or
update). This commit fixes this, and also adds a new step test with
fault injection to probe many interesting combinations of steps.
If the process we are trying to kill has already exited, don't treat
this as an error. This can happen when we snapshot the process tree
before the process exits but it has exited by the time we get to
trying to kill it.
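In modern Go the tolerant kill can be spelled with os.ErrProcessDone;
a minimal sketch:

    package main

    import (
        "errors"
        "os"
    )

    // killIfRunning kills proc but tolerates it having already exited.
    func killIfRunning(proc *os.Process) error {
        if err := proc.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
            return err
        }
        return nil
    }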
Fixes #654
This improves the overall cloud CLI workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️ https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME            LAST UPDATE     RESOURCE COUNT  CLOUD URL
my-cloud-stack  2017-12-01 ...  3               https://pulumi.com/
my-faf-stack    n/a             0               n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface (sketched after this list) can be
implemented to substitute a new backend. This has operations to get
and list stacks, perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
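To make the shape concrete, a sketch of the two central types (the
method sets here are illustrative, not the exact signatures):

    package backend

    // Backend abstracts where stacks live: local state files for
    // fire-and-forget stacks, or the Pulumi Cloud.
    type Backend interface {
        GetStack(name string) (*Stack, error)
        ListStacks() ([]*Stack, error)
        CreateStack(name string) (*Stack, error)
        Update(stack string) error
        Destroy(stack string) error
    }

    // Stack wraps a stack managed by some Backend, carrying metadata
    // about where it came from.
    type Stack struct {
        Name    string
        Backend Backend // the backend this stack belongs to
    }

    // Update runs an update against the backend this stack came from.
    func (s *Stack) Update() error { return s.Backend.Update(s.Name) }

    // Destroy tears down the stack's resources via its backend.
    func (s *Stack) Destroy() error { return s.Backend.Destroy(s.Name) }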
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
This PR just wires the `Package.Main` field to the Pulumi Service (and in subsequent PRs, the `pulumi-service` and `pulumi-ppc` repos).
@joeduffy , should we just upload the entire `package.Package` type with the `UpdateProgramRequest` type? I'm not sure we want to treat that type as part of our public API surface area. But on the other hand, we'll need to mirror relevant fields in N places if we don't.
Previously, we were inconsistent in how we handled argument validation
in the CLI. Many commands used cobra.Command's Args property to
provide a validator if they took arguments, but commands that took no
arguments rarely used cobra.NoArgs to indicate this.
This change does two things:
1. Introduce `cmdutil.ArgsFunc` (sketched below), which works like
`cmdutil.RunFunc`: it wraps an existing cobra validator and lets us
control the behavior when argument validation fails.
2. Ensure every command sets the Args property with an instance of
cmdutil.ArgsFunc. The cmdutil package defines wrappers for all the
cobra validators we are using, to prevent us from having to spell out
`cmdutil.ArgsFunc(...)` everywhere.
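A sketch of the wrapper (hedged; the real helper may report the
failure differently):

    package cmdutil

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    // ArgsFunc wraps a cobra positional-args validator so that a
    // validation failure prints the usage text and exits, instead of
    // surfacing a raw error to the user.
    func ArgsFunc(validator cobra.PositionalArgs) cobra.PositionalArgs {
        return func(cmd *cobra.Command, args []string) error {
            if err := validator(cmd, args); err != nil {
                fmt.Fprintf(os.Stderr, "error: %v\n", err)
                cmd.Usage()
                os.Exit(1)
            }
            return nil
        }
    }

    // NoArgs is one of the prepackaged wrappers mentioned above.
    var NoArgs = ArgsFunc(cobra.NoArgs)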
Fixes #588
Outside of `.pulumiignore` we support a few "default" excludes that
try to push folks towards a pit of success.
Previously, there was no way to opt out of these, which would be bad
if our heuristics caused something you really care about to be
elided. With this change, we add an optional setting in Pulumi.yaml
that allows you to opt out of this behavior.
As part of the work, I changed .git to be one of these "default"
excludes instead of it only applying when you had a .pulumiignore file
in a directory.
Previously, we would archive every file in node_modules; however, we
only actually need the production modules. This change adds an
implicit ignorer when walking a directory that has a package.json file
in it, to exclude anything under `node_modules` that is not pulled in
via an entry in the `dependencies` section of the package.json.
This change will also cause the archives that we upload to not include
either pulumi or any @pulumi/... packages, since they are not listed
in the dependencies section of a package.json. This is fine, since we
link in the versions we have in the cloud anyway. When we move to
using npm for these packages (instead of linking) then they will be
included.
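A simplified sketch of the dependency check (it only reads direct
dependencies; the real walk must also keep their transitive
dependencies):

    package main

    import (
        "encoding/json"
        "os"
    )

    // productionDependencies returns the names listed under
    // "dependencies" in package.json; devDependencies (and anything
    // npm-linked, like @pulumi/* packages) are excluded from the archive.
    func productionDependencies(packageJSONPath string) (map[string]bool, error) {
        b, err := os.ReadFile(packageJSONPath)
        if err != nil {
            return nil, err
        }
        var pkg struct {
            Dependencies map[string]string `json:"dependencies"`
        }
        if err := json.Unmarshal(b, &pkg); err != nil {
            return nil, err
        }
        deps := make(map[string]bool)
        for name := range pkg.Dependencies {
            deps[name] = true
        }
        return deps, nil
    }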
This won't be needed once pulumi/pulumi-ppc#95 has landed and we've
updated our PPCs, but for now always add directory entries (even if
the files would be excluded) for every directory.
When deploying a project via the Pulumi.com service, we have to upload
the entire "context" of your project to Pulumi.com. The context of the
program is all files in the directory tree rooted by the `Pulumi.yaml`
file, which will often contain stuff we don't want to upload, but
previously we had no control over what would be uploaded (and so folks
would do hacky things like delete folders before running `pulumi
update`).
This change adds support for `.pulumiignore` files which should behave
like `.gitignore`. In addition, we now compress files when adding
them to the zip archive we upload; previously we did not.
By default, every .pulumiignore file is treated as if it had an
exclusion for `.git/` at the top of the file (users can override this
by adding an explicit `!.git/` to their file) since it is very
unlikely for there to ever be a reason to upload the .git folder to
the service.
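For example, a hypothetical `.pulumiignore` might look like:

    # exclude local tooling and test fixtures from the upload
    *.log
    test/fixtures/
    # the implicit .git/ exclusion can be reversed if you really need it:
    !.git/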
Fixes pulumi/pulumi-service#122
This change adds back component output properties. Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
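Rendered as a plain interface rather than the actual gRPC definitions,
the split looks roughly like this (the signatures are illustrative;
only the two method names come from the description above):

    package sketch

    // ResourceMonitor is an illustrative rendering of the split RPC.
    type ResourceMonitor interface {
        // RegisterResource begins a registration, carrying all of the
        // resource's input properties.
        RegisterResource(typ, name string, inputs map[string]interface{}) (urn string, err error)
        // CompleteResource finishes it, optionally attaching output
        // properties synthesized by the component.
        CompleteResource(urn string, outputs map[string]interface{}) error
    }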
In an effort to improve performance and overall reliability, this PR moves the responsibility of uploading the Pulumi program from the Pulumi Service to the CLI. (Part of fixing https://github.com/pulumi/pulumi-service/issues/313.)
Previously the CLI would send (the dozens of MiB) program archive to the Service, which would then upload the data to S3. Now the CLI sends the data to S3 directly, avoiding the unnecessary copying of data around.
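Assuming the Service hands the CLI a pre-signed PUT URL (an
assumption; the exact handshake is in the Service-side change linked
below), the CLI-side upload is roughly:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // uploadArchive PUTs the program archive to a pre-signed S3 URL,
    // instead of streaming it through the Service API.
    func uploadArchive(signedURL string, archive []byte) error {
        req, err := http.NewRequest(http.MethodPut, signedURL, bytes.NewReader(archive))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/zip")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("upload failed: %s", resp.Status)
        }
        return nil
    }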
The Service-side API changes are in https://github.com/pulumi/pulumi-service/pull/323. I tested previews, updates, and destroys running the service and PPC on localhost.
The PR refactors how we handle the three kinds of program updates, and just unifies them into a single method. This makes the diff look crazy, but the code should be much simpler. I'm not sure what to do about supporting all the engine options for the Cloud-variants of Pulumi commands; I suspect that's something that should be handled at a later time.
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.
The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.
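A sketch of the field and the stringification rule (only the `Raw`
field name comes from this change; the rest is illustrative):

    package diag

    import "fmt"

    // Message carries diagnostic text; when Raw is true the text is
    // not a format string and must not go through Sprintf.
    type Message struct {
        Text string
        Raw  bool
    }

    // Render stringifies the message, respecting Raw.
    func (m Message) Render(args ...interface{}) string {
        if m.Raw {
            return m.Text // verbatim: may contain stray '%' signs
        }
        return fmt.Sprintf(m.Text, args...)
    }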
Fixes #551.
Adds OpenTracing to the Pulumi engine and plugin + langhost subprocesses.
We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.
The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.
We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.
In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.
We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.
We currently support sending tracing data to a Zipkin-compatible endpoint. This has been validated with both Zipkin and Jaeger UIs.
We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface. There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
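Initialization follows the usual Zipkin-compatible OpenTracing setup,
roughly (a sketch; the engine's actual wiring may differ):

    package main

    import (
        opentracing "github.com/opentracing/opentracing-go"
        zipkin "github.com/openzipkin/zipkin-go-opentracing"
    )

    // initTracing points the global OpenTracing tracer at the endpoint
    // given via --trace, e.g. http://localhost:9411/api/v1/spans.
    func initTracing(endpoint string) error {
        collector, err := zipkin.NewHTTPCollector(endpoint)
        if err != nil {
            return err
        }
        recorder := zipkin.NewRecorder(collector, false /*debug*/, "localhost:0", "pulumi")
        tracer, err := zipkin.NewTracer(recorder)
        if err != nil {
            return err
        }
        opentracing.SetGlobalTracer(tracer)
        return nil
    }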
On windows, we have to indirect through a batch file to launch plugins,
which means when we go to close a plugin, we only kill cmd.exe that is
running the batch file and not the underlying node process. This
prevents `pulumi` from exiting cleanly. So on Windows, we also kill any
direct children of the plugin process.
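A sketch of the sweep, assuming a process-listing helper like
mitchellh/go-ps:

    package main

    import (
        "os"

        ps "github.com/mitchellh/go-ps"
    )

    // killChildren kills the direct children of pid; on Windows the
    // plugin's pid belongs to cmd.exe running the batch file, so the
    // real node process is one level down.
    func killChildren(pid int) error {
        procs, err := ps.Processes()
        if err != nil {
            return err
        }
        for _, p := range procs {
            if p.PPid() != pid {
                continue
            }
            child, err := os.FindProcess(p.Pid())
            if err != nil {
                continue // already gone
            }
            child.Kill() // tolerate failures; the child may have exited
        }
        return nil
    }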
Fixes #504
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.
When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.
We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track are "owner" and "name" which map to information
we use on pulumi.com.
When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
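A simplified sketch of the remote parsing (it handles only the two
common GitHub URL shapes):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseGitHubRemote extracts owner and name from remotes like
    //   git@github.com:pulumi/pulumi.git
    //   https://github.com/pulumi/pulumi.git
    func parseGitHubRemote(remote string) (owner, name string, err error) {
        var path string
        switch {
        case strings.HasPrefix(remote, "git@github.com:"):
            path = strings.TrimPrefix(remote, "git@github.com:")
        case strings.HasPrefix(remote, "https://github.com/"):
            path = strings.TrimPrefix(remote, "https://github.com/")
        default:
            return "", "", fmt.Errorf("remote %q is not a GitHub URL", remote)
        }
        path = strings.TrimSuffix(path, ".git")
        parts := strings.SplitN(path, "/", 2)
        if len(parts) != 2 {
            return "", "", fmt.Errorf("cannot parse owner/name from %q", remote)
        }
        return parts[0], parts[1], nil
    }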
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).
As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.
We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow.
Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
Trapping these signals hijacks the usual termination behavior for any
program that happens to link in the engine and perform an operation
that starts a gRPC server. These servers already provide a cancellation
mechanism via a `cancel` channel parameter; if the using program wants
to gracefully terminate these servers on some signal, it is responsible
for providing that behavior.
This also fixes a leak in which the goroutine responsible for waiting on
a server's signals and cancellation channel would never exit.
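A sketch of the resulting contract (names illustrative):

    package main

    import (
        "net"

        "google.golang.org/grpc"
    )

    // serveWithCancel runs a gRPC server until the cancel channel
    // fires; the hosting program decides what (if anything) signals it.
    func serveWithCancel(lis net.Listener, srv *grpc.Server, cancel <-chan bool) error {
        done := make(chan error)
        go func() { done <- srv.Serve(lis) }()
        select {
        case <-cancel:
            srv.GracefulStop() // makes Serve return; no goroutine leak
            return <-done
        case err := <-done:
            return err
        }
    }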
Now that config is stored in Pulumi.yaml, we need to mimic the behavior
around .pulumi/ while edits are applied. This will ensure that config
values carry forward from the original program settings.
This fixes pulumi/pulumi-aws#48.
This improves a few things about assets:
* Compute and store hashes as input properties (sketched after this
list), so that changes on disk are recognized and trigger updates
(pulumi/pulumi#153).
* Issue explicit and prompt diagnostics when an asset is missing or
of an unexpected kind, rather than failing late (pulumi/pulumi#156).
* Permit raw directories to be passed as archives, in addition to
archive formats like tar, zip, etc. (pulumi/pulumi#240).
* Permit not only assets as elements of an archive's member list, but
also other archives themselves (pulumi/pulumi#280).
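For the first bullet, a sketch of computing a content hash to store as
an input property (the choice of SHA-256 here is an assumption):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "io"
        "os"
    )

    // assetHash returns the SHA-256 of the file's contents; storing it
    // as an input property means an on-disk change produces a diff and
    // therefore triggers an update.
    func assetHash(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }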
If --logtostderr is passed, and an unhandled error occurs that
was produced by the github.com/pkg/errors package, we will now
emit the stack trace. Much easier for debugging purposes.
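The extraction follows pkg/errors' documented stackTracer pattern,
roughly:

    package main

    import (
        "fmt"
        "os"

        "github.com/pkg/errors"
    )

    // stackTracer is the interface pkg/errors suggests asserting
    // against; values from errors.New/errors.Wrap implement it.
    type stackTracer interface {
        StackTrace() errors.StackTrace
    }

    // printStackIfAvailable emits the stack trace when the error
    // carries one.
    func printStackIfAvailable(err error) {
        if st, ok := err.(stackTracer); ok {
            fmt.Fprintf(os.Stderr, "%+v\n", st.StackTrace())
        }
    }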
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).
While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.
As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
The events we emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
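From the caller's side, the new contract looks roughly like this
sketch (the event type and names are illustrative):

    package main

    import "fmt"

    // Event stands in for a strongly typed engine event; the real type
    // carries a payload per event kind.
    type Event struct {
        Kind string
        Text string
    }

    // runOperation shows the caller's obligation: drain the channel
    // until the engine closes it to signal completion.
    func runOperation(start func(chan<- Event)) {
        events := make(chan Event)
        go start(events) // the engine closes events when it finishes
        for e := range events {
            fmt.Printf("[%s] %s\n", e.Kind, e.Text)
        }
    }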
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
Instead of binding on 0.0.0.0 (which will listen on every interface)
let's only listen on localhost. On windows, this both makes the
connection Just Work and also prevents the Windows Firewall from
blocking the listen (and displaying UI saying it has blocked an
application and asking if the user should allow it)
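Concretely, the listener binds to loopback instead of the wildcard
address (a sketch):

    package main

    import "net"

    // listenLoopback binds only to the loopback interface; port 0 asks
    // the OS for a free port, and 127.0.0.1 keeps the Windows Firewall
    // from prompting about inbound access.
    func listenLoopback() (net.Listener, error) {
        return net.Listen("tcp", "127.0.0.1:0")
    }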
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This change introduces a --debug option to the plan, deploy, and
destroy commands. Unlike --logtostderr, which merely hooks into the
copious Glogging that we perform (and is therefore meant for developers
of the tools themselves and not end users), --debug hooks into the
user-facing debug stream. This now includes any debug messages coming
from the resource providers as they perform their tasks.
The old contract library tried to be glog-friendly in its failfast behavior.
It turns out glog seldom does the right thing when goroutines are involved
(which, as of last sprint, they now are). We already had issues with
stack traces not getting printed when --logtostderr was turned on, and
the code tried to work around this; but that still didn't work for the
goroutines case.
All of this seems like way too much cleverness. Let's just use Go panics.