Commit graph

995 commits

Author SHA1 Message Date
joeduffy
86267b86b9 Merge root stack changes with parenting 2017-11-20 10:08:59 -08:00
joeduffy
5dc4b0b75c Switch to parent pointers; display components nicely
This change switches from child lists to parent pointers in the
way resource ancestries are represented.  This cleans up a fair bit
of the old parenting logic, including all notion of ambient parent
scopes (and will notably address pulumi/pulumi#435).

This lets us show a nicer parent/child display in the output when
doing planning and updating.  For instance, here is an update of
a lambda's text, which is logically part of a cloud timer:

    * cloud:timer:Timer: (same)
          [urn=urn:pulumi:malta::lm-cloud:☁️timer:Timer::lm-cts-malta-job-CleanSnapshots]
        * cloud:function:Function: (same)
              [urn=urn:pulumi:malta::lm-cloud:☁️function:Function::lm-cts-malta-job-CleanSnapshots]
            * aws:serverless:Function: (same)
                  [urn=urn:pulumi:malta::lm-cloud::aws:serverless:Function::lm-cts-malta-job-CleanSnapshots]
                ~ aws:lambda/function:Function: (modify)
                      [id=lm-cts-malta-job-CleanSnapshots-fee4f3bf41280741]
                      [urn=urn:pulumi:malta::lm-cloud::aws:lambda/function:Function::lm-cts-malta-job-CleanSnapshots]
                    - code            : archive(assets:2092f44) {
                        // etc etc etc

Note that we still get walls of text, but this will be actually
quite nice when combined with pulumi/pulumi#454.

I've also suppressed printing properties that didn't change during
updates when --detailed was not passed, and also suppressed empty
strings and zero-length arrays (since TF uses these as defaults in
many places and it just makes creation and deletion quite verbose).

Note that this is a far cry from everything we can possibly do
here as part of pulumi/pulumi#340 (and even pulumi/pulumi#417).
But it's a good start towards taming some of our output spew.
2017-11-20 09:07:53 -08:00
Luke Hoban
329d70fba2 Rename cloud operations file 2017-11-19 22:31:30 -08:00
Luke Hoban
06f0559849 Refactoring of OperationsProvider code 2017-11-19 22:28:49 -08:00
Luke Hoban
0079a38ee0 Transition to resource tree 2017-11-19 17:42:54 -08:00
Luke Hoban
3f0d511cd8 Simplify log collection 2017-11-19 17:42:54 -08:00
joeduffy
8e01135572 Log the project and stack names 2017-11-19 10:16:47 -08:00
joeduffy
a591775409 Add a missing copyright header 2017-11-19 08:08:30 -08:00
joeduffy
5c4c7064ba Fix a hanging ":" in the output 2017-11-19 08:08:30 -08:00
Luke Hoban
96e4b74b15
Support for stack outputs (#581)
Adds support for top-level exports in the main script of a Pulumi Program to be captured as stack-level output properties.

This creates a new `pulumi:pulumi:Stack` component as the root of the resource tree in all Pulumi programs.  That resource has properties for each top-level export in the Node.js script.

Running `pulumi stack` will display the current value of these outputs.
2017-11-17 15:22:41 -08:00
Joe Duffy
df7114aca2
Merge pull request #578 from pulumi/FixSnapshot
Fix plan snapshotting.
2017-11-16 13:11:58 -08:00
Matt Ellis
5fc226a952 Change configuration verbs for getting and setting values
A handful of UX improvements for config:

 - `pulumi config ls` has been removed. Now, `pulumi config` with no
   arguments prints the table of configuration values for a stack and
   a new command `pulumi config get <key>` prints the value for a
   single configuration key (useful for scripting).
 - `pulumi config text` and `pulumi config secret` have been merged
   into a single command `pulumi config set`. The flag `--secret` can
   be used to encrypt the value we store (like `pulumi config secret`
   used to do).
 - To make it obvious that setting a value with `pulumi config set` is
   in plain text, we now echo a message back to the user saying we
   added the configuration value in plaintext.

Fixes #552
2017-11-16 11:39:28 -08:00
Joe Duffy
77460a7dc0
Plumb the project name correctly (#583)
This change fixes getProject to return the project name, as
originally intended.  (One line was missing.)

It also adds an integration test for this.

Fixes pulumi/pulumi#580.
2017-11-16 08:15:56 -08:00
Joe Duffy
98ef0c4bb5
Allow overriding a Pulumi.yaml's entrypoint (#582)
Because the Pulumi.yaml file demarcates the boundary used when
uploading a program to the Pulumi.com service at the moment, we
have trouble when a Pulumi program uses "up and over" references.
For instance, our customer wants to build a Dockerfile located
in some relative path, such as `../../elsewhere/`.

To support this, we will allow the Pulumi.yaml file to live
somewhere other than the main Pulumi entrypoint.  For example,
it can live at the root of the repo, while the Pulumi program
lives in, say, `infra/`:

    Pulumi.yaml:
    name: as-before
    main: infra/

This fixes pulumi/pulumi#575.  Further work can be done here to
provide even more flexibility; see pulumi/pulumi#574.
2017-11-16 07:49:07 -08:00
pat@pulumi.com
1d9fa045cb Fix plan snapshotting.
When producing a snapshot for a plan, we have two resource DAGs. One of
these is the base DAG for the plan; the other is the current DAG for the
plan. Any resource r may be present in both DAGs. In order to produce a
snapshot, we need to merge these DAGs such that all resource
dependencies are correctly preserved. Conceptually, the merge proceeds
as follows:

- Begin with an empty merged DAG.
- For each resource r in the current DAG, insert r and its outgoing
  edges into the merged DAG.
- For each resource r in the base DAG:
    - If r is in the merged DAG, we are done: if the resource is in the
      merged DAG, it must have been in the current DAG, which accurately
      captures its current dependencies.
    - If r is not in the merged DAG, insert it and its outgoing edges
      into the merged DAG.

Physically, however, each DAG is represented as a list of resources
without explicit dependency edges. In place of edges, it is assumed that
the list represents a valid topological sort of its source DAG. Thus,
any resource r at index i in a list L must be assumed to be dependent on
all resources in L with index j s.t. j < i. Due to this representation,
we implement the algorithm above as follows to produce a merged list
that represents a valid topological sort of the merged DAG:

- Begin with an empty merged list.
- For each resource r in the current list, append r to the merged list.
  r must be in a correct location in the merged list, as its position
  relative to its assumed dependencies has not changed.
- For each resource r in the base list:
    - If r is in the merged list, we are done by the logic given in the
      original algorithm.
    - If r is not in the merged list, append r to the merged list. r
      must be in a correct location in the merged list:
        - If any of r's dependencies were in the current list, they must
          already be in the merged list and their relative order w.r.t.
          r has not changed.
        - If any of r's dependencies were not in the current list, they
          must already be in the merged list, as they would have been
          appended to the list before r.

Prior to these changes, we had been performing these operations in
reverse order: we would start by appending any resources in the old list
that were not in the new list, then append the whole of the new list.
This caused out-of-order resources when a program that produced pending
deletions failed to run to completion.

Fixes #572.
2017-11-15 16:21:42 -08:00
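As a rough illustration of the list-merge described in this commit message, here is a minimal Go sketch; the `Resource` and `URN` types are simplified stand-ins, not the engine's actual types.

```
package snapshotsketch

// URN and Resource are simplified stand-ins for the engine's types.
type URN string

type Resource struct {
	URN URN
	// other state elided
}

// mergeSnapshots appends every resource from the current list, then any base
// resource not already present. Because both inputs are valid topological
// sorts of their DAGs, the merged list remains a valid topological sort.
func mergeSnapshots(base, current []*Resource) []*Resource {
	merged := make([]*Resource, 0, len(base)+len(current))
	seen := make(map[URN]bool)

	// Current resources keep their relative order; their positions relative
	// to their assumed dependencies have not changed.
	for _, r := range current {
		merged = append(merged, r)
		seen[r.URN] = true
	}

	// Base resources not already present go afterwards; any dependency they
	// have is already in the merged list by this point.
	for _, r := range base {
		if !seen[r.URN] {
			merged = append(merged, r)
			seen[r.URN] = true
		}
	}
	return merged
}
```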
Chris Smith
84cd810112
Move program uploads to the CLI (#571)
In an effort to improve performance and overall reliability, this PR moves the responsibility of uploading the Pulumi program from the Pulumi Service to the CLI. (Part of fixing https://github.com/pulumi/pulumi-service/issues/313.)

Previously the CLI would send the (often dozens of MiB) program archive to the Service, which would then upload the data to S3. Now the CLI sends the data to S3 directly, avoiding the unnecessary copying of data around.

The Service-side API changes are in https://github.com/pulumi/pulumi-service/pull/323. I tested previews, updates, and destroys running the service and PPC on localhost.

The PR refactors how we handle the three kinds of program updates, and just unifies them into a single method. This makes the diff look crazy, but the code should be much simpler. I'm not sure what to do about supporting all the engine options for the Cloud-variants of Pulumi commands; I suspect that's something that should be handled at a later time.
2017-11-15 13:27:28 -08:00
Luke Hoban
4730ab5a0b
Add integration test support for relative work dir (#567)
Adds the ability to use a working directory nested within an example folder.
2017-11-14 21:02:47 -08:00
Pat Gavlin
9c42789359
Merge pull request #562 from pulumi/RawDiagMessages
Stop formatting output that should be raw.
2017-11-14 13:20:57 -08:00
Matt Ellis
f5dc3a2b53 Add a very barebones pulumi logs command
This is the smallest possible thing that could work for both the local
development case and the case where we are targeting the Pulumi
Service.

Pulling down the pulumiframework and component packages here is a bit
of a feel bad, but we plan to rework the model soon enough to a
provider model which will remove the need for us to hold on to this
code (and will bring back the factoring where the CLI does not have
baked in knowledge of the Pulumi Cloud Framework).

Fixes #527
2017-11-14 12:26:55 -08:00
Pat Gavlin
234f0816e5 Stop formatting output that should be raw.
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.

The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.

Fixes #551.
2017-11-14 11:26:41 -08:00
Luke Hoban
a350b1654f
Report test run data and timing to S3 (#560)
Add the ability to upload data and timing for test runs to S3. Uploaded data is designed to be queried via a service like AWS Athena and these queries can then be imported into BI tools (Excel, QuickSight, PowerBI, etc.)

Initially hook this up to the `minimal` test as a baseline.
2017-11-14 09:33:22 -08:00
Pat Gavlin
28579eba94
Rework asset identity and exposure of old assets. (#548)
Note: for the purposes of this discussion, archives will be treated as
assets, as their differences are not particularly meaningful.

Currently, the identity of an asset is derived from the hash and the
location of its contents (i.e. two assets are equal iff their contents
have the same hash and the same path/URI/inline value). This means that
changing the source of an asset will cause the engine to detect a
difference in the asset even if the source's contents are identical. At
best, this leads to inefficiencies such as unnecessary updates. This
commit changes asset identity so that it is derived solely from an
asset's hash. The source of an asset's contents is no longer part of
the asset's identity, and need only be provided if the contents
themselves may need to be available (e.g. if a hash does not yet exist
for the asset or if the asset's contents might be needed for an update).

This commit also changes the way old assets are exposed to providers.
Currently, an old asset is exposed as both its hash and its contents.
This allows providers to take a dependency on the contents of an old
asset being available, even though this is not an invariant of the
system. These changes remove the contents of old assets from their
serialized form when they are passed to providers, eliminating the
ability of a provider to take such a dependency. In combination with the
changes to asset identity, this allows a provider to detect changes to
an asset simply by comparing its old and new hashes.

This is half of the fix for [pulumi/pulumi-cloud#158]. The other half
involves changes in [pulumi/pulumi-terraform].
2017-11-12 11:45:13 -08:00
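A minimal Go sketch of hash-only asset identity as described above; the `Asset` shape and field names are assumptions for illustration, not the real types.

```
package assetidentitysketch

// Asset is an assumed, simplified shape for illustration only.
type Asset struct {
	Hash string // hash of the contents; the sole basis of identity
	Path string // optional source location; no longer part of identity
	URI  string
	Text string
}

// equal compares assets solely by hash, so repointing an asset at a different
// path whose contents are identical no longer registers as a change.
func equal(a, b Asset) bool {
	return a.Hash != "" && a.Hash == b.Hash
}
```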
pat@pulumi.com
8c7932c1b5 Fix an archive-related bug.
Properly skip the .pulumi folder when dealing with directory-backed archives.
2017-11-10 19:56:25 -08:00
Pat Gavlin
db2f802d34 Log actual and expected sizes on ErrWriteTooLong.
If a blob's reported size is incorrect, `archiveTar` may attempt to
write more bytes to an entry than it reported in that entry's header.
These changes provide a bit more context with the resulting error as
well as removing an unnecessary `LimitReader`.
2017-11-10 11:46:49 -08:00
Luke Hoban
af5298f4aa
Initial work on tracing support (#521)
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.

We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.

The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.

We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.

In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.

We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.

We currently support sending tracing data to a Zipkin-compatible endpoint.  This has been validated with both Zipkin and Jaeger UIs.

We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface.  There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
2017-11-08 17:08:51 -08:00
pat@pulumi.com
0b6b211e69 Do not delete temp directories during tests.
This interacts badly with the deferred destroy.
2017-11-08 16:51:55 -08:00
Pat Gavlin
d01465cf6d
Make archive assets stream their contents. (#542)
We currently have a nasty issue with archive assets wherein they read
their entire contents into memory each time they are accessed (e.g. for
hashing or translation). This interacts badly with scenarios that
place large amounts of data in an archive: aside from limiting the size
of an archive the engine can handle, it also bloats the engine's memory
requirements. This appears to have caused issues when running the PPC in
AWS: evidence suggests that the very high peak memory requirements this
approach implies caused high swap traffic that impacted the service's
availability.

In order to fix this issue, these changes move archives onto a
streaming read model. In order to read an archive, a user:
- Opens the archive with `Archive.Open`. This returns an ArchiveReader.
- Iterates over its contents using `ArchiveReader.Next`. Each returned
  blob must be read in full between successive calls to
  `ArchiveReader.Next`. This requirement is essentially forced upon us
  by the streaming nature of TAR archives.
- Closes the ArchiveReader with `ArchiveReader.Close`.

This model does not require that the complete contents of the archive or
any of its constituent files are in memory at any given time.

Fixes #325.
2017-11-08 15:28:41 -08:00
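A rough Go sketch of the streaming read model described above. The `Open`/`Next`/`Close` names come from the commit message, but the exact signatures shown here are assumptions.

```
package archivesketch

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
)

// ArchiveReader mirrors the streaming interface named in the commit message;
// the exact method signatures here are assumptions for illustration.
type ArchiveReader interface {
	// Next returns the name and contents of the next member, or io.EOF once
	// the archive is exhausted. Each blob must be read in full before the
	// next call to Next, as the streaming (TAR-like) model requires.
	Next() (string, io.Reader, error)
	Close() error
}

// hashArchive consumes an archive one member at a time, so the complete
// contents never need to be resident in memory.
func hashArchive(r ArchiveReader) (string, error) {
	defer r.Close()
	h := sha256.New()
	for {
		_, blob, err := r.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return "", err
		}
		if _, err := io.Copy(h, blob); err != nil {
			return "", err
		}
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```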
Chris Smith
5085c7eef8
Rework polling (#531)
Rework the polling code for `pulumi` when targeting hosted scenarios. (i.e. absorb https://github.com/pulumi/pulumi-service/pull/282.) We now return an `updateID` for update, preview, and destroy, and bake that into the URL.

This PR also silently ignores 504 errors which are now returned from the Pulumi Service if the PPC request times out.

Combined, this should ameliorate some of the symptoms we see from https://github.com/pulumi/pulumi-ppc/issues/60. Though we'll continue to try to identify and fix the root cause.
2017-11-06 14:12:20 -08:00
Matt Ellis
098b2b8401 Remove integration test working folder if tests pass 2017-11-06 11:50:21 -08:00
Matt Ellis
aa80b7dee9 Don't pass PULUMI_API when running integration tests
We are not ready to start running integration tests against
Pulumi.com, but `pulumi` uses `PULUMI_API` to decide what backend to
target. Remove it when running `pulumi` in the integration tests so we
always get the local behavior.

pulumi/pulumi#471 tracks the work to actually be able to choose what
backend you want when an integration test runs.
2017-11-06 11:50:16 -08:00
Joe Duffy
d5ef7a8538
Exit early during integration tests (#530)
This changes the overall flow to exit early when a failure occurs.
We will try to clean up if we believe we have gotten far enough to
have actually provisioned any resources.
2017-11-06 09:04:38 -08:00
joeduffy
8fd4770dcf Cap test/host names at 10 chars, to keep names reasonable 2017-11-06 08:22:45 -08:00
joeduffy
145cdf1f71 Also use stack names as temp dir names 2017-11-06 07:50:28 -08:00
Joe Duffy
68e23f03c1
Add hostname and testdir to integration test stack names (#529)
This should help us attribute resources allocated (and orphaned) by
tests a little more easily, including tracking down any leaking resources.
2017-11-06 07:23:07 -08:00
joeduffy
4d19a71efb Remove extra newline to shorten logs 2017-11-05 17:52:40 -08:00
joeduffy
7b045bd7af Retain stdout/stderr for integration test commands
...and only print it upon error, as part of pulumi/pulumi#518.
2017-11-05 17:28:12 -08:00
Joe Duffy
fbf13ec4d7
Use full state during updates (#526)
In our existing code, we only use the input state for old and new
properties.  This is incorrect and I'm astonished we've been flying
blind for so long here.  Some resources require the output properties
from the prior operation in order to perform updates.  Interestingly,
we did correctly use the full synthesized state during deletes.

I ran into this with the AWS Cloudfront Distribution resource,
which requires the etag from the prior operation in order to
successfully apply any subsequent operations.
2017-11-03 19:45:19 -07:00
Luke Hoban
13b10490c2
Only call Configure on a package once (#520)
We were previously calling configure on each package once for every time it was mentioned in the config.  We only need to call it once ever, as we pass the full bag of relevant config through on that one call.
2017-11-03 13:52:59 -07:00
Joe Duffy
0290283e6f
Skip unknown properties (#524)
It's legal and possible for undefined properties to show up in
objects, since that's an idiomatic JavaScript way of initializing
missing properties.  Instead of failing for these during deployment,
we should simply skip marshaling them to Terraform and let it do
its thing as usual.  This came up during our customer workload.
2017-11-03 13:40:15 -07:00
joeduffy
5bf8b5cd3b Fix an error message typo 2017-11-03 11:20:33 -07:00
Pat Gavlin
ab9eae439b Update the type of EventIndex to match the API./service type. 2017-11-03 11:02:22 -07:00
Matt Ellis
44d432a559 Support workspace-local configuration and use it by default
Previously, we stored configuration information in the Pulumi.yaml
file. This was a change from the old model where configuration was
stored in a special section of the checkpoint file.

While doing things this way has some upsides, like being able to flow
configuration changes with your source code (e.g. fixed values for a
production stack that version with the code), it caused some friction
for the local development scenario. In this case, setting
configuration values would leave pending changes in Pulumi.yaml, and if
you didn't want to publish these changes, you'd have to remember to
remove them before committing. It also was problematic for our examples,
where it was not clear if we wanted to actually include values like
`aws:config:region` in our samples.  Finally, we found that for our
own pulumi service, we'd have values that would differ across each
individual dev stack, and publishing these values to a global
Pulumi.yaml file would just be adding noise to things.

We now adopt a hybrid model, where by default configuration is stored
locally, in the workspace's settings per project. A new flag, `--save`,
tells commands to operate on the configuration information stored in
Pulumi.yaml.

With this change, we now have four "slots" that configuration
values can end up in:

1. In the Pulumi.yaml file, applies to all stacks
2. In the Pulumi.yaml file, applied to a specific stack
3. In the local workspace.json file, applied to all stacks
4. In the local workspace.json file, applied to a specific stack

When computing the configuration information for a stack, we apply
configuration in the above order, overriding values as we go
along.

We also invert the default behavior of the `pulumi config` commands so
they operate on a specific stack (i.e. how they did before
e3610989). If you want to apply configuration to all stacks, `--all`
can be passed to any configuration command.
2017-11-02 13:05:01 -07:00
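A small Go sketch of the precedence described above, assuming each of the four slots has already been loaded into a hypothetical string map; later (more specific) sources override earlier ones.

```
package configsketch

// mergeConfig layers the four configuration sources in the order listed in
// the commit message, from most general to most specific, so a stack-specific
// value in the local workspace file wins over everything else.
func mergeConfig(projectAll, projectStack, workspaceAll, workspaceStack map[string]string) map[string]string {
	merged := map[string]string{}
	for _, source := range []map[string]string{projectAll, projectStack, workspaceAll, workspaceStack} {
		for k, v := range source {
			merged[k] = v // later sources override earlier ones
		}
	}
	return merged
}
```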
Matt Ellis
07b4d9b36b Add Pulumi.com backend, unify cobra Commands
As part of the unification it became clear where we did not support
features that we had for the local backend. I opened issues and added
comments.
2017-11-02 11:19:00 -07:00
Chris Smith
8b99c9a63f Bring in API types from the Pulumi Service 2017-11-02 11:19:00 -07:00
Chris Smith
76f5e832c2 Add 'pulumi login' test 2017-11-02 11:19:00 -07:00
Chris Smith
dfcc165840
Update API types to match HEAD (#509)
Add `Name` (Pulumi project name) and `Runtime` (Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`, as they are now required.

The long story is that as part of the PPC enabling destroy operations, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.

This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
2017-10-31 15:03:07 -07:00
Matt Ellis
67426833a4
Merge pull request #505 from pulumi/FixWindows
Get windows integration tests working again
2017-10-31 00:19:20 -07:00
Matt Ellis
fd64125daf Aggregate process termination errors 2017-10-30 23:35:11 -07:00
Matt Ellis
95ee6d85f6 Kill plugin child processes as well on Windows
On Windows, we have to indirect through a batch file to launch plugins,
which means when we go to close a plugin, we only kill the cmd.exe that is
running the batch file and not the underlying node process. This
prevents `pulumi` from exiting cleanly. So on Windows, we also kill any
direct children of the plugin process.

Fixes #504
2017-10-30 23:22:14 -07:00
Matt Ellis
9ac7af2aa7
Merge pull request #499 from pulumi/misc-init-fixes
Fix up some pulumi init error cases
2017-10-30 15:18:41 -07:00
Chris Smith
defe0038ca
Add integration tests for pulumi CLI (#493)
This PR adds integration tests for exercising `pulumi init` and the `pulumi stack *` commands. The only functional change is merging in https://github.com/pulumi/pulumi/pull/492 , which I found while writing the tests and (of course 😁 ) wrote a regression for.

To do this I introduce a new test driver called `PulumiProgram`. This is different from the one found in the `testing/integration` package in that it doesn't try to prescribe a workflow. It really just deals in executing commands, and confirming strings are in the output.

While it doesn't hurt to have more tests for `pulumi`, my motivation here was so that I could reuse these to ensure I keep the same behavior for my pending PR that implements Cloud-enabled variants of some of these commands.
2017-10-30 15:17:13 -07:00
Matt Ellis
62efc1b441 Fix up some pulumi init error cases
- When looking for a `.pulumi` folder to reuse, only consider ones
  that have a `settings.json` in them.
- Make the error message when there is no repository a little more
  informative by telling someone to run `pulumi init`
2017-10-30 11:40:32 -07:00
Chris Smith
bd5e54d63e
Fix panic when using commands w/o Pulumi.yaml (#492)
Calls to `NewProjectWorkspace` will panic if there is no Pulumi.yaml found in the folder hierarchy. Simple repro:

```
git init
pulumi init
pulumi stack init x
```

`DetectPackage` will search until it gets to the `.pulumi` directory or finds a `Pulumi.yaml` file. In the case of the former, we pass "" to `LoadPackage` which then asserts because the file doesn't exist.

We now detect this condition and surface an error to the user. The error text is patterned after running a git command when there is no .git folder found:

fatal: Not a git repository (or any of the parent directories): .git
2017-10-28 18:07:03 -07:00
joeduffy
6d74b3ca27 Add minimal runtime verification test
This adds a minimal runtime verification test to our basic
test suite, to at least exercise the portions of the integration
test library that load up and parse checkpoint files.
2017-10-27 20:03:38 -07:00
joeduffy
7835305b82 Fix where integration tests look for checkpoints 2017-10-27 19:42:17 -07:00
Matt Ellis
3f1197ef84 Move .pulumi to root of a repository
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.

When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.

We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track are "owner" and "name" which map to information
we use on pulumi.com.

When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
2017-10-27 11:46:21 -07:00
Chris Smith
9b3dd54385 Enable 'pulumi stack init' to the Cloud (#480)
This PR enables `pulumi stack init` to work against the Pulumi Cloud. Of note, I'm using the approach described in https://github.com/pulumi/pulumi-service/issues/240. The command takes an optional `--cloud` parameter, but otherwise will use the "default cloud" for the target organization.

I also went back and revised `pulumi update` to do this as well. (Removing the `--cloud` parameter.)

Note that neither of the commands will work against `pulumi-service` head, as they require some API refactorings I'm working on right now.
2017-10-26 22:14:56 -07:00
CyrusNajmabadi
44d95666e7 Make it possible to do extra validation tests after applying an edit. (#476)
* Make it possible to do extra validation tests after applying an edit.

* Change name.
2017-10-26 16:01:28 -07:00
joeduffy
b06a708f11 Use Fprint, not Fprintf, so we don't format messages
I noticed in our Docker builds, we often end up seeing %!(MISSING)-style
messages, which were an indication we were trying to format
them.  The reason was the presence of %c's in the stream, and the
fact that we passed said messages to Fprintf.  We were careful in
all other layers to use the message on the "right hand side" of
any *f calls, but in this instance, we used Fprintf and passed the
message on the "left hand side", triggering formatting.  It turns
out we've already formatted everything by the time we get here,
so there's no need -- we can just use Fprint instead.
2017-10-26 10:30:30 -07:00
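A small, self-contained Go illustration of the pitfall this commit fixes, using an assumed example message that happens to contain a literal `%`:

```
package main

import (
	"fmt"
	"os"
)

func main() {
	// An already-formatted message that happens to contain a literal '%'.
	msg := "progress: 100% complete"

	// Deliberate misuse: passing the message as the format string makes fmt
	// re-interpret the '%', producing something like
	// "progress: 100%!c(MISSING)omplete".
	fmt.Fprintf(os.Stderr, msg)
	fmt.Fprintln(os.Stderr)

	// Fprint writes the already-formatted message verbatim.
	fmt.Fprint(os.Stdout, msg, "\n")
}
```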
Chris Smith
95062100f7 Enable pulumi update to target the Console (#461)
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).

As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.

We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow. 

Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
2017-10-25 10:46:05 -07:00
Matt Ellis
ade366544e Encrypt secrets in Pulumi.yaml
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.

The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).

Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).

In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration assoicated
with it.

Secure values show up as unencrypted config values inside the language
hosts and providers.
2017-10-24 16:48:12 -07:00
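The commit message doesn't spell out the exact cryptography, so the following Go sketch is only an assumed illustration of "encrypt a value with a key derived from a passphrase" (PBKDF2 + AES-GCM chosen arbitrarily here, not necessarily the scheme used):

```
package secretsketch

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"

	"golang.org/x/crypto/pbkdf2"
)

// encryptValue derives a 32-byte key from the passphrase and salt, then seals
// the plaintext with AES-GCM. Only values marked secret would reach this path.
func encryptValue(passphrase, plaintext string, salt []byte) (string, error) {
	key := pbkdf2.Key([]byte(passphrase), salt, 100000, 32, sha256.New)

	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	// Prepend the nonce so decryption can recover it later.
	sealed := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return base64.StdEncoding.EncodeToString(sealed), nil
}
```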
Pat Gavlin
76f6aceb29 Merge pull request #462 from pulumi/NoSignalTrap
Do not trap signals in `rpcutil.Serve`.
2017-10-24 15:01:22 -07:00
pat@pulumi.com
ce18c8293b Do not trap signals in rpcutil.Serve.
Trapping these signals hijacks the usual termination behavior for any
program that happens to link in the engine and perform an operation
that starts a gRPC server. These servers already provide a cancellation
mechanism via a `cancel` channel parameter; if the using program wants
to gracefully terminate these servers on some signal, it is responsible
for providing that behavior.

This also fixes a leak in which the goroutine responsible for waiting on
a server's signals and cancellation channel would never exit.
2017-10-24 14:35:59 -07:00
Matt Ellis
9840bf4ec8 Merge pull request #460 from pulumi/fix-453
Retain historical checkpoints
2017-10-24 12:31:31 -07:00
Matt Ellis
25b8111eea Retain historical checkpoints
When `PULUMI_RETAIN_CHECKPOINTS` is set in the environment, also write
the checkpoint file to <stack-name>.<ext>.<timestamp>.

This ensures we have historical information about every snapshot, which
would aid in debugging issues like #451. We set this to true for our
integration tests.

Fixes #453
2017-10-24 11:48:33 -07:00
joeduffy
3d3f778c3d Fix asset bugs; write more tests
This change fixes a couple bugs with assets:

* We weren't recursing into subdirectories in the new "path as
  archive" feature, which meant we missed most of the files.

* We need to make paths relative to the root of the archive
  directory itself, otherwise paths end up redundantly including
  the asset's root folder path.

* We need to clean the file paths before adding them to the
  archive asset map, otherwise they are inconsistent between the
  path, tar, tgz, and zip cases.

* Ignore directories when traversing zips, since they aren't
  included in the other formats.

* Tolerate io.EOF errors when reading the ZIP contents into blobs.

* Add test cases for the four different archive kinds.

This fixes pulumi/pulumi-aws#50.
2017-10-24 09:00:11 -07:00
Chris Smith
ede1595a6a Add more context information to assert. (#449) 2017-10-24 08:25:39 -07:00
joeduffy
c61bce3e41 Permit undefined in more places
The prior code was a little too aggressive in rejecting undefined
properties, because it assumed any occurrence indicated a resource
that was unavailable due to planning.  This is a by-product of our
relatively recent decision to flow undefineds freely during planning.

The problem is, it's entirely legitimate to have undefined values
deep down in JavaScript structures, entirely unrelated to resources
whose property values are unknown due to planning.

This change flows undefined more freely.  There really are no
negative consequences of doing so, and it avoids hitting some overly
aggressive assertion failures in some important scenarios.  Ideally
we would have a way to know statically whether something is a resource
property, and tighten up the assertions just to catch possible bugs
in the system, but because this is JavaScript, and all the assertions
are happening at runtime, we simply lack the necessary metadata to do so.
2017-10-23 16:02:28 -07:00
Pat Gavlin
236fd6bf76 Merge pull request #448 from pulumi/TwoPhaseSnapshot
Add two-phase snapshotting.
2017-10-23 10:27:30 -07:00
Pat Gavlin
cdbcc394dd PR feedback. 2017-10-23 10:11:09 -07:00
joeduffy
0248b6c309 Fix an err shadowing lint error 2017-10-23 05:55:34 -07:00
joeduffy
a7d99a0c80 Preserve Pulumi.yaml while applying edits
Now that config is stored in Pulumi.yaml, we need to mimic the behavior
around .pulumi/ while edits are applied.  This will ensure that config
values carry forward from the original program settings.

This fixes pulumi/pulumi-aws#48.
2017-10-23 05:27:26 -07:00
joeduffy
c6ee323ce1 Add some "still running..." messages to Pulumi tests 2017-10-22 18:54:29 -07:00
joeduffy
d20f043a3e Fix a few SHA1 comment typos (should be SHA256) 2017-10-22 18:30:42 -07:00
Joe Duffy
4a493292b1 Tolerate missing hashes 2017-10-22 15:54:44 -07:00
joeduffy
3d9dcb0942 Break the diag goroutine upon exit 2017-10-22 15:52:00 -07:00
joeduffy
500ea0b572 Fix diag channel errors
The event diagnostic goroutines could error out sometimes during
early program exits, due to a race between the goroutine writing to
the channel and the early exiting goroutine which closed the channel.
This change stops closing the channels entirely on the abrupt exit
paths, since it's not necessary and we want to exit immediately.
2017-10-22 15:22:15 -07:00
Joe Duffy
69f7f51375 Many asset improvements
This improves a few things about assets:

* Compute and store hashes as input properties, so that changes on
  disk are recognized and trigger updates (pulumi/pulumi#153).

* Issue explicit and prompt diagnostics when an asset is missing or
  of an unexpected kind, rather than failing late (pulumi/pulumi#156).

* Permit raw directories to be passed as archives, in addition to
  archive formats like tar, zip, etc. (pulumi/pulumi#240).

* Permit not only assets as elements of an archive's member list, but
  also other archives themselves (pulumi/pulumi#280).
2017-10-22 13:39:21 -07:00
Pat Gavlin
d22a42858f Add two-phase snapshotting.
The existing `SnapshotProvider` interface does not sufficiently lend
itself to reliable persistence of snapshot data. For example, consider
the following:
- The deployment engine creates a resource
- The snapshot provider fails to save the updated snapshot
In this scenario, we have no mechanism by which we can discover that the
existing snapshot (if any) does not reflect the actual state of the
resources managed by the stack, and future updates may operate
incorrectly. To address this, these changes split snapshotting into two
phases: the `Begin` phase and the `End` phase. A provider that needs to
be robust against the scenario described above (or any other scenario
that allows for a mutation to the state of the stack that is not
persisted) can use the `Begin` phase to persist the fact that there are
outstanding mutations to the stack. It would then use the `End` phase to
persist the updated snapshot and indicate that the mutation is no longer
outstanding. These steps are somewhat analogous to the prepare and
commit phases of two-phase commit.
2017-10-21 09:31:01 -07:00
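A minimal Go sketch of the two phases described above; the interface and method shapes beyond the `Begin`/`End` names are assumptions.

```
package snapshotprovider

// Snapshot stands in for the engine's snapshot type.
type Snapshot struct {
	// resource state elided
}

// TwoPhaseSnapshotProvider records that a mutation is in flight before it is
// performed, and persists the result (clearing the in-flight marker) after,
// analogous to the prepare and commit phases of two-phase commit.
type TwoPhaseSnapshotProvider interface {
	// Begin durably records that the stack has outstanding mutations.
	Begin() error
	// End persists the updated snapshot and marks the mutation complete.
	End(snap *Snapshot) error
}
```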
joeduffy
37c7a955d7 Optionally emit stack traces for errors
If --logtostderr is passed, and an unhandled error occurs that
was produced by the github.com/pkg/errors package, we will now
emit the stack trace.  Much easier for debugging purposes.
2017-10-20 19:26:18 -07:00
joeduffy
9e20f15adf Fix CLI hangs when errors occur
The change to use a Goroutine for pumping output causes a hang
when an error occurs.  This is because we unconditionally block
on the <-done channel, even though the failure means the done
will actually never occur.  This changes the logic to only wait
on the channel if we successfully began the operation in question.
2017-10-20 17:28:35 -07:00
Matt Ellis
a749ac1102 Use go-yaml directly
Instead of doing the logic to see if a type has YAML tags and then
dispatching based on that to use either the direct go-yaml marshaller
or the one that works in terms of JSON tags, let's just say that we
always add YAML tags as well, and use go-yaml directly.
2017-10-20 14:01:37 -07:00
Matt Ellis
78dc657dbb Fix whitespace issues 2017-10-20 13:30:07 -07:00
Matt Ellis
e361098941 Support global configuration
Previously, config information was stored per stack. With this change,
we now allow config values which apply to every stack a program may
target.

When passed without the `-s <stack>` argument, `pulumi config`
operates on the "global" configuration. Stack specific information can
be modified by passing an explicit stack.

Stack specific configuration overwrites global configuration.

Consider the following Pulumi.yaml:

```
name: hello-world
runtime: nodejs
description: a hello world program
config:
  hello-world:config:message: Hello, from Pulumi
stacks:
  production:
    config:
      hello-world:config:message: Hello, from Production
```

This program contains a single configuration value,
"hello-world:config:message" which has the value "Hello, from Pulumi"
when the program is activated into any stack except for "production"
where the value is "Hello, from Production".
2017-10-20 13:30:07 -07:00
Matt Ellis
9cf9428638 Save config information in Pulumi.yaml
Instead of having information stored in the checkpoint file, save it
in the Pulumi.yaml file. We introduce a new section `stacks` which
holds information specific to a stack.

Next, we'll support adding configuration information that applies
to *all* stacks for a Program and allow the stack specific config to
overwrite or augment it.
2017-10-20 13:30:07 -07:00
Matt Ellis
906f191e45 Use go-yaml when marshalled type has yaml tags
By using go-yaml directly, the properties in the document will match
the order of the fields in the corresponding Go type.
2017-10-20 13:23:31 -07:00
Matt Ellis
9994c9c7b9 Add yaml tags to pack.Package
This will allow us to marshal and unmarshal the structure using the
go-yaml package directly, instead of the current way we handle YAML
which is to treat it as JSON and use the JSON marshaller behind the
scenes.

By having the explicit tags and using go-yaml directly, we can ensure
the order of the resulting document's properties matches the
Go type instead of just being lexicographically sorted.

Long term, we probably want to stop using go-yaml or other packages in
favor of parsing the YAML file into some sort of DOM that we can use
to retain comments and other formatting in the file.
2017-10-20 13:23:31 -07:00
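For illustration, a tiny Go sketch of the kind of tagging described above, with a simplified stand-in for `pack.Package` (field set assumed):

```
package packsketch

// Package is a simplified stand-in for pack.Package; the field set here is
// assumed for illustration. Explicit yaml tags let go-yaml emit the fields in
// declaration order instead of sorting keys lexicographically.
type Package struct {
	Name        string `json:"name" yaml:"name"`
	Runtime     string `json:"runtime" yaml:"runtime"`
	Description string `json:"description,omitempty" yaml:"description,omitempty"`
}
```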
Matt Ellis
c856c5487d Add Load and Save to pack.Package and adopt them 2017-10-20 13:23:31 -07:00
Chris Smith
d5846d7e16 Add login and logout commands. (#437)
This PR adds `login` and `logout` commands to the `pulumi` CLI.

Rather than requiring a user name and password like before, we instead require users to login with GitHub credentials on the Pulumi Console website. (You can do this now via https://beta.moolumi.io.) Once there, the account page will show you an "access token" you can use to authenticate against the CLI.

Upon successful login, the user's credentials will be stored in `~/.pulumi/credentials.json`. This credentials file will be automatically read with the credentials added to every call to `PulumiRESTCall`.
2017-10-19 15:22:07 -07:00
Pat Gavlin
bc4f0b1935 Merge pull request #440 from pulumi/TickDeleteInPreview
Set `old.Delete` when previewing a `CreateReplace` step.
2017-10-19 13:34:55 -07:00
pat@pulumi.com
20e71fa5c4 Set old.Delete when previewing a CreateReplace step.
This is required to prevent an assertion when skipping a `Delete` step.
2017-10-19 13:08:17 -07:00
pat@pulumi.com
01ad962935 Save snapshots after each step.
We should probably be more clever about this in the future (i.e. report
only the deltas rather than the entire snapshot).
2017-10-19 10:57:48 -07:00
Pat Gavlin
9895e8006f Merge pull request #434 from pulumi/PendingDeletes
Track resources that are pending deletion in checkpoints.
2017-10-19 10:57:00 -07:00
pat@pulumi.com
23864b9459 PR feedback. 2017-10-19 10:34:23 -07:00
joeduffy
599ca8ea43 Add accessors to fetch the Pulumi project and stack names
This change adds functions, `pulumi.getProject()` and `pulumi.getStack()`,
to fetch the names of the project and stack, respectively.  These can be
handy in generating names, specializing areas of the code, etc.

This fixes pulumi/pulumi#429.
2017-10-19 08:26:57 -07:00
pat@pulumi.com
6b66437fae Track resources that are pending deletion in checkpoints.
During the course of a `pulumi update`, it is possible for a resource to
become slated for deletion. In the case that this deletion is part of a
replacement, another resource with the same URN as the to-be-deleted
resource will have been created earlier. If the `update` fails after the
replacement resource is created but before the original resource has been
deleted, the snapshot must capture that the original resource still exists
and should be deleted in a future update without losing track of the order
in which the deletion must occur relative to other deletes. Currently, we
are unable to track this information because our checkpoints require
that no two resources have the same URN.

To fix this, these changes introduce to the update engine the notion of a
resource that is pending deletion and change checkpoint serialization to
use an array of resources rather than a map. The meaning of the former is
straightforward: a resource that is pending deletion should be deleted
during the next update.

This is a fairly major breaking change to our checkpoint files, as the
map of resources is no more. Happily, though, it makes our checkpoint
files a bit more "obvious" to any tooling that might want to grovel
or rewrite them.

Fixes #432, #387.
2017-10-18 17:09:00 -07:00
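A rough Go sketch of the serialized shape this implies: an ordered array of resources (so duplicate URNs can coexist) with a marker for pending deletion. Field and type names here are illustrative assumptions, not the real checkpoint format.

```
package checkpointsketch

// URN identifies a resource; two entries in the list may now share one.
type URN string

// Resource is one entry in the checkpoint's ordered resource list.
type Resource struct {
	URN    URN               `json:"urn"`
	Delete bool              `json:"delete,omitempty"` // pending deletion on the next update
	Inputs map[string]string `json:"inputs,omitempty"`
}

// Checkpoint stores resources as an ordered array rather than a map keyed by
// URN, so a replacement resource and its to-be-deleted predecessor coexist.
type Checkpoint struct {
	Resources []Resource `json:"resources"`
}
```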
Matt Ellis
b04732f76b Rename env folder to stacks 2017-10-17 17:44:53 -07:00
Pat Gavlin
dacff3db48 Merge pull request #425 from pulumi/DrainStreamsPluginLoadFailure
Drain std{out,err} when a plugin fails to load.
2017-10-16 23:32:02 -07:00
pat@pulumi.com
9453f86c2e Implement dynamic resources.
A dynamic resource is a resource whose provider is implemented alongside
the resource itself. This provider may close over and use other
resources in the implementation of its CRUD operations. The provider
itself must be stateless, as each CRUD operation for a particular
dynamic resource type may use an independent instance of the provider.
Changes to the definition of a resource's provider result in replacement
of the resource itself (rather than a simple update), as this allows the
old provider definition to delete the old resource and the new provider
definition to create an appropriate replacement.
2017-10-16 23:06:53 -07:00
Pat Gavlin
546612a354 Drain std{out,err} when a plugin fails to load.
If a plugin fails to load after we've set up the goroutines that copy
from its std{out,err} streams, then those goroutines can end up writing
to a closed event channel. This change ensures that we properly drain
those streams in this case.
2017-10-16 21:38:11 -07:00
Matt Ellis
8fda24a776 Write a .yarnrc file when running tests to enable mutex
In #411 we started to run tests in parallel again. To support that, we
added a .yarnrc file in the root of our repository to pass --mutex
network to all yarn invocations, because tests may run yarn commands
concurrently with one another and yarn is not safe to run
concurrently.

However, when we run the integration tests, we actually copy them into
a folder under `/tmp` and so yarn's logic to walk up the directory
tree to find a `.yarnrc` will not see this `.yarnrc` and we're back
where we started.

Have the testing package explicitly write this file, which should
prevent this issue from happening in the future.
2017-10-16 16:08:50 -07:00
Matt Ellis
996ac8872d Merge pull request #420 from pulumi/rename-env-to-stack
Use `Stack` over `Environment` to describe a deployment target
2017-10-16 14:24:11 -07:00
pat@pulumi.com
bdbb1b59d4 Ensure a plugin's std{out,err} streams are drained in Close().
Not doing so can cause panics, as the goroutines we use to copy these
streams can end up writing to a closed channel.
2017-10-16 13:44:37 -07:00
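A simplified Go sketch of the draining pattern these plugin fixes describe, using hypothetical types: the goroutines copying std{out,err} must finish before the channel they write to is closed.

```
package pluginsketch

import (
	"bufio"
	"io"
	"sync"
)

// plugin is a hypothetical stand-in; only the draining pattern matters here.
type plugin struct {
	events chan string    // lines of plugin output destined for display
	wg     sync.WaitGroup // tracks the std{out,err} copying goroutines
}

func newPlugin(stdout, stderr io.Reader) *plugin {
	p := &plugin{events: make(chan string)}
	p.wg.Add(2)
	go p.pump(stdout)
	go p.pump(stderr)
	return p
}

// pump copies one stream into the event channel, line by line.
func (p *plugin) pump(stream io.Reader) {
	defer p.wg.Done()
	scanner := bufio.NewScanner(stream)
	for scanner.Scan() {
		p.events <- scanner.Text()
	}
}

// Close waits until both streams are fully drained (the copy goroutines have
// exited) before closing the event channel, so the goroutines can never write
// to a closed channel. A consumer must keep reading p.events while we wait.
func (p *plugin) Close() {
	p.wg.Wait()
	close(p.events)
}
```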
Matt Ellis
22c9e0471c Use Stack over Environment to describe a deployment target
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.

From a user's point of view, there are a few changes:

1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).

On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder)
2017-10-16 13:04:20 -07:00
joeduffy
301739c6b5 Add auto-parenting
This changes a few things about "components":

* Rename what was previously ExternalResource to CustomResource,
  and all of the related fields and parameters that this implies.
  This just seems like a much nicer and expected name for what
  these represent.  I realize I am stealing a name we had thought
  about using elsewhere, but this seems like an appropriate use.

* Introduce ComponentResource, to make initializing resources
  that merely aggregate other resources easier to do correctly.

* Add a withParent and parentScope concept to Resource, to make
  allocating children less error-prone.  Now there's no need to
  explicitly adopt children as they are allocated; instead, any
  children allocated as part of the withParent callback will
  auto-parent to the resource provided.  This is used by
  ComponentResource's initialization function to make initialization
  easier, including the distinction between inputs and outputs.
2017-10-15 04:38:26 -07:00
joeduffy
fbfca58a3f Implement components
This change implements core support for "components" in the Pulumi
Fabric.  This work is described further in pulumi/pulumi#340, where
we are still discussing some of the finer points.

In a nutshell, resources no longer imply external providers.  It's
entirely possible to have a resource that logically represents
something but without having a physical manifestation that needs to
be tracked and managed by our typical CRUD operations.

For example, the aws/serverless/Function helper is one such type.
It aggregates Lambda-related resources and exposes a nice interface.
All of the Pulumi Cloud Framework resources are also examples.

To indicate that a resource does participate in the usual CRUD resource
provider, it simply derives from ExternalResource instead of Resource.

All resources now have the ability to adopt children.  This is purely
a metadata/tagging thing, and will help us roll up displays, provide
attribution to the developer, and even hide aspects of the resource
graph as appropriate (e.g., when they are implementation details).

Our use of this capability is ultra limited right now; in fact, the
only place we display children is in the CLI output.  For instance:

    + aws:serverless:Function: (create)
      [urn=urn:pulumi:demo::serverless::aws:serverless:Function::mylambda]
      => urn:pulumi:demo::serverless::aws:iam/role:Role::mylambda-iamrole
      => urn:pulumi:demo::serverless::aws:iam/rolePolicyAttachment:RolePolicyAttachment::mylambda-iampolicy-0
      => urn:pulumi:demo::serverless::aws:lambda/function:Function::mylambda

The bit indicating whether a resource is external or not is tracked
in the resulting checkpoint file, along with any of its children.
2017-10-14 18:30:59 -07:00
Pat Gavlin
aaf46c12f4 Narrow destroy events as well. 2017-10-13 10:12:09 -07:00
pat@pulumi.com
107f667b87 Accept a send-only event channel in Deploy and Preview.
Just what it says on the tin.
2017-10-12 14:16:44 -07:00
Matt Ellis
2676e8bad1 Split apart EnvironmentProvider interface 2017-10-11 13:23:44 -07:00
Matt Ellis
6f4537010c Fix typo 2017-10-10 12:16:41 -07:00
Matt Ellis
377eb61e32 Always emit debug events into the stream 2017-10-09 18:27:05 -07:00
Matt Ellis
7587bcd7ec Have engine emit "events" instead of writing to streams
Previously, the engine would write to io.Writers to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).

While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.

As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).

The events we do emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
2017-10-09 18:24:56 -07:00
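A small Go sketch of the calling convention described above, with a hypothetical `Event` type and operation signature: the caller supplies the channel and reads from it until it is closed.

```
package enginesketch

import "fmt"

// Event is a stand-in for the engine's strongly typed event.
type Event struct {
	Message string
}

// fakeDeploy stands in for an engine operation: it writes events to the
// channel during the operation and closes it when the operation is complete.
func fakeDeploy(events chan<- Event) {
	defer close(events)
	events <- Event{Message: "creating resource A"}
	events <- Event{Message: "creating resource B"}
}

// runAndRender supplies the channel and reads from it until it is closed,
// deciding for itself how each event should be displayed.
func runAndRender() {
	events := make(chan Event)
	go fakeDeploy(events)
	for evt := range events {
		fmt.Println("info:", evt.Message)
	}
}
```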
Matt Ellis
5fd0ada303 Remove Checkpoint return value from GetEnvironment 2017-10-09 18:21:55 -07:00
Matt Ellis
e7e4e75af3 Don't examine the Checkpoint in the CLI
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
2017-10-09 18:21:55 -07:00
Matt Ellis
6e8185884e Remove GetEnvironmentInfo from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
7e4a1f515b Remove GetEnvironments from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
bd92f8eaed Remove RemoveEnv from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
7fdbdb2152 Remove InitEnv from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
76663d30fa Remove SetConfig from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
02a33a4384 Remove DeleteConfig from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
242eb929fb Remove GetConfiguration from Engine 2017-10-09 18:21:55 -07:00
Matt Ellis
acd35b9916 Remove Checkpoint from planContext
Nothing actually consumed the Checkpoint property of the planContext,
so just remove it.
2017-10-09 18:21:55 -07:00
Matt Ellis
6bab1dbad4 Pass $(YARNFLAGS) to all yarn invocations
This lets you set things like YARNFLAGS=--offline, which is helpful
when you are on an airplane. Yarn can still pick up stuff that you had
pulled down recently from its local cache.
2017-10-09 18:21:55 -07:00
joeduffy
7e30dde8f4 Update plan test to new interface 2017-10-04 08:30:50 -04:00
joeduffy
b7576b9b14 Add a notion of stable properties
This change adds the capability for a resource provider to indicate
that, where an action carried out in response to a diff, a certain set
of properties would be "stable"; that is to say, they are guaranteed
not to change.  As a result, properties may be resolved to their final
values during previewing, avoiding erroneous cascading impacts.

This avoids the ever-annoying situation I keep running into when demoing:
when adding or removing an ingress rule to a security group, we ripple
the impact through the instance, and claim it must be replaced, because
that instance depends on the security group via its name.  Well, the name
is a great example of a stable property, in that it will never change, and
so this is truly unfortunate and always adds uncertainty into the demos.
Particularly since the actual update doesn't need to perform replacements.

This resolves pulumi/pulumi#330.
2017-10-04 08:22:21 -04:00
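As a sketch of the idea (names assumed, not the real provider API): alongside the diff, a provider can report properties that are guaranteed not to change, so previews can resolve them to final values.

```
package diffsketch

// DiffResult is an illustrative shape for what a provider might return from a
// diff: which property changes force replacement, and which properties are
// stable, i.e. guaranteed not to change as a result of the action.
type DiffResult struct {
	ReplaceKeys []string // changes to these properties require replacement
	StableKeys  []string // these properties (e.g. a name) will keep their values
}
```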
pat@pulumi.com
3121284117 Remove incremental checkpointing.
This is currently broken in the case of resource replacements, in which
case we may have multiple resources with the same URN.
2017-10-03 16:21:53 -07:00
pat@pulumi.com
f78aa02a54 Report SaveEnvironment errors.
Furthermore, only save the environment in `deployActions.Run`. Fixes #388.
2017-10-03 15:18:08 -07:00
pat@pulumi.com
15be0a8b83 Simplify deploy.StepActions.
Now that `deploy.Step.Pre` is no more, we can simplify the `StepActions`
interface down to a single method, `Run`, which performs all actions
associated with the step. This feeds into #388.
2017-10-03 11:10:06 -07:00
pat@pulumi.com
252fd8e6bb Remove deploy.Step.Pre.
As per @joeduffy, this is an artifact of the prior runtime model.
2017-10-03 10:03:05 -07:00
Matt Ellis
daa083636c Rename envCmdInfo to planContext
This name is still not great, but the old name was really, really bad.
2017-10-02 18:03:07 -07:00
Pat Gavlin
5f203e7f15 Merge pull request #384 from pulumi/IncrementalCheckpoint
Incrementally update the checkpoint file during `pulumi update`.
2017-10-02 17:25:24 -07:00
pat@pulumi.com
967414117a PR feedback 2017-10-02 17:18:16 -07:00
Matt Ellis
93ab134bbb Have the CLI keep track of the current environment
Previously, the engine was concerned with maintaining information about
the currently active environment. Now, the CLI is in charge of
this. As part of this change, the engine can now assume that every
environment has a non-empty name (and I've added asserts on the
entrypoints of the engine API to ensure that any consumer of the
engine passes a non-empty environment name).
2017-10-02 16:57:41 -07:00
Matt Ellis
ae8f2a84c2 Merge pull request #385 from pulumi/pulumi-service-interface
More pulumi engine refactoring
2017-10-02 16:16:55 -07:00
pat@pulumi.com
11114bd243 Incrementally update the checkpoint file during pulumi update.
Simply write the current snapshot after each step. We should probably be
more clever about this in the future (i.e. write only the changes rather
than the entire file).
2017-10-02 15:53:13 -07:00
Matt Ellis
2a16a198f6 Move environment info printing into the CLI
The engine now provides a strongly typed view of an environment and
the CLI decides how to display it
2017-10-02 15:50:08 -07:00
Matt Ellis
a704750714 Remove Workspace dependency on diag.Sink
It was only being used in two cases, to issue warnings when the file
system casing did not match the expected casing. I think it's probably
better if we don't try to be smart here and just treat these cases the
same as if the file had not existed. Removing the dependency on diag
also makes it a little clearer that this stuff should be pulled out
from the engine.
2017-10-02 15:25:22 -07:00
Matt Ellis
d29f6fc4e5 Use tokens.QName instead of string as the type for environment
Internally, the engine deals with tokens.QName and not raw
strings. Push that up to the API boundary
2017-10-02 15:14:55 -07:00
Pat Gavlin
ff2a3fa242 Replace Plan.Apply with planResult.Walk. (#383)
`deploy.Plan.Apply` was only consumed by the engine, and seemed to be in
the wrong place given the API exported by the rest of `Plan` (i.e.
`Plan.Start` + `PlanIterator`). Furthermore, we were missing a reasonable
opportunity to share code between `update` and `preview`, both of which
need to walk the plan. These changes move the plan walk into `package engine`
as `planResult.Walk` and replace the `Progress` interface with a new interface,
`StepActions`, which subsumes the functionality of the former and adds support
for implementation-specific step execution. `planResult.Walk` is then
consumed by both `Engine.Deploy` and `Engine.PrintPlan`.
2017-10-02 14:26:51 -07:00
Matt Ellis
aa6c6d6617 Move some configuration logic into the CLI
The CLI is now responsible for actually displaying information and the
engine is only concerned with getting the configuration. As part of
this change, I've removed the "display a single configuration value" API
from the engine. It can now be done in terms of getting all the config
for an environment and selecting the value the user is interested in.
2017-10-02 13:35:39 -07:00
Matt Ellis
7900e2edb1 Move environment printing back into the CLI
Previously the engine was concerned with displaying information about
the environment. Now the engine returns an environment info object
which the CLI uses to display environment information.
2017-10-02 13:34:33 -07:00
Matt Ellis
c022db9285 Require environment name on deployment APIs
Deployments always need to be done in the context of some environment,
so let's make the argument explicit instead of it coming in the
property bag
2017-10-02 11:14:27 -07:00
joeduffy
ac2dbc80fa Add an Invoke RPC method on ResourceProvider
This change enables us to make progress on exposing data sources
(see pulumi/pulumi-terraform#29).  The idea is to have an Invoke
function that simply takes a function token and arguments, performs
the function lookup and invocation, and then returns a return value.
2017-09-30 14:53:27 -04:00
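A rough Go sketch of the shape described above (types simplified; the actual RPC definitions differ): a provider method that takes a function token plus arguments and returns the result.

```
package providersketch

// PropertyMap is a simplified stand-in for the engine's property bag.
type PropertyMap map[string]interface{}

// ResourceProvider gains an Invoke method: look up the function named by the
// token, call it with the given arguments, and return its results.
type ResourceProvider interface {
	Invoke(tok string, args PropertyMap) (PropertyMap, error)
}
```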
pat@pulumi.com
28d02e232b Fix #372.
Print "modified" rather than "modifyd". This introduces a new method,
`resource.StepOp.PastTense()`, which returns the past tense description
of the operation.
2017-09-28 09:28:36 -07:00
Cyrus Najmabadi
bdd819379e Extract out test helper functionality for use from pulumi-cloud. 2017-09-25 14:14:09 -07:00
Joe Duffy
b4646db39b Merge branch 'master' into RenameVerbs 2017-09-23 11:31:29 -07:00
joeduffy
6c08f59b9f Pretty print deployment info map
The current `pulumi env` command just prints the deployment's info map
as-is, which leads to some ugly Go internal map printing output.  Instead
of doing that, let's show a JSON-like map that is properly pretty printed.
2017-09-23 05:54:00 -07:00
joeduffy
141a112950 Improve output formatting
This change improves our output formatting by generally adding
fewer prefixes.  As shown in pulumi/pulumi#359, we were being
excessively verbose in many places, including prefixing every
console.out with "langhost[nodejs].stdout: ", displaying full
stack traces for simple errors like missing configuration, etc.

Overall, this change includes the following:

* Don't prefix stdout and stderr output from the program, other
  than the standard "info:" prefix.  I experimented with various
  schemes here, but they all felt gratuitous.  Simply emitting
  the output seems fine, especially as it's closer to what would
  happen if you just ran the program under node.

* Do NOT make writes to stderr fail the plan/deploy.  Previously
  we assumed that any console.errors, for instance, meant that
  the overall program should fail.  This simply isn't how stderr
  is treated generally and meant you couldn't use certain
  logging techniques and libraries, among other things.

* Do make sure that stderr writes in the program end up going to
  stderr in the Pulumi CLI output, however, so that redirection
  works as it should.  This required a new Infoerr log level.

* Make a small fix to the planning logic so we don't attempt to
  print the summary if an error occurs.

* Finally, add a new error type, RunError, that when thrown and
  uncaught does not result in a full stack trace being printed.
  Anyone can use this, however, we currently use it for config
  errors so that we can terminate with a pretty error message,
  rather than the monstrosity shown in pulumi/pulumi#359.
2017-09-23 05:20:11 -07:00
pat@pulumi.com
69341fa7c8 push is dead; long live update.
After discussion with Joe and Luke, we've decided to use `update` instead
of `push` as it more intuitively fits the operation being performed.
2017-09-22 17:23:40 -07:00
pat@pulumi.com
597db186ec Renames: plan -> preview, deploy -> push.
Part of #353.

These changes also remove all command aliases from the `pulumi` command.
2017-09-22 15:28:03 -07:00