Commit graph

1934 commits

Author SHA1 Message Date
joeduffy
86f97de7eb Merge root stack changes with parenting 2017-11-26 08:14:01 -08:00
joeduffy
a2ae4accf4 Switch to parent pointers; display components nicely
This change switches from child lists to parent pointers in the way
resource ancestries are represented.  This cleans up a fair bit
of the old parenting logic, including any notion of ambient parent
scopes (and will notably address pulumi/pulumi#435).

This lets us show a nicer parent/child display in the output when
doing planning and updating.  For instance, here is an update of
a lambda's text, which is logically part of a cloud timer:

    * cloud:timer:Timer: (same)
          [urn=urn:pulumi:malta::lm-cloud:☁️timer:Timer::lm-cts-malta-job-CleanSnapshots]
        * cloud:function:Function: (same)
              [urn=urn:pulumi:malta::lm-cloud:☁️function:Function::lm-cts-malta-job-CleanSnapshots]
            * aws:serverless:Function: (same)
                  [urn=urn:pulumi:malta::lm-cloud::aws:serverless:Function::lm-cts-malta-job-CleanSnapshots]
                ~ aws:lambda/function:Function: (modify)
                      [id=lm-cts-malta-job-CleanSnapshots-fee4f3bf41280741]
                      [urn=urn:pulumi:malta::lm-cloud::aws:lambda/function:Function::lm-cts-malta-job-CleanSnapshots]
                    - code            : archive(assets:2092f44) {
                        // etc etc etc

Note that we still get walls of text, but this will be actually
quite nice when combined with pulumi/pulumi#454.

I've also suppressed printing properties that didn't change during
updates when --detailed was not passed, as well as empty strings and
zero-length arrays (since TF uses these as defaults in many places,
and including them makes creation and deletion quite verbose).

Note that this is a far cry from everything we can possibly do
here as part of pulumi/pulumi#340 (and even pulumi/pulumi#417).
But it's a good start towards taming some of our output spew.
2017-11-26 08:14:01 -08:00
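The parent-pointer representation can be sketched as follows; the types and field names are hypothetical stand-ins for the engine's actual resource state, but they show how a display like the one above can be derived just by walking parent URNs:

```go
package main

import "fmt"

type URN string

// Resource records the URN of its parent rather than a list of children.
type Resource struct {
	URN    URN
	Type   string
	Parent URN // empty means this resource is a root
}

// depth counts parent hops so the CLI can indent children under their parents.
func depth(r Resource, byURN map[URN]Resource) int {
	d := 0
	for r.Parent != "" {
		r = byURN[r.Parent]
		d++
	}
	return d
}

func main() {
	timer := Resource{URN: "urn:...:Timer", Type: "cloud:timer:Timer"}
	fn := Resource{URN: "urn:...:Function", Type: "cloud:function:Function", Parent: timer.URN}
	byURN := map[URN]Resource{timer.URN: timer, fn.URN: fn}
	for _, r := range []Resource{timer, fn} {
		fmt.Printf("%*s* %s: (same)\n", depth(r, byURN)*4, "", r.Type)
	}
}
```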
Pat Gavlin
be8d604c96
Merge pull request #603 from pulumi/DebugUpdates
Optionally enable debug output in integration tests.
2017-11-24 17:00:45 -08:00
Pat Gavlin
ea77c396ff
Merge pull request #604 from pulumi/NoLint
Add a few `gas` exceptions.
2017-11-24 16:28:53 -08:00
Pat Gavlin
d72b85c90b Add a few gas exceptions.
The first exception relates to how we launch plugins. Plugin paths are
calculated using a well-known set of rules; this makes `gas` suspicious
due to the need to use a variable to store the path of the plugin.

The second and third are in test code and aren't terribly concerning.
The latter exception asks `gas` to ignore the access key we hard-code
into the integration tests for our Pulumi test account.

The fourth exception allows us to use more permissive permissions for
the `.pulumi` directory than `gas` would prefer. We use `755`; `gas`
wants `700` or stricter. `755` is the default for `mkdir` and `.git` and
so seems like a reasonable choice for us.
2017-11-24 16:14:43 -08:00
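For reference, `gas` honors inline `#nosec` comments, which is how exceptions like these are typically expressed. A hedged sketch (the paths and helpers are illustrative, not the actual code):

```go
package host

import (
	"os"
	"os/exec"
)

func launchPlugin(path string) error {
	// The plugin path is computed from a well-known set of rules, so executing
	// a variable path is intentional here.
	cmd := exec.Command(path) // #nosec
	return cmd.Start()
}

func ensureHomeDir(dir string) error {
	// 0755 matches the default for `mkdir` and `.git`, even though gas would
	// prefer 0700 or stricter.
	return os.MkdirAll(dir, 0755) // #nosec
}
```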
Pat Gavlin
503e920198 Optionally enable debug output in integration tests.
By default, debugging events are not displayed by `pulumi`; the `-d`
flag must be provided to enable their display. This is necessary e.g.
to view debugging output from Terraform when TF logging is enabled.
These changes add the option to display debug output from `pulumi` to
the integration test framework.

These changes also contain a small fix for the display of component
children.
2017-11-24 15:21:43 -08:00
Matt Ellis
2f985de9e4 Add developer-only pulumi archive command
When working on the `.pulumiignore` support, I added a small command
that would just dump the file we sent to the service (without actually
talking to the service) so I could do some measurements on archive
sizes.

I mentioned this to Chris who said he had hacked up a similar thing
for something he was working on, so it felt worthwhile to check this
command in (hidden behind a flag) to share the implementation.

This command is hidden behind an environment variable, so customers
will not see it unless they opt in.
2017-11-21 16:02:34 -08:00
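A sketch of what gating a developer-only command behind an environment variable looks like with cobra; the variable name and wiring below are hypothetical:

```go
package cmd

import (
	"os"

	"github.com/spf13/cobra"
)

func newArchiveCommand() *cobra.Command {
	return &cobra.Command{
		Use:    "archive <path>",
		Short:  "(developer only) Write out the archive that would be uploaded",
		Hidden: true, // keep it out of `pulumi --help`
		RunE: func(cmd *cobra.Command, args []string) error {
			// ... write the program archive to the given path ...
			return nil
		},
	}
}

func maybeRegisterDevCommands(root *cobra.Command) {
	// Only expose the command when a developer explicitly opts in.
	if os.Getenv("PULUMI_DEV") != "" { // hypothetical variable name
		root.AddCommand(newArchiveCommand())
	}
}
```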
Matt Ellis
f953794363 Support .pulumiignore
When deploying a project via the Pulumi.com service, we have to upload
the entire "context" of your project to Pulumi.com. The context of the
program is all files in the directory tree rooted by the `Pulumi.yaml`
file, which will often contain stuff we don't want to upload, but
previously we had no control over what would be uploaded (and so folks
would do hacky things like delete folders before running `pulumi
update`).

This change adds support for `.pulumiignore` files which should behave
like `.gitignore`. In addition, we were not previously compressing
files when we added them to the zip archive we uploaded; now we do.

By default, every .pulumiignore file is treated as if it had an
exclusion for `.git/` at the top of the file (users can override this
by adding an explicit `!.git/` to their file) since it is very
unlikely for there to ever be a reason to upload the .git folder to
the service.

Fixes pulumi/pulumi-service#122
2017-11-21 12:09:18 -08:00
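A simplified sketch of the two mechanical changes described above: compressing entries as they are added to the zip archive, and excluding `.git/` by default as if every `.pulumiignore` began with that pattern. Real `.gitignore`-style pattern matching is elided:

```go
package archive

import (
	"archive/zip"
	"io"
	"os"
	"path"
	"strings"
)

// excluded reports whether a relative path should be skipped. The implicit
// ".git/" rule applies unless the user's .pulumiignore re-includes it.
func excluded(rel string, userReincludesGit bool) bool {
	return !userReincludesGit && (rel == ".git" || strings.HasPrefix(rel, ".git/"))
}

func addFile(w *zip.Writer, rel, fullPath string) error {
	// Use Deflate so entries are actually compressed, not just stored.
	hdr := &zip.FileHeader{Name: path.Clean(rel), Method: zip.Deflate}
	dst, err := w.CreateHeader(hdr)
	if err != nil {
		return err
	}
	src, err := os.Open(fullPath)
	if err != nil {
		return err
	}
	defer src.Close()
	_, err = io.Copy(dst, src)
	return err
}
```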
Matt Ellis
e9d13dd40f Fix coverage build 2017-11-20 11:49:44 -08:00
CyrusNajmabadi
269004afb4
Show a nicer diff of our serialized functions when doing a 'pulumi update'
Also refactor and clean up a lot of the diff printing code.
2017-11-20 11:39:49 -08:00
joeduffy
8e01135572 Log the project and stack names 2017-11-19 10:16:47 -08:00
joeduffy
a591775409 Add a missing copyright header 2017-11-19 08:08:30 -08:00
joeduffy
5c4c7064ba Fix a hanging ":" in the output 2017-11-19 08:08:30 -08:00
Matt Ellis
9ee15c277f Fix publishing scripts to include package.json
This is needed because install.sh will run yarn install at install
time, so we need the dependency information this provides.
2017-11-17 15:46:48 -08:00
Luke Hoban
96e4b74b15
Support for stack outputs (#581)
Adds support for top-level exports in the main script of a Pulumi Program to be captured as stack-level output properties.

This creates a new `pulumi:pulumi:Stack` component as the root of the resource tree in all Pulumi programs.  That resource has properties for each top-level export in the Node.js script.

Running `pulumi stack` will display the current value of these outputs.
2017-11-17 15:22:41 -08:00
joeduffy
d840a86a0a Fix outdated config error message 2017-11-17 08:53:58 -08:00
Matt Ellis
46c35281ab Adopt new makefile system
See https://github.com/pulumi/home/pull/56 for more details.
2017-11-16 23:56:29 -08:00
Joe Duffy
df7114aca2
Merge pull request #578 from pulumi/FixSnapshot
Fix plan snapshotting.
2017-11-16 13:11:58 -08:00
Matt Ellis
5fc226a952 Change configuration verbs for getting and setting values
A handful of UX improvements for config:

 - `pulumi config ls` has been removed. Now, `pulumi config` with no
   arguments prints the table of configuration values for a stack and
   a new command `pulumi config get <key>` prints the value for a
   single configuration key (useful for scripting).
 - `pulumi config text` and `pulumi config secret` have been merged
   into a single command `pulumi config set`. The flag `--secret` can
   be used to encrypt the value we store (like `pulumi config secret`
   used to do).
 - To make it obvious that setting a value with `pulumi config set` is
   in plain text, we now echo a message back to the user saying we
   added the configuration value in plaintext.

Fixes #552
2017-11-16 11:39:28 -08:00
Joe Duffy
77460a7dc0
Plumb the project name correctly (#583)
This change fixes getProject to return the project name, as
originally intended.  (One line was missing.)

It also adds an integration test for this.

Fixes pulumi/pulumi#580.
2017-11-16 08:15:56 -08:00
Joe Duffy
98ef0c4bb5
Allow overriding a Pulumi.yaml's entrypoint (#582)
Because the Pulumi.yaml file currently demarcates the boundary used
when uploading a program to the Pulumi.com service, we have trouble
when a Pulumi program uses "up and over" references.
For instance, our customer wants to build a Dockerfile located
in some relative path, such as `../../elsewhere/`.

To support this, we will allow the Pulumi.yaml file to live
somewhere other than the main Pulumi entrypoint.  For example,
it can live at the root of the repo, while the Pulumi program
lives in, say, `infra/`:

    Pulumi.yaml:
    name: as-before
    main: infra/

This fixes pulumi/pulumi#575.  Further work can be done here to
provide even more flexibility; see pulumi/pulumi#574.
2017-11-16 07:49:07 -08:00
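A sketch of how the `main` setting can be resolved: the program directory defaults to the directory containing Pulumi.yaml, and `main`, when present, is joined relative to it (the names here are illustrative):

```go
package workspace

import "path/filepath"

type Project struct {
	Name string `yaml:"name"`
	Main string `yaml:"main,omitempty"`
}

// programDirectory returns the directory holding the program's entrypoint.
func programDirectory(pulumiYamlPath string, proj Project) string {
	dir := filepath.Dir(pulumiYamlPath)
	if proj.Main != "" {
		dir = filepath.Join(dir, proj.Main)
	}
	return dir
}
```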
pat@pulumi.com
1d9fa045cb Fix plan snapshotting.
When producing a snapshot for a plan, we have two resource DAGs. One of
these is the base DAG for the plan; the other is the current DAG for the
plan. Any resource r may be present in both DAGs. In order to produce a
snapshot, we need to merge these DAGs such that all resource
dependencies are correctly preserved. Conceptually, the merge proceeds
as follows:

- Begin with an empty merged DAG.
- For each resource r in the current DAG, insert r and its outgoing
  edges into the merged DAG.
- For each resource r in the base DAG:
    - If r is in the merged DAG, we are done: if the resource is in the
      merged DAG, it must have been in the current DAG, which accurately
      captures its current dependencies.
    - If r is not in the merged DAG, insert it and its outgoing edges
      into the merged DAG.

Physically, however, each DAG is represented as a list of resources
without explicit dependency edges. In place of edges, it is assumed that
the list represents a valid topological sort of its source DAG. Thus,
any resource r at index i in a list L must be assumed to be dependent on
all resources in L with index j s.t. j < i. Due to this representation,
we implement the algorithm above as follows to produce a merged list
that represents a valid topological sort of the merged DAG:

- Begin with an empty merged list.
- For each resource r in the current list, append r to the merged list.
  r must be in a correct location in the merged list, as its position
  relative to its assumed dependencies has not changed.
- For each resource r in the base list:
    - If r is in the merged list, we are done by the logic given in the
      original algorithm.
    - If r is not in the merged list, append r to the merged list. r
      must be in a correct location in the merged list:
        - If any of r's dependencies were in the current list, they must
          already be in the merged list and their relative order w.r.t.
          r has not changed.
        - If any of r's dependencies were not in the current list, they
          must already be in the merged list, as they would have been
          appended to the list before r.

Prior to these changes, we had been performing these operations in
reverse order: we would start by appending any resources in the old list
that were not in the new list, then append the whole of the new list.
This caused out-of-order resources when a program that produced pending
deletions failed to run to completion.

Fixes #572.
2017-11-15 16:21:42 -08:00
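The list-based merge described above transcribes almost directly into code; the sketch below assumes each resource has a unique URN and that each input list is a valid topological sort of its DAG (simplified stand-ins for the engine's types):

```go
package deploy

type URN string

type Resource struct {
	URN URN
	// ... remaining resource state elided ...
}

// mergeSnapshots appends every resource in the current list, then appends any
// resource from the base list that is not already present. Per the argument
// above, the result remains a valid topological sort of the merged DAG.
func mergeSnapshots(base, current []*Resource) []*Resource {
	merged := make([]*Resource, 0, len(base)+len(current))
	seen := make(map[URN]bool)

	for _, r := range current {
		merged = append(merged, r)
		seen[r.URN] = true
	}
	for _, r := range base {
		if !seen[r.URN] {
			merged = append(merged, r)
			seen[r.URN] = true
		}
	}
	return merged
}
```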
Chris Smith
84cd810112
Move program uploads to the CLI (#571)
In an effort to improve performance and overall reliability, this PR moves the responsibility of uploading the Pulumi program from the Pulumi Service to the CLI. (Part of fixing https://github.com/pulumi/pulumi-service/issues/313.)

Previously the CLI would send the (often dozens of MiB) program archive to the Service, which would then upload the data to S3. Now the CLI sends the data to S3 directly, avoiding the unnecessary copying of data around.

The Service-side API changes are in https://github.com/pulumi/pulumi-service/pull/323. I tested previews, updates, and destroys running the service and PPC on localhost.

The PR refactors how we handle the three kinds of program updates, and just unifies them into a single method. This makes the diff look crazy, but the code should be much simpler. I'm not sure what to do about supporting all the engine options for the Cloud-variants of Pulumi commands; I suspect that's something that should be handled at a later time.
2017-11-15 13:27:28 -08:00
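The new upload path amounts to the CLI PUTting the archive to object storage itself; a minimal sketch, assuming the service hands back a pre-signed S3 URL (the actual API shape lives in the service-side change linked above):

```go
package client

import (
	"bytes"
	"fmt"
	"net/http"
)

func uploadArchive(signedURL string, archive []byte) error {
	req, err := http.NewRequest(http.MethodPut, signedURL, bytes.NewReader(archive))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/zip")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("uploading archive: unexpected status %v", resp.Status)
	}
	return nil
}
```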
Matt Ellis
1a1c44b71e Update travis webhook 2017-11-15 11:42:26 -08:00
CyrusNajmabadi
36a692390d
Properly capture 'arguments' when creating our serialization closure. (#569)
* Simplify how we capture 'this' in our serialization logic.
* Properly capture 'arguments'

* add tests for 'arguments' capture.

* Properly serialize out 'arguments'
* Invert 'with' and function closure.
2017-11-15 11:31:17 -08:00
Luke Hoban
4730ab5a0b
Add integration test support for relative work dir (#567)
Adds the ability to use a working directory nested within an example folder.
2017-11-14 21:02:47 -08:00
Pat Gavlin
9c42789359
Merge pull request #562 from pulumi/RawDiagMessages
Stop formatting output that should be raw.
2017-11-14 13:20:57 -08:00
Matt Ellis
f5dc3a2b53 Add a very barebones pulumi logs command
This is the smallest possible thing that could work for both the local
development case and the case where we are targeting the Pulumi
Service.

Pulling down the pulumiframework and component packages here feels a
bit bad, but we plan to rework the model soon enough into a provider
model, which will remove the need for us to hold on to this code (and
will bring back the factoring where the CLI does not have baked-in
knowledge of the Pulumi Cloud Framework).

Fixes #527
2017-11-14 12:26:55 -08:00
Pat Gavlin
234f0816e5 Stop formatting output that should be raw.
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.

The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.

Fixes #551.
2017-11-14 11:26:41 -08:00
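A sketch of the distinction the `Raw` field introduces; types are simplified for illustration:

```go
package diag

import "fmt"

type Message struct {
	Text string
	Raw  bool // if true, Text is not a format string
}

func stringify(m Message, args ...interface{}) string {
	if m.Raw {
		// Raw plugin output (e.g. std{out,err}) is emitted verbatim.
		return m.Text
	}
	return fmt.Sprintf(m.Text, args...)
}
```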
Joe Duffy
571d3814f8
Format logs the same way Node.js console APIs do (#561)
This change formats log messages the same way that Node.js does
in its console.log/error APIs.  This ensures, for example, that
errors have their stack printed if present, and switches over to
just printing the error directly rather than manually toStringing it.
2017-11-14 09:55:54 -08:00
Luke Hoban
a350b1654f
Report test run data and timing to S3 (#560)
Add the ability to upload data and timing for test runs to S3. Uploaded data is designed to be queried via a service like AWS Athena and these queries can then be imported into BI tools (Excel, QuickSight, PowerBI, etc.)

Initially hook this up to the `minimal` test as a baseline.
2017-11-14 09:33:22 -08:00
Chris Smith
f74d418b99
Canonicalize where we use the Pulumi API (#556)
Canonicalize the value we get from `PULUMI_API`. Fixes #555.
2017-11-13 18:25:57 -08:00
joeduffy
d39be9e55b Add "kill" as a "destroy" SuggestFor
Per @ericr's suggestion:
59de933445 (commitcomment-25593386)
2017-11-13 17:27:45 -08:00
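For reference, cobra supports this directly via the `SuggestFor` field on a command; a minimal sketch (other suggestions elided):

```go
package cmd

import "github.com/spf13/cobra"

func newDestroyCmd() *cobra.Command {
	return &cobra.Command{
		Use:        "destroy",
		Short:      "Destroy an existing stack and its resources",
		SuggestFor: []string{"kill"}, // `pulumi kill` now suggests `destroy`
	}
}
```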
Joe Duffy
e429c7523e
Add a handful of aliases/suggestions (#557) 2017-11-13 14:59:11 -08:00
Matt Ellis
8e4b03cfac Point at our build result filtering webhook 2017-11-13 12:05:32 -08:00
Pat Gavlin
28579eba94
Rework asset identity and exposure of old assets. (#548)
Note: for the purposes of this discussion, archives will be treated as
assets, as their differences are not particularly meaningful.

Currently, the identity of an asset is derived from the hash and the
location of its contents (i.e. two assets are equal iff their contents
have the same hash and the same path/URI/inline value). This means that
changing the source of an asset will cause the engine to detect a
difference in the asset even if the source's contents are identical. At
best, this leads to inefficiencies such as unnecessary updates. This
commit changes asset identity so that it is derived solely from an
asset's hash. The source of an asset's contents is no longer part of
the asset's identity, and need only be provided if the contents
themselves may need to be available (e.g. if a hash does not yet exist
for the asset or if the asset's contents might be needed for an update).

This commit also changes the way old assets are exposed to providers.
Currently, an old asset is exposed as both its hash and its contents.
This allows providers to take a dependency on the contents of an old
asset being available, even though this is not an invariant of the
system. These changes remove the contents of old assets from their
serialized form when they are passed to providers, eliminating the
ability of a provider to take such a dependency. In combination with the
changes to asset identity, this allows a provider to detect changes to
an asset simply by comparing its old and new hashes.

This is half of the fix for [pulumi/pulumi-cloud#158]. The other half
involves changes in [pulumi/pulumi-terraform].
2017-11-12 11:45:13 -08:00
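A sketch of the identity change: equality is determined solely by the asset's hash, while the source (path, URI, or inline text) is carried only so contents can be produced when needed (simplified types for illustration):

```go
package resource

type Asset struct {
	Hash string // SHA-256 of the contents; the sole basis of identity
	Path string // optional source; not part of identity
	URI  string // optional source; not part of identity
	Text string // optional source; not part of identity
}

// Equals reports whether two assets are the same, ignoring where their
// contents came from.
func (a Asset) Equals(other Asset) bool {
	return a.Hash != "" && a.Hash == other.Hash
}
```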
Pat Gavlin
eec99c3ca9
Merge pull request #547 from pulumi/AssetBugs
Fix an archive-related bug.
2017-11-10 20:42:33 -08:00
pat@pulumi.com
8c7932c1b5 Fix an archive-related bug.
Properly skip the .pulumi directory when dealing with directory-backed archives.
2017-11-10 19:56:25 -08:00
Pat Gavlin
f2e658e1b7
Merge pull request #546 from pulumi/TarArchiveError
Log actual and expected sizes on ErrWriteTooLong.
2017-11-10 12:23:57 -08:00
Pat Gavlin
db2f802d34 Log actual and expected sizes on ErrWriteTooLong.
If a blob's reported size is incorrect, `archiveTar` may attempt to
write more bytes to an entry than it reported in that entry's header.
These changes provide a bit more context with the resulting error as
well as removing an unnecessary `LimitReader`.
2017-11-10 11:46:49 -08:00
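A sketch of the added context, assuming the standard library's `archive/tar` writer (the surrounding archiving code is elided):

```go
package archive

import (
	"archive/tar"
	"fmt"
	"io"
)

func writeEntry(w *tar.Writer, hdr *tar.Header, contents io.Reader) error {
	if err := w.WriteHeader(hdr); err != nil {
		return err
	}
	n, err := io.Copy(w, contents)
	if err == tar.ErrWriteTooLong {
		// Report how many bytes the header promised versus how many we copied.
		return fmt.Errorf("writing %v: header size %v, bytes copied %v: %v",
			hdr.Name, hdr.Size, n, err)
	}
	return err
}
```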
Luke Hoban
af5298f4aa
Initial work on tracing support (#521)
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.

We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.

The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.

We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.

In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.

We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.

We currently support sending tracing data to a Zipkin-compatible endpoint.  This has been validated with both Zipkin and Jaeger UIs.

We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface.  There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
2017-11-08 17:08:51 -08:00
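A sketch of the span structure using the generic OpenTracing Go API; the concrete Zipkin-compatible tracer and the gRPC interceptors are elided, and the names here are illustrative:

```go
package engine

import (
	opentracing "github.com/opentracing/opentracing-go"
)

func plan(tracer opentracing.Tracer) error {
	opentracing.SetGlobalTracer(tracer)

	// One root span covers the whole preview/update/destroy operation.
	root := opentracing.StartSpan("engine.plan")
	defer root.Finish()

	// Sub-spans are created at gRPC boundaries; a child span looks like:
	child := opentracing.StartSpan("langhost.Run", opentracing.ChildOf(root.Context()))
	defer child.Finish()

	// ... the actual plan work happens here ...
	return nil
}
```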
pat@pulumi.com
0b6b211e69 Do not delete temp directories during tests.
This interacts badly with the deferred destroy.
2017-11-08 16:51:55 -08:00
joeduffy
4d26cf4f2c Fix a function comment 2017-11-08 16:20:27 -08:00
Pat Gavlin
d01465cf6d
Make archive assets stream their contents. (#542)
We currently have a nasty issue with archive assets wherein they read
their entire contents into memory each time they are accessed (e.g. for
hashing or translation). This interacts badly with scenarios that
place large amounts of data in an archive: aside from limiting the size
of an archive the engine can handle, it also bloats the engine's memory
requirements. This appears to have caused issues when running the PPC in
AWS: evidence suggests that the very high peak memory requirements this
approach implies caused high swap traffic that impacted the service's
availability.

In order to fix this issue, these changes move archives onto a
streaming read model. In order to read an archive, a user:
- Opens the archive with `Archive.Open`. This returns an ArchiveReader.
- Iterates over its contents using `ArchiveReader.Next`. Each returned
  blob must be read in full between successive calls to
  `ArchiveReader.Next`. This requirement is essentially forced upon us
  by the streaming nature of TAR archives.
- Closes the ArchiveReader with `ArchiveReader.Close`.

This model does not require that the complete contents of the archive or
any of its constituent files are in memory at any given time.

Fixes #325.
2017-11-08 15:28:41 -08:00
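A sketch of consuming an archive under this streaming model; the interface shape paraphrases the description above, and only one blob is held open at a time:

```go
package archive

import "io"

type ArchiveReader interface {
	// Next returns the name and contents of the next file, or io.EOF once the
	// archive is exhausted. The returned reader must be fully consumed before
	// the next call to Next.
	Next() (name string, contents io.Reader, err error)
	Close() error
}

// consumeArchive streams every blob through the given function without ever
// materializing the whole archive in memory.
func consumeArchive(r ArchiveReader, consume func(name string, contents io.Reader) error) error {
	defer r.Close()
	for {
		name, contents, err := r.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if err := consume(name, contents); err != nil {
			return err
		}
	}
}
```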
Joe Duffy
73d627edca
Fix an erroneous error condition
This code was swallowing an error incorrectly, rather than returning
it.  As a result, certain commands would fail with assertions rather
than the intended error message (like trying to config without a stack).
2017-11-08 14:37:01 -08:00
Matt Ellis
33a032b5c2 Trim whitespace around user input before comparing
Doing it this way will make things work on both *nix and Windows

Fixes #534
2017-11-07 14:24:49 -08:00
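The fix is essentially a one-liner; a sketch:

```go
package cmd

import "strings"

// confirmed compares user input to the expected confirmation string after
// trimming surrounding whitespace (including the trailing "\r" left by
// Windows consoles).
func confirmed(input, expected string) bool {
	return strings.TrimSpace(input) == expected
}
```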
Matt Ellis
f0d2dfb1c9 Use Getenv instead of LookupEnv 2017-11-06 15:18:35 -08:00
Chris Smith
5085c7eef8
Rework polling (#531)
Rework the polling code for `pulumi` when targeting hosted scenarios. (i.e. absorb https://github.com/pulumi/pulumi-service/pull/282.) We now return an `updateID` for update, preview, and destroy, and bake that into the URL.

This PR also silently ignores 504 errors which are now returned from the Pulumi Service if the PPC request times out.

Combined, this should ameliorate some of the symptoms we see from https://github.com/pulumi/pulumi-ppc/issues/60. Though we'll continue to try to identify and fix the root cause.
2017-11-06 14:12:20 -08:00
Matt Ellis
fbb3887343
Merge pull request #472 from pulumi/clean-temp-on-test-pass
Integration testing improvements
2017-11-06 12:21:33 -08:00
Matt Ellis
098b2b8401 Remove integration test working folder if tests pass 2017-11-06 11:50:21 -08:00