Commit graph

725 commits

Author SHA1 Message Date
Pat Gavlin cfe4e127be
Add API types for the V3 checkpoint (#2384)
Resources gain two new fields: `PropertyDependencies` and
`PendingReplacement`. The former maps an input property's name to the
dependencies that may affect the value of that property. The latter is
used to track resources that have been deleted as part of a
delete-before-replace operation but have not yet been recreated.

In addition to the new fields, resource properties may now contain
encrypted first-class secret values. These values are of type `SecretV1`,
where the `Sig` field is set to `resource.SecretSig`.

Finally, the deployment type gains a new field, `SecretsProviders`,
which contains any configuration necessary to handle secrets that may be
present in resource properties.
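
To make the new shapes concrete, here is a rough Go sketch. Field names follow the description above; the JSON tags, exact types, and the `SecretsProviders` shape are illustrative assumptions rather than the authoritative `apitype` definitions.

```go
// A rough sketch of the V3 checkpoint shapes described above.
package apitype

// SecretV1 is an encrypted first-class secret value inside resource
// properties; Sig is expected to hold resource.SecretSig so the value can be
// recognized as a secret on deserialization.
type SecretV1 struct {
	Sig        string `json:"sig"`
	Ciphertext string `json:"ciphertext,omitempty"`
}

// ResourceV3 gains the two new fields (all other fields elided).
type ResourceV3 struct {
	// PropertyDependencies maps an input property's name to the URNs of the
	// resources that may affect that property's value.
	PropertyDependencies map[string][]string `json:"propertyDependencies,omitempty"`
	// PendingReplacement tracks a resource that was deleted as part of a
	// delete-before-replace operation but has not yet been recreated.
	PendingReplacement bool `json:"pendingReplacement,omitempty"`
}

// DeploymentV3 gains configuration for handling any secrets present in
// resource properties (other fields elided; the value shape is illustrative).
type DeploymentV3 struct {
	SecretsProviders map[string]interface{} `json:"secrets_providers,omitempty"`
	Resources        []ResourceV3           `json:"resources,omitempty"`
}
```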
2019-01-23 13:33:25 -08:00
Matt Ellis a02bfb6469 Include symlink'd regular files in directory archives
When constructing an Archive based off a directory path, we would
ignore any symlinks that we saw while walking the file system
collecting files to include in the archive.

A user reported an issue where they were unable to use the
[sharp](https://www.npmjs.com/package/sharp) library from NPM with a
lambda deployed via Pulumi. The problem was that the library includes
native components, and these native components include a bunch of
`*.so` files. As is common, there's a regular file with a name like
`foo.so.1.2.3` and then symlinks to that file with the names
`foo.so.1.2`, `foo.so.1` and `foo.so`. Consumers of these SOs will
try to load the shorter names, i.e. `foo.so` and expect the symlink to
resolve to the actual library.

When these links are not present, upstack code fails to load.

This change modifies our logic so that if we have a symlink and it
points to a regular file, we include it in the archive. At this time,
we don't try to add it to the archive as a symlink; instead, we just
add it as another copy of the regular file. We could explore
including these things as symlinks in archive formats that allow
them. (While zip does support this, I'm less sure it would be a great
idea: given the set of tricks we already play to ensure our zip files
are usable across many cloud vendors' serverless offerings, throwing
symlinks into the mix may end up causing even more downstream
weirdness.)

We continue to ignore symlinks that point at directories. In
practice, these seem fairly uncommon, and ignoring them means we don't
have to worry about chasing our tail in cases where there are circular
references.
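
A minimal sketch of the rule (not the actual archive code): while walking a directory, follow symlinks that resolve to regular files and include them as ordinary file copies, and skip symlinks that resolve to directories.

```go
// Collect the files to archive, treating symlinks as described above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func collectFiles(root string) ([]string, error) {
	var files []string
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.Mode()&os.ModeSymlink != 0 {
			// os.Stat follows the link, so it tells us what the link
			// ultimately points at.
			target, serr := os.Stat(path)
			if serr != nil || !target.Mode().IsRegular() {
				// Broken link or a link to a directory: ignore it so we
				// never chase circular references.
				return nil
			}
			// A link to a regular file: include it as another copy.
			files = append(files, path)
			return nil
		}
		if info.Mode().IsRegular() {
			files = append(files, path)
		}
		return nil
	})
	return files, err
}

func main() {
	files, err := collectFiles(".")
	if err != nil {
		panic(err)
	}
	fmt.Println(files)
}
```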

While this change is in pulumi/pulumi, the downstream resource
providers will need to update their vendored dependencies in order to
pick this up and have it work end to end.

Fixes #2077
2019-01-18 10:35:36 -08:00
Louis DeJardin f35d4cd017 Small typo in comment
`read` spelled `reead`
2019-01-15 15:11:49 -08:00
Pat Gavlin 24f89e1121
Close plugin context on plan creation failure (#2304)
This ensures that the gRPC server is properly shut down. This fixes an
issue in which a resource plugin that is still configuring could report
log messages to the plugin host, which would in turn attempt to send
diagnostic packets over a closed channel, causing a panic.

Fixes #2170.
2018-12-18 13:25:52 -08:00
CyrusNajmabadi ca8169e344
Use 'output<...>' as our terminology for 'computed' properties. (#2267) 2018-12-02 19:44:50 -08:00
Pat Gavlin ab36b1116f
Handle unconfigured plugins in Diff. (#2238)
After #2088, we began calling `Diff` on providers that are not configured
due to unknown configuration values. This hit an assertion intended to
detect exactly this scenario, which was previously unexpected.

These changes adjust `Diff` to indicate that a Diff is unavailable and
return an error message that describes why. The step generator then
interprets the diff as indicating a normal update and issues the error
message to the diagnostic stream.

Fixes #2223.
2018-11-21 16:53:29 -08:00
Matt Ellis 72d52c6e1f Don't fail on configuration keys like a:config:b:c
Configuration keys are simple namespace/name pairs, delimited by
":". For compatability, we also allow
"<namespace>:config:<name>", but we always record the "nice" name in
`Pulumi.<stack-name>.yaml`.

While `pulumi config` and friends would block setting a key like
`a:b:c` (where the "name" has a colon in it), it would allow
`a:config:b:c`. However, this would be recorded as `a:b:c` in
`Pulumi.<stack-name>.yaml`, which meant we'd error when parsing the
configuration file later.

To work around this, disallow ":" in the "name" part of a
configuration key.  With this change the following all work:

```
keyName
my-project:keyName
my-project:config:keyName
```

However, both

`my-project:keyName:subKey`
`my-project:config:keyName:subKey`

are now disallowed.

I considered allowing colons in subkeys, but I think it adds more
confusion (due to the interaction with how we allow you to elide the
project name in the default case) than is worthwhile at this point.
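
A hedged sketch of the parsing rule; the function and its exact behavior are illustrative, not the real key-parsing code:

```go
// Accept "name", "ns:name", and "ns:config:name"; reject any key whose name
// part still contains a colon.
package main

import (
	"fmt"
	"strings"
)

func parseConfigKey(key string) (namespace, name string, err error) {
	parts := strings.Split(key, ":")
	switch len(parts) {
	case 1:
		// "keyName" -- the namespace defaults to the current project elsewhere.
		return "", parts[0], nil
	case 2:
		// "my-project:keyName"
		return parts[0], parts[1], nil
	case 3:
		// "my-project:config:keyName" -- the legacy spelling.
		if parts[1] == "config" {
			return parts[0], parts[2], nil
		}
	}
	// "my-project:keyName:subKey" and "my-project:config:keyName:subKey"
	// both land here and are rejected.
	return "", "", fmt.Errorf("configuration key %q must not contain ':' in its name", key)
}

func main() {
	keys := []string{"keyName", "my-project:keyName", "my-project:config:keyName", "my-project:keyName:subKey"}
	for _, k := range keys {
		ns, name, err := parseConfigKey(k)
		fmt.Println(k, "=>", ns, name, err)
	}
}
```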

Fixes #2171
2018-11-20 14:14:37 -08:00
Pat Gavlin 676adf62b8
Use an explicit address when dialing plugins (#2224)
This is necessary in order for gRPC's proxy support to properly respect
NO_PROXY.
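
A simplified sketch of the idea: dial the plugin on an explicit loopback address rather than a bare `:port`, so gRPC's proxy logic has a host it can match against NO_PROXY. The port and dial options below are illustrative.

```go
// Dial a plugin on an explicit loopback address.
package main

import (
	"log"

	"google.golang.org/grpc"
)

func dialPlugin(port string) (*grpc.ClientConn, error) {
	// "127.0.0.1:" + port instead of just ":" + port, so the proxy
	// environment logic can match the host against NO_PROXY.
	return grpc.Dial("127.0.0.1:"+port, grpc.WithInsecure())
}

func main() {
	conn, err := dialPlugin("10001")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```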

Fixes #2134.
2018-11-19 13:47:39 -08:00
Pat Gavlin bc08574136
Add an API for importing stack outputs (#2180)
These changes add a new resource to the Pulumi SDK,
`pulumi.StackReference`, that represents a reference to another stack.
This resource has an output property, `outputs`, that contains the
complete set of outputs for the referenced stack. The Pulumi account
performing the deployment that creates a `StackReference`  must have
access to the referenced stack or the call will fail.

This resource is implemented by a builtin provider managed by the engine.
This provider will be used for any custom resources and invoke calls inside
the `pulumi:pulumi` module. Currently this provider supports only the
`pulumi:pulumi:StackReference` resource.

Fixes #109.
2018-11-14 13:33:35 -08:00
Matt Ellis 992b048dbf Adopt golangci-lint and address issues
We run the same suite of checks that we did with gometalinter. This
ended up catching a few new issues, some of which were addressed and
some of which were baselined.
2018-11-08 14:11:47 -08:00
Joe Duffy 9aedb234af
Tidy up some data structures (#2135)
In preparation for some workspace restructuring, I decided to scratch a
few itches of my own in the code:

* Change project's RuntimeInfo field to just Runtime, to match the
  serialized name in JSON/YAML.

* Eliminate the no-longer-used Context and NoDefaultIgnores fields on
  project, and all of the associated legacy PPC-related code.

* Eliminate the no-longer-used IgnoreFile constant.

* Remove a bunch of "// nolint: lll" annotations, and simply format
  the structures with comments on dedicated lines, to avoid overly
  lengthy lines and lint suppressions.

* Mark Dependencies and InitErrors as `omitempty` in the JSON
  serialization directives for CheckpointV2 files. This was done for
  the YAML directives, but (presumably accidentally) omitted for JSON.
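
A small illustration of that last bullet, assuming the fields live on the V2 resource shape; surrounding fields are elided and names are illustrative:

```go
// The same tags in both JSON and YAML, with omitempty so empty values are
// dropped from serialized checkpoints.
package apitype

type ResourceV2 struct {
	Dependencies []string `json:"dependencies,omitempty" yaml:"dependencies,omitempty"`
	InitErrors   []string `json:"initErrors,omitempty" yaml:"initErrors,omitempty"`
}
```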
2018-11-01 08:28:11 -07:00
Pat Gavlin b748935753
Do not assert on duplicate resources. (#2127)
Just what it says on the tin.
2018-10-31 10:33:00 -07:00
James Nugent 59a8a7fbfe Add Go 1.10+ versions of archive hashes in tests
Go 1.10 made some breaking changes to the headers in archive/tar [1] and
archive/zip [2], breaking the expected values in tests. In order to keep
tests passing with both, wherever a hardcoded hash is expected we switch
on `runtime.Version()` to select whether we want the Go 1.9 (currently
supported Go version) or later version of the hash.

Eventually these switches should be removed in favour of using the later
version only, so they are liberally commented to explain the reasoning.

[1]: https://golang.org/doc/go1.10#archive/tar
[2]: https://golang.org/doc/go1.10#archive/zip
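
The pattern looks roughly like this (the hashes below are placeholders, not values from the test suite):

```go
// Pick the expected archive hash based on the running Go version.
package main

import (
	"fmt"
	"runtime"
	"strings"
)

func expectedArchiveHash() string {
	// Go 1.9 is the currently supported version; 1.10+ changed the
	// archive/tar and archive/zip headers, and with them the hashes.
	if strings.HasPrefix(runtime.Version(), "go1.9") {
		return "hash-for-go-1.9" // placeholder
	}
	return "hash-for-go-1.10-and-later" // placeholder
}

func main() {
	fmt.Println(expectedArchiveHash())
}
```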
2018-10-30 21:59:36 -05:00
Sean Gillespie ca540cc736 Use math.MaxInt32 to signal unbounded parallelism
Downlevel versions of the Pulumi Node SDK assumed that a parallelism
level of zero implied serial execution, whereas current CLIs use zero to
signal unbounded parallelism. This commit works around the downlevel issue by
using math.MaxInt32 to signal unbounded parallelism.
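
A tiny sketch of the workaround:

```go
// When the user asks for unbounded parallelism, send math.MaxInt32 over the
// wire instead of 0, since older Node SDKs treat 0 as "serial".
package main

import (
	"fmt"
	"math"
)

func parallelismForRPC(requested int) int32 {
	if requested <= 0 {
		// Unbounded: use a value no downlevel SDK will misinterpret.
		return math.MaxInt32
	}
	return int32(requested)
}

func main() {
	fmt.Println(parallelismForRPC(0), parallelismForRPC(8))
}
```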
2018-10-29 12:27:03 -07:00
Pat Gavlin 72a6ed320c
Do not propose replacement of providers in preview. (#2088)
Providers with unknown properties are currently considered to require
replacement. This was intended to indicate that we could not be sure
whether or not replacement was required. Unfortunately, this was not a
good user experience, as replacement would never be required at runtime.
This caused quite a bit of confusion--never proposing replacement seems
to be the better option.
2018-10-23 10:23:28 -07:00
Pat Gavlin f465fc0a48
Reorder an error check in the provider registry. (#2078)
The provider registry was checking for a `nil` provider instance before
checking for a non-nil error. This caused the CLI to fail to report
important errors during the plugin load process (e.g. invalid checkpoint
errors) and instead report a failure to find a matching plugin.
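
A minimal sketch of the corrected ordering; the names here are illustrative, not the registry's real API:

```go
// Check the loader's error before checking for a nil provider, so load
// failures (e.g. invalid checkpoints) are reported instead of a generic
// "no matching plugin" error.
package main

import (
	"errors"
	"fmt"
)

func loadProvider() (interface{}, error) {
	return nil, errors.New("invalid checkpoint: malformed provider state")
}

func getProvider() (interface{}, error) {
	prov, err := loadProvider()
	if err != nil {
		// Report the real failure first...
		return nil, err
	}
	if prov == nil {
		// ...and only then fall back to "not found".
		return nil, errors.New("no matching plugin found")
	}
	return prov, nil
}

func main() {
	if _, err := getProvider(); err != nil {
		fmt.Println("error:", err)
	}
}
```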
2018-10-19 17:22:50 -07:00
Sean Gillespie 3e9b210edd
Default to unbounded parallelism (#2065)
Some providers (namely Kubernetes) require unbounded parallelism in
order to function correctly. This commit enables the engine to operate
in a mode with unbounded parallelism and switches to that mode by
default.
2018-10-17 15:33:26 -07:00
Sean Gillespie 730a929c2b
Add new 'pulumi state' command for editing state (#2024)
* Add new 'pulumi state' command for editing state

This commit adds 'pulumi state unprotect' and 'pulumi state delete', two
commands that can be used to unprotect and delete resources from a
stack's state, respectively.

* Simplify LocateResource

* CR: Print yellow 'warning' before editing state

* Lots of CR feedback

* CR: Only delete protected resources when asked with --force
2018-10-15 09:52:55 -07:00
Sean Gillespie ae87c469c5
Improve error message for missing provider plugins (#2040)
If a version is available, the returned error now includes the version
that was searched for and a command to install the missing plugin.
2018-10-10 15:18:41 -07:00
Pat Gavlin 74df0e67db
Allow previews when operations are pending. (#1999)
The preview will proceed as if the operations had not been issued (i.e.
we will not speculate on a new state for the stack). This is consistent
with our behavior prior to the changes that added pending operations to
the checkpoint.
2018-10-01 09:48:48 -07:00
Sean Gillespie ed0353e251
Process deletions conservatively in parallel (#1963)
* Process deletions conservatively in parallel

This commit allows the engine to conservatively delete resources in
parallel when it is sure that it is legal to do so. In the absence of a
true data-flow oriented step scheduler, this approach provides a
significant improvement over the existing serial deletion mechanism.

Instead of processing deletes serially, this commit will partition the
set of condemned resources into sets of resources that are known to be
legally deletable in parallel. The step executor will then execute those
independent lists of steps one-by-one until all steps are complete.
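
A hedged sketch of the partitioning idea (the real step generator is more involved and consults the snapshot's dependency graph): split the condemned resources into waves such that everything that must be deleted before a resource is already gone by the time its wave runs; waves execute one-by-one, and the deletes within a wave run in parallel.

```go
package main

import "fmt"

// mustDeleteFirst maps a resource URN to the URNs that have to be deleted
// before it (i.e. its dependents).
func partitionDeletes(condemned []string, mustDeleteFirst map[string][]string) [][]string {
	deleted := map[string]bool{}
	remaining := append([]string(nil), condemned...)
	var waves [][]string
	for len(remaining) > 0 {
		var wave, next []string
		for _, urn := range remaining {
			ready := true
			for _, dep := range mustDeleteFirst[urn] {
				if !deleted[dep] {
					ready = false
					break
				}
			}
			if ready {
				wave = append(wave, urn)
			} else {
				next = append(next, urn)
			}
		}
		if len(wave) == 0 {
			break // cycle or missing data; bail rather than loop forever
		}
		for _, urn := range wave {
			deleted[urn] = true
		}
		waves = append(waves, wave)
		remaining = next
	}
	return waves
}

func main() {
	// "app" depends on "db", so "app" must be deleted before "db".
	fmt.Println(partitionDeletes(
		[]string{"app", "db"},
		map[string][]string{"db": {"app"}},
	)) // [[app] [db]]
}
```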

* CR: Make ResourceSet a normal map

* Only use the dependency graph if we can trust it

* Reverse polarity of pendingDeletesAreReplaces

* CR: un-export a few types

* CR: simplify control flow in step generator when scheduling

* CR: parents are dependencies, fix loop index

* CR: Remove ParentOf, add new test for parent dependencies
2018-09-27 15:49:08 -07:00
joeduffy 3468393490 Make a smattering of CLI UX improvements
Since I was digging around over the weekend after the change to move
away from light black, and the impact it had on less important
information showing more prominently than it used to, I took a step
back and did a deeper tidying up of things. Another side goal of this
exercise was to be a little more respectful of terminal width; when
we could say things with fewer words, I did so.

* Stylize the preview/update summary differently, so that it stands
  out as a section. Also highlight the total changes with bold -- it
  turns out this has a similar effect to the bright white colorization,
  just without the negative effects on e.g. white terminals.

* Eliminate some verbosity in the phrasing of change summaries.

* Make all heading sections stylized consistently. This includes
  the color (bright magenta) and the vertical spacing (always a newline
  separating headings). We were previously inconsistent on this (e.g.,
  outputs were under "---outputs---"). Now the headings are:
  Previewing (etc), Diagnostics, Outputs, Resources, Duration, and Permalink.

* Fix an issue where we'd parent things to "global" until the stack
  object later showed up. Now we'll simply mock up a stack resource.

* Don't show messages like "no change" or "unchanged". Prior to the
  light black removal, these faded into the background of the terminal.
  Now they just clutter up the display. Similar to the elision of "*"
  for OpSames in a prior commit, just leave these out. Now anything
  that's written is actually a meaningful status for the user to note.

* Don't show the "3 info messages," etc. summaries in the Info column
  while an update is ongoing. Instead, just show the latest line. This
  is more respectful of width -- before this change, I often found that
  important messages scrolled off the right of my screen.

    For discussion:

        - I actually wonder if we should eliminate the summary
          altogether and always just show the latest line. Or even
          blank it out. The summary feels better suited for the
          Diagnostics section, and the Status concisely tells us
          how a resource's update ended up (failed, succeeded, etc).

        - Similarly, I question the idea of showing only the "worst"
          message. I'd vote for always showing the latest, and again
          leaving it to the Status column for concisely telling the
          user about the final state a resource ended up in.

* Stop prepending "info: " to every stdout/stderr message. It adds
  no value, clutters up the display, and worsens horizontal usage.

* Lessen the verbosity of update headline messages, so instead of
  e.g. "Previewing update of stack 'x':", we now just say
  "Previewing update (x):".

* Eliminate vertical whitespace in the Diagnostics section. Every
  independent console.out previously was separated by an entire newline,
  which made the section look cluttered to my eyes. These are just
  streams of logs, there's no reason for the extra newlines.

* Colorize the resource headers in the Diagnostic section light blue.

Note that this will change various test baselines, which I will
update next. I didn't want those in the same commit.
2018-09-24 08:43:46 -07:00
joeduffy be9ead855d Eliminate the same prefix
Recently, we eliminated bright black text, which IMHO makes the
"same" lines really stand out more than we want them to. This is
partly just due to the heavyweight nature of the "*" character,
which we precede every line with. This has the effect of making it
tougher to scan the update to see what's going to happen. The goal
of SpecUnimportant (bright black) was that we wanted to draw less
attention to certain elements of the CLI text -- and have them fade
into the background (apparently it was too successful at this ;-))

So, this change eliminates the "*" prefix for same operations
altogether. It reads better to my eyes and keeps the original intent.
2018-09-22 13:34:43 -07:00
Pat Gavlin 8f55a379f6
Return errors for unknown/invalid path archives. (#1970)
- Attempting to read an archive with an unknown type now returns an
  error
- Attempting to read a path archive that is neither a known archive
  format nor a directory now returns an error

Fixes #1529.
Fixes #1953.
2018-09-21 11:53:39 -07:00
Sean Gillespie 2d4a3f7a6a
Move management of root resource state to engine (#1944)
* Protobuf changes

* Move management of root resource state to engine

This commit fixes a persistent side-by-side issue in the NodeJS SDK by
moving the management of root resource state to the engine. Doing so
adds two new endpoints to the Engine gRPC service: 1) GetRootResource
and 2) SetRootResource, which get and set the root resource
respectively.

* Rebase against master, regenerate proto
2018-09-18 11:47:34 -07:00
Sean Gillespie a35aba137b
Retire pending deletions at start of plan (#1886)
* Retire pending deletions at start of plan

Instead of letting pending deletions pile up to be retired at the end of
a plan, this commit eagerly disposes of any pending deletions that were
pending at the end of the previous plan. This is a nice usability win
and also reclaims an invariant that at most one resource with a given
URN is live and at most one is pending deletion at any point in time.

* Rebase against master

* Fix a test issue arising from shared snapshots

* CR feedback

* plan -> replacement

* Use ephemeral statuses to communicate deletions
2018-09-10 16:48:14 -07:00
Pat Gavlin 4a550e308f
Fix provider cancellation. (#1914)
We signal provider cancellation by hanging a goroutine off of the plan
executor's parent context. To ensure clean shutdown, this goroutine also
listens on a channel that closes once the plan has finished executing.
Unfortunately, we were closing this channel too early, and the close was
racing with the cancellation signal. These changes ensure that the
channel closes after the plan has fully completed.
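
A simplified sketch of the ordering fix:

```go
// The cancellation goroutine listens on both the caller's context and a
// "plan done" channel; the channel is only closed after plan execution has
// fully completed, so the close cannot race with a cancellation signal.
package main

import (
	"context"
	"fmt"
	"time"
)

func executePlan(ctx context.Context, cancelProviders func()) {
	done := make(chan struct{})
	go func() {
		select {
		case <-ctx.Done():
			cancelProviders()
		case <-done:
			// Plan finished; shut down cleanly without signalling cancellation.
		}
	}()

	// ... run the plan to completion here ...
	time.Sleep(10 * time.Millisecond)

	// Close the channel only after the plan has fully completed.
	close(done)
}

func main() {
	executePlan(context.Background(), func() { fmt.Println("cancelling providers") })
}
```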

Fixes #1906.
Fixes pulumi/pulumi-kubernetes#185.
2018-09-10 15:18:25 -07:00
Sean Gillespie 679f55c355
Validate type tokens before using them (#1904)
* Validate type tokens before using them

When registering or reading a resource, we took the type token given to
us by the language host and assumed that it was valid, which resulted in
assertion failures in various places in the engine. This commit
validates the format of type tokens given to us from the language host
and issues an appropriate error if it's not valid.

Along the way, this commit also improves the way that fatal exceptions
are rendered in the Node language host.
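
A rough sketch of the kind of validation involved, assuming the usual `package:module:typename` token shape; the check below is illustrative only, not the engine's real code.

```go
// Reject any type token that doesn't split into three non-empty parts.
package main

import (
	"fmt"
	"strings"
)

func validateTypeToken(tok string) error {
	parts := strings.Split(tok, ":")
	if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" {
		return fmt.Errorf("invalid type token %q; expected the form 'package:module:typename'", tok)
	}
	return nil
}

func main() {
	fmt.Println(validateTypeToken("aws:s3/bucket:Bucket")) // <nil>
	fmt.Println(validateTypeToken("not-a-token"))          // error
}
```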

* Pre-allocate an exception for ReadResource

* Fix integration test

* CR Feedback

This commit is a lower-impact change that fixes the bugs associated with
invalid types on component resources and only checks that a type is
valid on custom resources.

* CR Take 2: Fix up IsProviderType instead of fixing call sites

* Please gometalinter
2018-09-07 15:19:18 -07:00
Sean Gillespie ca58b8117f
Clarify control flow in step generator (#1843)
* Introduce Result type to engine

The Result type can be used to signal the failure of a computation due
to both internal and non-internal reasons. If a computation failed due
to an internal error, the Result type carries that error with it and
provides it when the 'Error' method on a Result is called. If a
computation failed gracefully but wished to bail instead of continuing a
doomed plan, the 'Error' method returns nil.
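
A hedged sketch of a Result with these semantics; names are illustrative, not the engine's actual API:

```go
// Error returns the internal error when there was one, and nil when the
// computation bailed gracefully.
package main

import "fmt"

type result struct {
	err error // nil means "bail": diagnostics were already reported elsewhere
}

// Bail signals a graceful failure that should stop the plan without an error.
func Bail() *result { return &result{} }

// Errorf signals an internal error that the caller still needs to surface.
func Errorf(format string, args ...interface{}) *result {
	return &result{err: fmt.Errorf(format, args...)}
}

// Error returns the internal error, or nil for a graceful bail.
func (r *result) Error() error { return r.err }

func main() {
	fmt.Println(Bail().Error())         // <nil>
	fmt.Println(Errorf("boom").Error()) // boom
}
```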

* CR feedback
2018-09-05 15:08:09 -07:00
Pat Gavlin df1a5e653d
Fail refreshes with init errors. (#1882)
And ensure that refreshes continue on errors.

Fixes #1881.
2018-09-05 14:00:28 -07:00
Alex Clemmer dea68b8b37 Implement status sinks
This commit reverts most of #1853 and replaces it with functionally
identical logic, using the notion of status message-specific sinks.

In other words, where the original commit implemented ephemeral status
messages by adding an `isStatus` parameter to most of the logging
methods in pulumi/pulumi, this implements ephemeral status messages as a
parallel logging sink, which emits _only_ ephemeral status messages.

The original commit message in that PR was:

> Allow log events to be marked "status" events
>
> This commit will introduce a field, IsStatus to LogRequest. A "status"
> logging event will be displayed in the Info column of the main
> display, but will not be printed out at the end, when resource
> operations complete.
>
> For example, for complex resource initialization, we'd like to display
> a series of intermediate results: [1/4] Service object created, for
> example. We'd like these to appear in the Info column, but not at the
> end, where they are not helpful to the user.
2018-08-31 15:56:53 -07:00
Alex Clemmer 9e58fd1aaa Revert "Plumb LogRequest.IsStatus through the logging subsystem"
This reverts commit 3066cbcbd7.
2018-08-31 15:56:53 -07:00
Alex Clemmer 3066cbcbd7 Plumb LogRequest.IsStatus through the logging subsystem 2018-08-30 17:17:20 -07:00
Pat Gavlin 0634c1852a
Fix some potential bugs in the refresher. (#1845)
- Create all refresh steps before issuing any. This is important as the
  state update loop expects all steps to exist.
- Check for cancellation later in the refresher.

This also fixes races in the SnapshotManager and the test journal that
could cause panics during cancellation.
2018-08-29 21:00:05 -07:00
Alex Clemmer 2fa98a8dad Generate empty update steps for partial failures
This commit will greatly improve the experience of dealing with partial
failures by simply re-trying to initialize the relevant resources on
every subsequent `pulumi up`, instead of printing a list of reasons the
resource had previously failed to initialize.

As motivation, consider our behavior in the following common, painful
scenario:

  * The user creates a `Service` and a `Deployment`.
  * The `Pod`s in the `Deployment` fail to become live. This causes the
    `Service` to fail, since it does not target any live `Pod`s.
  * The user fixes the `Deployment`. A run of `pulumi up` sees the
    `Pod`s successfully initialize.
  * Users will expect that the `Service` is now in a state of success,
    as the `Pod`s it targets are alive. But, because we don't update the
    `Service` by default, it perpetually exists in a state of error.
  * The user is now required to change some trivial feature of the
    `Service` just to trigger an update, so that we can see it succeed.

There are many situations like this. Another very common one is waiting
for test `Pod`s that are meant to successfully complete when some object
becomes live.

By triggering an empty update step for all resources that have any
initialization errors, we avoid all problems like this.

This commit will implement this empty-update semantics for partial
failures, as well as fix the display UX to correctly render the diff in
these cases.
2018-08-28 18:00:35 -07:00
Pat Gavlin 73f4f2c464
Reimplement refresh. (#1814)
Replace the Source-based implementation of refresh with a phase that
runs as the first part of plan execution and rewrites the snapshot in-memory.

In order to fit neatly within the existing framework for resource operations,
these changes introduce a new kind of step, RefreshStep, to represent
refreshes. RefreshSteps operate similarly to ReadSteps but do not imply that
the resource being read is not managed by Pulumi.

In addition to the refresh reimplementation, these changes incorporate those
from #1394 to run refresh in the integration test framework.

Fixes #1598.
Fixes pulumi/pulumi-terraform#165.
Contributes to #1449.
2018-08-22 17:52:46 -07:00
Pat Gavlin 91e20289a8
Remove deploy.PlanSummary. (#1809)
Nothing actually uses this information.
2018-08-21 19:33:59 -07:00
Pat Gavlin 734f7beba7
Refactor plan execution in preparation for refresh. (#1801)
These changes simplify a couple aspects of plan execution in the hopes of
clarifying some responsibilities and preparing the code for changes to the
implementation of refresh.

1. All aspects of plan execution are now managed by the plan executor,
   which is no longer exported. Instead, it is abstracted behind
   `Plan.Execute`.
2. The plan executor's error-handling and reporting have been unified
   and simplified somewhat.
2018-08-21 14:05:00 -07:00
Sean Gillespie d1524e1081
Log errors coming from the language host (#1780)
* Log errors coming from the language host

Similar to pulumi/pulumi#1762, fixes pulumi/pulumi#1775. The language
host can fail without issuing any diagnostics and it is very unclear
what happens if the engine does not log the error.

* CR feedback
2018-08-20 16:41:07 -07:00
Matt Ellis acf0fb278a Fix weird interactions due to Cobra and glog
The glog package forces the use of Go's underlying flag package,
which Cobra does not use. To work around this, we had a complicated
dance around defining flags in multiple places, calling flag.Parse
explicitly and then stomping values in the flag package with values we
got from Cobra.

Because we ended up parsing parts of the command line twice, each with
a different set of semantics, we ended up with bad UX in some
cases. For example:

`$ pulumi -v=10 --logflow update`

Would fail with an error message that looked nothing like normal CLI
errors, whereas:

`$ pulumi -v=10 update --logflow`

Would behave as you expect. To address this, we now do two things:

- We never call flag.Parse() anymore. Whacking the flags with values we
  got from Cobra is sufficient for what we care about.

- We use a forked copy of glog which does not complain when
  flag.Parse() is not called before logging.
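
A hedged sketch of the "whack the flags" approach (the flag names and wiring below are illustrative):

```go
// Let Cobra own command-line parsing, then copy the values it parsed into
// glog's flags with flag.Set, never calling flag.Parse at all.
package main

import (
	"flag"
	"strconv"

	_ "github.com/golang/glog" // registers the "v" and "logtostderr" flags
	"github.com/spf13/cobra"
)

func main() {
	var verbose int
	cmd := &cobra.Command{
		Use: "pulumi",
		PersistentPreRun: func(cmd *cobra.Command, args []string) {
			// Push Cobra's values into the standard flag package that glog
			// reads, instead of parsing the command line a second time.
			_ = flag.Set("v", strconv.Itoa(verbose))
			_ = flag.Set("logtostderr", "true")
		},
	}
	cmd.PersistentFlags().IntVarP(&verbose, "verbose", "v", 0, "logging verbosity")
	_ = cmd.Execute()
}
```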

Fixes #301
Fixes #710
Fixes #968
2018-08-20 14:08:40 -07:00
CyrusNajmabadi 89d5dd004e
Do not add the same file multiple times to a zip or tar file. (#1783) 2018-08-15 22:44:55 -07:00
Sean Gillespie d83774ca96
Fix two issues that can cause hangs during refresh (#1770)
1. 'readID' was never assigned to and was always the default value,
leading the refresh source to believe a resource was deleted
2. The refresh source could hang when a resource is deleted.
2018-08-13 21:43:10 -07:00
Sean Gillespie d9c56eab23
Emit diagnostics for errors coming from step generation (#1766)
The plan executor assumed that the step generator was responsible for
logging its own diagnostics, which it sort of is, but it doesn't log the
majority of the diagnostics that come out of it. This commit logs all
errors coming out of step generation so that we don't unintentionally
drop errors.
2018-08-13 13:41:01 -07:00
Sean Gillespie 491bcdc602
Add a list of in-flight operations to the deployment (#1759)
* Add a list of in-flight operations to the deployment

This commit augments 'DeploymentV2' with a list of operations that are
currently in flight. This information is used by the engine to keep
track of whether or not a particular deployment is in a valid state.

The SnapshotManager is responsible for inserting and removing operations
from the in-flight operation list. When the engine registers an intent
to perform an operation, SnapshotManager inserts an Operation into this
list and saves it to the snapshot. When an operation completes, the
SnapshotManager removes it from the snapshot. From this, the engine can
infer that if it ever sees a deployment with pending operations, the
Pulumi CLI must have crashed or otherwise abnormally terminated before
seeing whether or not an operation completed successfully.

To remedy this state, this commit also adds code to 'pulumi stack
import' that clears all pending operations from a deployment, as well as
code to plan generation that will reject any deployments that have
pending operations present.

At the CLI level, if we see that we are in a state where pending
operations were in-flight when the engine died, we'll issue a
human-friendly error message that indicates which resources are in a bad
state and how to recover their stack.
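
A rough Go sketch of the deployment changes; names track the description above, while the exact field names, tags, and the `ResourceV2` stub are illustrative assumptions.

```go
// An operation-type enum plus a list of pending operations on the deployment.
package apitype

// ResourceV2 is a stub here; the real type carries the full resource state.
type ResourceV2 struct {
	URN string `json:"urn"`
}

// OperationType enumerates the kinds of operations that can be in flight.
type OperationType string

const (
	OperationTypeCreating OperationType = "creating"
	OperationTypeUpdating OperationType = "updating"
	OperationTypeDeleting OperationType = "deleting"
	OperationTypeReading  OperationType = "reading"
)

// OperationV2 pairs a resource with the operation that was in flight on it.
type OperationV2 struct {
	Resource ResourceV2    `json:"resource"`
	Type     OperationType `json:"type"`
}

// DeploymentV2 gains the list of operations that were pending when the
// deployment was last written (other fields elided).
type DeploymentV2 struct {
	Resources         []ResourceV2  `json:"resources,omitempty"`
	PendingOperations []OperationV2 `json:"pending_operations,omitempty"`
}
```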

* CR: Multi-line string literals, renaming in-flight -> pending

* CR: Add enum to apitype for operation type, also name status -> type for clarity

* Fix the yaml type

* Fix missed renames

* Add implementation for lifecycle_test.go

* Rebase against master
2018-08-10 21:39:59 -07:00
Alex Clemmer a172f1a048 Implement partial Read
Some time ago, we introduced the concept of the initialization error to
Pulumi (i.e., an error where the resource was successfully created but
failed to fully initialize). This was originally implemented in the `Create`
and `Update` methods of the resource provider interface; when we
detected an initialization failure, we'd pack the live version of the
object into the error, and return that to the engine.

Omitted from this initial implementation was a similar semantics for
`Read`. There are many implications of this, but one of them is that a
`pulumi refresh` will erase any initialization errors that had
previously been observed, even if the initialization errors still exist
in the resource.

This commit will introduce the initialization error semantics to `Read`,
fixing this issue.
2018-08-10 15:10:14 -07:00
Sean Gillespie 623e5f1be2
Emit reads for external resources when refreshing (#1749)
* Emit reads for external resources when refreshing

Fixes pulumi/pulumi#1744. This commit educates the refresh source about
external resources. If a refresh source encounters a resource with the
External bit set, it'll send a Read event to the engine and the engine
will process it accordingly.

* CR: save last event channel instead of last event, style fixes
2018-08-09 12:45:39 -07:00
Sean Gillespie 9a5f4044fa
Serialize SourceEvents coming from the refresh source (#1725)
* Serialize SourceEvents coming from the refresh source

The engine requires that a source event coming from a source be "ready
to execute" at the moment that it is sent to the engine. Since the
refresh source sent all goal states eagerly through its source iterator,
the engine assumed that it was legal to execute them all in parallel and
did so. This is a problem for the snapshot, since the snapshot expects
its resources to be in an order that is a legal topological ordering of
the dependency DAG.

This PR fixes the issue by sending refresh source events one-at-a-time
through the refresh source iterator, only unblocking to send the next
step as soon as the previous step completes.

* Fix deadlock in refresh test

* Fix an issue where the engine "completed" steps too early

By signalling that a step is done before committing the step's results
to the snapshot, the engine was left with a race where dependent
resources could find themselves completely executed and committed before
a resource that they depend on has been committed.

Fixes pulumi/pulumi#1726

* Fix an issue with Replace steps at the end of a plan

If the last step that was executed successfully was a Replace, we could
end up in a situation where we unintentionally left the snapshot
invalid.

* Add a test

* CR: pass context.Context as first parameter to Iterate

* CR: null->nil
2018-08-08 13:45:48 -07:00
Pat Gavlin 5513f08669
Do not call Read in read steps with unknown IDs. (#1734)
This is consistent with the behavior prior to the introduction of Read
steps. In order to avoid a breaking change we must do this check in the
engine itself, which causes a bit of a layering violation: because IDs
are marshaled as raw strings rather than PropertyValues, the engine must
check against the marshaled form of an unknown directly (i.e.
`plugin.UnknownStringValue`).
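
A minimal sketch of the check; the sentinel constant below stands in for `plugin.UnknownStringValue`.

```go
// Because IDs cross the wire as raw strings, compare the marshaled ID
// against the unknown sentinel before deciding whether to call Read.
package main

import "fmt"

// unknownStringValue stands in for plugin.UnknownStringValue; the actual
// constant lives in the plugin package.
const unknownStringValue = "04da6b54-80e4-46f7-96ec-b56ff0331ba9"

func shouldCallRead(id string) bool {
	// During a preview an ID may not be known yet; skip the Read in that case.
	return id != "" && id != unknownStringValue
}

func main() {
	fmt.Println(shouldCallRead("i-0abc123"), shouldCallRead(unknownStringValue))
}
```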
2018-08-08 12:06:20 -07:00
Pat Gavlin 94802f5c16
Fix deletes with duplicate URNs. (#1716)
When calculating deletes, we will only issue a single delete step for a
particular URN. This is incorrect in the presence of pending deletes
that share URNs with a live resource if the pending deletes follow the
live resource in the checkpoint: instead of issuing a delete for
every resource with a particular URN, we will only issue deletes for
the pending deletes.

Before first-class providers, this was mostly benign: any remaining
resources could be deleted by re-running the destroy. With the
first-class provider changes, however, the provider for the undeleted
resources will be deleted, leaving the checkpoint in an invalid state.

These changes fix this issue by allowing the step generator to issue
multiple deletes for a single URN and add a test for this scenario.
2018-08-07 11:01:08 -07:00
Pat Gavlin a222705143
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.

### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.

These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
  or unknown properties. This is necessary because existing plugins
  only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
  values are known or "must replace" if any configuration value is
  unknown. The justification for this behavior is given
  [here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
  configures the provider plugin if all config values are known. If any
  config value is unknown, the underlying plugin is not configured and
  the provider may only perform `Check`, `Read`, and `Invoke`, all of
  which return empty results. We justify this behavior because it is
  only possible during a preview and provides the best experience we
  can manage with the existing gRPC interface.

### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.

All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.

Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
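
A hedged sketch of a provider reference as described here: the combination of the provider resource's URN and its ID, rendered as a single string. The separator and parsing rules below are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// ProviderReference identifies a particular provider resource instance.
type ProviderReference struct {
	URN string
	ID  string
}

// String renders the reference as "<urn>::<id>" (separator is illustrative).
func (r ProviderReference) String() string {
	return r.URN + "::" + r.ID
}

// parseProviderReference splits on the last "::", since URNs themselves
// contain "::" separators.
func parseProviderReference(s string) (ProviderReference, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return ProviderReference{}, fmt.Errorf("%q is not a provider reference", s)
	}
	return ProviderReference{URN: s[:i], ID: s[i+2:]}, nil
}

func main() {
	ref := ProviderReference{URN: "urn:pulumi:dev::proj::pulumi:providers:aws::default", ID: "an-id"}
	fmt.Println(ref)
	fmt.Println(parseProviderReference(ref.String()))
}
```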

### Engine Changes
First-class provider support requires a few changes to the engine:
- The engine must have some way to map from provider references to
  provider plugins. It must be possible to add providers from a stack's
  checkpoint to this map and to register new/updated providers during
  the execution of a plan in response to CRUD operations on provider
  resources.
- In order to support updating existing stacks using existing Pulumi
  programs that may not explicitly instantiate providers, the engine
  must be able to manage the "default" providers for each package
  referenced by a checkpoint or Pulumi program. The configuration for
  a "default" provider is taken from the stack's configuration data.

The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").

The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.

During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.

While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.

### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
  to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
  a particular provider instance to manage a `CustomResource`'s CRUD
  operations.
- A new type, `InvokeOptions`, can be used to specify options that
  control the behavior of a call to `pulumi.runtime.invoke`. This type
  includes a `provider` field that is analogous to
  `ResourceOptions.provider`.
2018-08-06 17:50:29 -07:00
Sean Gillespie 0d0129e400
Execute non-delete step chains in parallel (#1673)
* Execute chains of steps in parallel

Fixes pulumi/pulumi#1624. Since register resource steps are known to be
ready to execute the moment the engine sees them, we can effectively
parallelize all incoming step chains. This commit adds the machinery
necessary to do so - namely a step executor and a plan executor.

* Remove dead code

* CR: use atomic.Value to be explicit about what values are atomically loaded and stored

* CR: Initialize atomics to 'false'

* Add locks around data structures in event callbacks

* CR: Add DegreeOfParallelism method on Options and add comment on select in Execute

* CR: Use context.Context for cancellation instead of cancel.Source

* CR: improve cancellation

* Rebase against master: execute read steps in parallel

* Please gometalinter

* CR: Inline a few methods in stepExecutor

* CR: Feedback and bug fixes

1. Simplify step_executor.go by 'bubbling' up errors as far as possible
and reporting diagnostics and cancellation in one place
2. Fix a bug where the CLI claimed that a plan was cancelled even if it
wasn't (it just has an error)

* Comments

* CR: Add comment around problematic select, move workers.Add outside of goroutine, return instead of break
2018-08-06 16:46:17 -07:00
Matt Ellis 645ca2eb56 Allow more types for runtimeOptions
Instead of just allowing booleans, type the map as string->interface{}
so in the future we could pass string or number options to the
language host.
2018-08-06 14:00:58 -07:00
Matt Ellis ce5eaa8343 Support TypeScript in a more first-class way
This change lets us set runtime specific options in Pulumi.yaml, which
will flow as arguments to the language hosts. We then teach the nodejs
host that when `typescript` is set to `true`, it should load
ts-node before calling into user code. This allows using typescript
natively without an explicit compile step outside of Pulumi.

This works even when a tsconfig.json file is not present in the
application and should provide a nicer inner loop for folks writing
TypeScript. (I'm pretty sure everyone has run into the "but I fixed
that bug!  Why isn't it getting picked up?  Oh, I forgot to run tsc"
problem.)

Fixes #958
2018-08-06 14:00:58 -07:00
Sean Gillespie 48aa5e73f8
Save resources obtained from ".get" in the snapshot (#1654)
* Protobuf changes to record dependencies for read resources

* Add a number of tests for read resources, especially around replacement

* Place read resources in the snapshot with "external" bit set

Fixes pulumi/pulumi#1521. This commit introduces two new step ops: Read
and ReadReplacement. The engine generates Read and ReadReplacement steps
when servicing ReadResource RPC calls from the language host.

* Fix an omission of OpReadReplace from the step list

* Rebase against master

* Transition to use V2 Resources by default

* Add a semantic "relinquish" operation to the engine

If the engine observes that a resource is read and also that the
resource exists in the snapshot as a non-external resource, it will not
delete the resource if the IDs of the old and new resources match.

* Typo fix

* CR: add missing comments, DeserializeDeployment -> DeserializeDeploymentV2, ID check
2018-08-03 14:06:00 -07:00
CyrusNajmabadi 58ae413bca
Do not add a newline between a stream of info messages. The contract is that these will just be appended contiguously. (#1688) 2018-08-02 10:55:15 -04:00
Sean Gillespie c4f08db78b
Fix an issue where we fail to catch duplicate URNs in the same plan (#1687) 2018-08-01 21:32:29 -07:00
Alex Clemmer b0c383e1e8 Add URN argument to HostClient#Log
This will make it possible to correlate log messages to specific
resources.
2018-08-01 11:00:48 -07:00
Sean Gillespie ad3b931bea
Merge pull request #1664 from pulumi/swgillespie/resource-break
Round out the breaking changes for ResourceV2
2018-07-27 18:44:37 -07:00
Pat Gavlin d70d36b887
Handle non-{unknown,partial} statuses in Create/Update. (#1663)
For example resource.StatusOK. Unenlightened providers may return errors
that result in this status; we should not crash if this is the case.
2018-07-27 16:08:11 -07:00
Sean Gillespie 48d095d66f Remove "down" operation for migration - it is not semantically valid in
general.
2018-07-27 14:52:28 -07:00
Alex Clemmer f037c7d143 Checkpoint resource initialization errors
When a resource fails to initialize (i.e., it is successfully created,
but fails to transition to a fully-initialized state), and a user
subsequently runs `pulumi update` without changing that resource, our
CLI will fail to warn the user that this resource is not initialized.

This commit begins the process of allowing our CLI to report this by
storing a list of initialization errors in the checkpoint.
2018-07-20 17:59:06 -07:00
Sean Gillespie 80c28c00d2
Add a migrate package for migrating to and from differently-versioned API types (#1647)
* Add a migrate package for migrating to and from differently-versioned
API types

* travis: gofmt -s deployment_test.go
2018-07-20 13:31:41 -07:00
Sean Gillespie 57ae7289f3
Work around a potentially bad assert in the engine (#1644)
* Work around a potentially bad assert in the engine

The engine asserts if presented with a plan that deletes the same URN
more than once. This has been empirically proven to be possible, so I am
removing the assert.

* CR: Add log for multiple pending-delete deletes
2018-07-18 15:48:55 -07:00
Alex Clemmer d182525fec Add signal cancellation to resource provider 2018-07-15 11:05:44 -10:00
Sean Gillespie 89f2f8abb5
[Parallelism] Introduce a "step generator" component in the engine (#1622)
* [Parallelism] Introduce a "step generator" component by refactoring all
step generation logic out of PlanIterator

* CR: remove dead fields on PlanIterator
2018-07-12 14:24:38 -07:00
CyrusNajmabadi 4761a32cc1
Add support for providing a log stream-id to our RPC interface. (#1627) 2018-07-11 15:04:00 -07:00
Alex Clemmer 582a002ccb FIX: Build break in partial update integration test 2018-07-06 18:03:42 -07:00
Alex Clemmer db9277f934
Merge pull request #1590 from hausdorff/hausdorff/partial-update
Add partial state update
2018-07-06 16:14:49 -07:00
Alex Clemmer 456deaf442 Small cleanups and comments 2018-07-06 15:57:08 -07:00
Joe Duffy b9a0907161
Support empty text assets (#1599)
Per #1481, we unintentionally disallowed empty text assets, simply
because of the way we did discriminated union testing.  It's easy
enough to just treat the empty asset like an empty string asset.
2018-07-05 14:30:35 -07:00
Sean Gillespie 1cbf8bdc40 Partial status for resource providers
This commit adds CLI support for resource providers to provide partial
state upon failure. For resource providers that model resource
operations across multiple API calls, the Provider RPC interface can now
accommodate saving bags of state for resource operations that failed.
This is a common pattern for Terraform-backed providers that try to do
post-creation steps on a resource as part of Create or Update resource
operations.
2018-07-02 13:32:23 -07:00
Pat Gavlin 0c8dedec22
Issue an error when attempting to read an invalid Asset. (#1528)
As an alternative we could consider simply returning an empty `Blob`.
I think it's better to fail loudly, though, as this situation can result
from unexpected/invalid archives (which we need to do a better job of
validating).

Fixes #1493.
2018-06-18 11:08:17 -07:00
Matthew Riley bdb3e98aa0 Use an even more reasonable modtime in ZIP entries
The ZIP format started with MS-DOS dates, which start in 1980. Other dates
have been layered on, but the ZIP file handler used by Azure websites still
relies on the MS-DOS dates.

Using the Unix epoch here (1970) results in ZIP entries that (e.g.)
OSX `unzip` sees as 12-31-1969 (timezones) but Azure websites sees as
01/01/2098.
2018-06-11 19:36:04 -07:00
Pat Gavlin e6849a283f Appease linters.
- Fix a couple self-assignment issues in the Go language support
- Disable `megacheck` for `fh.SetModTime`, which we use for go1.9
  compat.
2018-06-11 14:32:27 -07:00
joeduffy 7d8995991b Support Pulumi programs written in Go
This adds rudimentary support for Pulumi programs written in Go.  It
is not complete yet but the basic resource registration works.

Note that, stylistically speaking, Go is a bit different from our other
languages.  This made it a bit easier to build this initial prototype,
since what we want is actually a rather thin veneer atop our existing
RPC interfaces.  The lack of generics, however, adds some friction and
is something I'm continuing to hammer on; this will most likely lead to
little specialized types (e.g. StringOutput) once the dust settles.

There are two primary components:

1) A new language host, `pulumi-language-go`, which is responsible for
   communicating with the engine through the usual gRPC interfaces.
   Because Go programs are pre-compiled, it very simply loads a binary
   with the same name as the project.

2) A client SDK library that Pulumi programs bind against.  This exports
   the core resource types -- including assets -- properties -- including
   output properties -- and configuration.

Most remaining TODOs are marked as such in the code, and this will not
be merged until they have been addressed, and some better tests written.
2018-06-08 10:36:10 -07:00
Matthew Riley 9916cd5e6b
Merge pull request #1475 from pulumi/zip-header
Use a reasonable value for Modified in ZIP headers
2018-06-07 14:40:10 -07:00
Matthew Riley 2fe284a139 Use a reasonable value for Modified in ZIP headers
Azure websites can't extract the archive without this.

We can't use the `Modified` field of `FileHeader` because it was added in
Go 1.10.
2018-06-07 14:21:03 -07:00
Sean Gillespie b5e4d87687
Improve the error message when data source invocations fail (#1472) 2018-06-07 11:21:38 -07:00
Sean Gillespie 924c49d7e0
Fail fast when attempting to load a too-new or too-old deployment (#1382)
* Error when loading a deployment that is not a version that the CLI understands

* Add a test for 'pulumi stack import' on a badly-versioned deployment

* Move current deployment version to 'apitype'

* Rebase against master

* CR: emit CLI-friendly error message at the two points outside of the engine calling 'DeserializeDeployment'
2018-05-25 13:29:59 -07:00
Sean Gillespie 1a51507206
Delete Before Create (#1365)
* Delete Before Create

This commit implements the full semantics of delete before create. If a
resource is replaced and requires deletion before creation, the engine
will use the dependency graph saved in the snapshot to delete all
resources that depend on the resource being replaced prior to the
deletion of the resource to be replaced.

* Rebase against master

* CR: Simplify the control flow in makeRegisterResourceSteps

* Run Check on new inputs when re-creating a resource

* Fix an issue where the planner emitted benign but incorrect deletes of DBR-deleted resources

* CR: produce the list of dependent resources in dependency order and iterate over the list in reverse

* CR: deps->dependents, fix an issue with DependingOn where duplicate nodes could be added to the dependent set

* CR: Fix an issue where we were considering old defaults and new inputs
inappropriately when re-creating a deleted resource

* CR: save 'iter.deletes[urn]' as a local, iterate starting at cursorIndex + 1 for dependency graph
2018-05-23 14:43:17 -07:00
joeduffy 5967259795 Add license headers 2018-05-22 15:02:47 -07:00
Sean Gillespie 7b7870cdaa
Remove unused stack name from deploy.Snapshot (#1386) 2018-05-18 11:15:35 -07:00
Sean Gillespie 68911900fd
Graceful shutdown (#1320)
* Graceful RPC shutdown: CLI side

* Handle unavailable resource monitor in language hosts

* Fix a comment

* Don't commit package-lock.json

* fix mangled pylint pragma

* Rebase against master and fix Gopkg.lock

* Code review feedback

* Fix a race between closing the callerEventsOpt channel and terminating a goroutine that writes to it

* glog -> logging
2018-05-16 15:37:34 -07:00
CyrusNajmabadi 72e00810c4
Filter the logs we emit to glog so that we don't leak out secrets. (#1371) 2018-05-15 15:28:00 -07:00
Joe Duffy 369c619ab9
Skip loading language plugins when not needed (#1367)
In pulumi/pulumi#1356, we observed that we can fail during a destroy
because we attempt to load the language plugin, which now eagerly looks
for the @pulumi/pulumi package.

This is also blocking ingestion of the latest engine bits into the PPC.

It turns out that for destroy (and refresh), we have no need for the
language plugin.  So, let's skip loading it when appropriate.
2018-05-14 20:32:53 -07:00
Joe Duffy 457c34ff50
Add a PULUMI_DEV flag, and suppress warnings (#1361)
This change suppresses the warning

    warning: resource plugin aws is expected to have version >=0.11.3,
        but has 0.11.1-dev-1523506162-g06ec765; the wrong version may
        be on your path, or this may be a bug in the plugin

when the PULUMI_DEV envvar is set to a truthy value.

This warning keeps popping up in demos since I'm always using dev builds
and I'd like a way to shut it off, even though this can legitimately
point out a problem.  Eventually I'll switch to official builds but,
until then, it seems worth having a simple way to suppress.
2018-05-11 20:59:01 -07:00
Pat Gavlin bb5b7da650 Revert "Revert the changes from #1261." 2018-05-08 11:46:15 -07:00
Pat Gavlin 97ace29ab1
Begin tracing Pulumi API calls. (#1330)
These changes enable tracing of Pulumi API calls.

The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.

In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
2018-05-07 18:23:03 -07:00
CyrusNajmabadi 092696948d
Restore streaming of plugin outputs to the progress display. (#1333) 2018-05-07 15:11:52 -07:00
Joe Duffy f92eb0a4e8 Run Configure calls in parallel (#1321)
As of this change, the engine will run all Configure calls in parallel.
This improves startup performance, since otherwise, we would block
waiting for all plugins to be configured before proceeding to run a
program.  Empirically, this is about 1.5-2s for AWS programs, and
manifests as a delay between the purple "Previewing update of stack"
being printed, and the corresponding grey "Previewing update" message.

This is done simply by using a Goroutine for Configure, and making sure
to synchronize on all actual CRUD operations.  I toyed with using double
checked locking to eliminate lock acquisitions -- something we may want
to consider as we add more fine-grained parallelism -- however, I've
kept it simple to avoid all the otherwise implied memory model woes.

I made the judgment call that GetPluginInfo may proceed before
Configure has settled.  (Otherwise, we'd immediately call it and block
after loading the plugin, obviating the parallelism benefits.)  I also
made the judgment call to do this in the engine, after flip flopping
several times about possibly making it a provider's own decision.
2018-05-04 14:29:47 -07:00
Pat Gavlin 639e605bed Revert the changes from #1261.
Restore the provider-first diff logic these changes disabled.

Part of #1251.
2018-05-01 10:01:18 -07:00
Luke Hoban 7dac925bcf
Fix handling of nested archives (#1283)
AssetArchives which include nested archives were not embedding content from the nested archive underneath the key of the nested archive.

Fixes #1272.
2018-04-27 17:10:50 -07:00
Sean Gillespie 14baf866f6
Snapshot management overhaul and refactor (#1273)
* Refactor the SnapshotManager interface

Lift snapshot management out of the engine by delegating it to the
SnapshotManager implementation in pkg/backend.

* Add a event interface for plugin loads and use that interface to record plugins in the snapshot

* Remove dead code

* Add comments to Events

* Add a number of tests for SnapshotManager

* CR feedback: use a successful bit on 'End' instead of having a separate 'Abort' API

* CR feedback

* CR feedback: register plugins one-at-a-time instead of the entire state at once
2018-04-25 17:20:08 -07:00
pat@pulumi.com ca2e996428 Work around issue #1251.
This issue arises because the behavior we're currently getting from Diff
for TF-based providers differs from the behavior we expect. We are
presenting the provider with the old state and new inputs. If the old
state contains output properties that differ from the new inputs, the
provider will detect a diff where we may expect no changes.

Rather than deferring to the provider for all diffs, these changes only
defer to the provider if a legacy diff was detected (i.e. there is some
difference between the old and new provider-calculated inputs).
2018-04-24 10:35:27 -07:00
pat@pulumi.com 42b67a4dbc Fix a couple of typos.
- Restore a dropped "update"
- %s/Resouce/Resource/g
2018-04-20 11:09:54 -07:00
Matt Ellis cc938a3bc8 Merge remote-tracking branch 'origin/master' into ellismg/identity 2018-04-20 01:56:41 -04:00
joeduffy bac58d7922 Respond to CR feedback
Incorporate feedback from @swgillespie and @pgavlin.
2018-04-18 11:46:37 -07:00
joeduffy b77403b4bb Implement a refresh command
This change implements a `pulumi refresh` command.  It operates a bit
like `pulumi update`, and friends, in that it supports `--preview` and
`--diff`, along with the usual flags, and will update your checkpoint.

It works through substitution of the deploy.Source abstraction, which
generates a sequence of resource registration events.  This new
deploy.RefreshSource takes in a prior checkpoint and will walk it,
refreshing the state via the associated resource providers by invoking
Read for each resource encountered, and merging the resulting state with
the prior checkpoint, to yield a new resource.Goal state.  This state is
then fed through the engine in the usual ways with a few minor caveats:
namely, although the engine must generate steps for the logical
operations (permitting us to get nice summaries, progress, and diffs),
it mustn't actually carry them out because the state being imported
already reflects reality (a deleted resource has *already* been deleted,
so of course the engine need not perform the deletion).  The diffing
logic also needs to know how to treat the case of refresh slightly
differently, because we are going to be diffing outputs and not inputs.

Note that support for managed stacks is not yet complete, since that
requires updates to the service to support a refresh endpoint.  That
will be coming soon ...
2018-04-18 10:57:16 -07:00
Matt Ellis b5129dba19 Don't leak a file from asset_test.go 2018-04-18 04:54:02 -07:00
Matt Ellis bac02f1df1 Remove the need to pulumi init for the local backend
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation for the next
set of changes, which will stop having `pulumi init` be used for cloud
stacks as well.

Previously, `pulumi init` logically did two things:

1. It created the bookkeeping directory for local stacks; this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we believed the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.

2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com

The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.

However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somewhere.

For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.

For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.

This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).

With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
2018-04-18 04:53:49 -07:00
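
A small sketch of the naming scheme described above, with an assumed file-name format: the workspace settings file is keyed by the project name plus the SHA1 of the project file's path, so two projects with the same name in different locations get distinct settings.

    package main

    import (
        "crypto/sha1"
        "fmt"
        "os"
        "path/filepath"
    )

    // workspaceSettingsPath returns something like
    // ~/.pulumi/workspaces/<project>-<sha1-of-project-path>-workspace.json.
    // The exact file name is illustrative.
    func workspaceSettingsPath(projectName, projectFilePath string) (string, error) {
        home, err := os.UserHomeDir()
        if err != nil {
            return "", err
        }
        sum := sha1.Sum([]byte(projectFilePath))
        file := fmt.Sprintf("%s-%x-workspace.json", projectName, sum)
        return filepath.Join(home, ".pulumi", "workspaces", file), nil
    }

    func main() {
        path, _ := workspaceSettingsPath("my-project", "/home/me/src/my-project/Pulumi.yaml")
        fmt.Println(path)
    }
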
Sean Gillespie 55711e4ca3
Revert "Lift snapshot management out of the engine and serialize writes to snapshot (#1069)" (#1216)
This reverts commit 2c479c172d.
2018-04-16 23:04:56 -07:00
Joe Duffy 10f4f2c7c4
Revert "Temporarily work around pulumi/pulumi#1147 (#1161)" (#1207)
This reverts commit 9c243720a6.
2018-04-16 12:29:52 -07:00
CyrusNajmabadi 7b96b8cdcf
Produce a single message for the text we receive when running, not a message per line of output. (#1191) 2018-04-13 15:44:35 -07:00
Joe Duffy 9c243720a6
Temporarily work around pulumi/pulumi#1147 (#1161)
This reverts back to our old diff behavior, temporarily, while we
work on a fix to pulumi/pulumi#1147 and validate that it works broadly.
2018-04-12 10:59:25 -07:00
Sean Gillespie 2c479c172d
Lift snapshot management out of the engine and serialize writes to snapshot (#1069)
* Lift snapshot management out of the engine

This PR is a prerequisite for parallelism by addressing a major problem
that the engine has to deal with when performing parallel resource
construction: parallel mutation of the global snapshot. This PR adds
a `SnapshotManager` type that is responsible for maintaining and
persisting the current resource snapshot. It serializes all reads and
writes to the global snapshot and persists the snapshot to persistent
storage upon every write.

As a side-effect of this, the core engine no longer needs to know about
snapshot management at all; all snapshot operations can be handled as
callbacks on deployment events. This will greatly simplify the
parallelization of the core engine.

Worth noting is that the core engine will still need to be able to read
the current snapshot, since it is interested in the dependency graphs
contained within. The full implications of that are out of scope of this
PR.

Remove dead code, Steps no longer need a reference to the plan iterator that created them

Fixing various issues that arise when bringing up pulumi-aws

Line length broke the build

Code review: remove dead field, fix yaml name error

Rebase against master, provide implementation of StackPersister for cloud backend

Code review feedback: comments on MutationStatus, style in snapshot.go

Code review feedback: move SnapshotManager to pkg/backend, change engine to use an interface SnapshotManager

Code review feedback: use a channel for synchronization

Add a comment and a new test

* Maintain two checkpoints, an immutable base and a mutable delta, and
periodically merge the two to produce snapshots

* Add a lot of tests - covers all of the non-error paths of BeginMutation and End

* Fix a test resource provider

* Add a few tests, fix a few issues

* Rebase against master, fixed merge
2018-04-12 09:55:34 -07:00
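
A toy version of the serialization pattern described in this commit (channel-based, per the review feedback), with placeholder Snapshot and persister types: every mutation is funneled through a single goroutine and persisted before the next one is applied.

    package backend

    // Snapshot stands in for the engine's resource snapshot.
    type Snapshot struct{ Resources []string }

    // Persister saves a snapshot to durable storage (file, service, ...).
    type Persister interface{ Save(*Snapshot) error }

    // manager owns the snapshot; one goroutine applies mutations so callers
    // never race on shared state.
    type manager struct {
        requests chan func(*Snapshot)
        done     chan struct{}
    }

    func newManager(snap *Snapshot, p Persister) *manager {
        m := &manager{requests: make(chan func(*Snapshot)), done: make(chan struct{})}
        go func() {
            defer close(m.done)
            for mutate := range m.requests {
                mutate(snap)
                _ = p.Save(snap) // persist after every write
            }
        }()
        return m
    }

    // Mutate runs fn inside the single writer goroutine and waits for it.
    func (m *manager) Mutate(fn func(*Snapshot)) {
        applied := make(chan struct{})
        m.requests <- func(s *Snapshot) { fn(s); close(applied) }
        <-applied
    }

    func (m *manager) Close() { close(m.requests); <-m.done }
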
Matt Ellis 50843a98c1 Retry some HTTP operations
We've seen failures in CI where DNS lookups fail which cause our
operations against the service to fail, as well as other sorts of
timeouts.

Add a set of helper methods in a new httputil package that helps us do
retries on these operations, and then update our client library to use
them when we are doing GET requests. We also provide a way for non-GET
requests to be retried, and use this when updating a lease (since it
is safe to retry multiple requests in this case).
2018-04-11 14:58:25 -07:00
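
A hedged sketch of the kind of helper this describes (not the actual httputil API): retry idempotent GETs on transport errors and 5xx responses with a simple linear backoff.

    package httputil

    import (
        "fmt"
        "net/http"
        "time"
    )

    // GetWithRetry issues a GET and retries on transport errors and 5xx
    // responses. Illustrative only; real code would also cap total time.
    func GetWithRetry(client *http.Client, url string, attempts int) (*http.Response, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode < 500 {
                return resp, nil
            }
            if err != nil {
                lastErr = err
            } else {
                resp.Body.Close()
                lastErr = fmt.Errorf("server error: %s", resp.Status)
            }
            time.Sleep(time.Duration(i+1) * 500 * time.Millisecond)
        }
        return nil, fmt.Errorf("GET %s failed after %d attempts: %w", url, attempts, lastErr)
    }
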
Joe Duffy 479a2e6ad5
Add an ID property to ReadResponse (#1145)
The RPC provider interface needs a way to convey back to the engine
that a resource being read no longer exists.  To do this, we'll return
the ID property that was read back.  If it is empty, it means the
resource is gone.  If it is non-empty, we expect it to match the input.
2018-04-10 12:58:50 -07:00
CyrusNajmabadi a759f2e085
Switch to a resource-progress oriented view for pulumi preview/update/destroy (#1116) 2018-04-10 12:03:11 -07:00
Joe Duffy b33d4d762c
Skip reading unknown IDs (#1124)
This change skips unknown IDs during read operations.  This can happen
when a read is performed using the output property of another resource
during planning.  This is intentionally supported via ID being an
Input<ID> and all we need to do for this to work correctly is skip the
actual provider RPC and the runtime will propagate unknown outputs as
usual.
2018-04-07 07:52:10 -07:00
Joe Duffy f2ae3a7afc
Permit plugin versions to float (#1122)
This change lets plugin versions float in two ways:

1) If a `pulumi plugin install` detects a newer version is available
   already, there's no need to download and install the older version.

2) If the engine attempts to load a plugin at a particular version,
   if a newer version is available, it will be accepted without error.

As part of this, we permit $PATH to have the final say when determining
which version to accept.  That is, it can always override the choice.

Note that I highly suspect, in the limit, that we'll want to stop doing
this for major version incompatibilities. For now, since we don't
envision any such version changes imminently, this will suffice.
2018-04-05 16:37:50 -07:00
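
A small sketch of the acceptance rule, using golang.org/x/mod/semver for comparison (the engine has its own version handling): an installed plugin is accepted when it is at least the requested version, letting the version float upward.

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // acceptable reports whether an installed plugin version `have`
    // satisfies a request for `want`, allowing newer versions.
    func acceptable(want, have string) bool {
        // semver.Compare expects a leading "v".
        return semver.Compare("v"+have, "v"+want) >= 0
    }

    func main() {
        fmt.Println(acceptable("0.8.3", "0.8.5")) // true: newer is fine
        fmt.Println(acceptable("0.8.3", "0.8.1")) // false: too old
    }
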
joeduffy 5e28a4ab07 Add the ability to read an existing resource
This change wires up the new Read RPC method in such a manner that
Pulumi programs can invoke it.  This is technically not required for
refreshing state programmatically (as in pulumi/pulumi#1081), however
it's a feature we had eons ago and have wanted since (see
pulumi/pulumi#83), and will allow us to write code like

    let vm = aws.ec2.Instance.get("my-vm", "i-07043cd97bd2c9cfc");
    // use any property from here on out ...

The way this works is simply by bridging the Pulumi program via its
existing RPC connection to the engine, much like Invoke and
RegisterResource RPC requests already do, and then invoking the proper
resource provider in order to read the state.  Note that some resources
cannot be uniquely identified by their ID alone, and so an extra
resource state bag may be provided with just those properties required.

This came almost for free (okay, not exactly) and will come in handy as
we start gaining experience with reading live state from resources.
2018-04-05 09:48:09 -07:00
joeduffy 22584e7e37 Make some resource model changes
This commit changes two things about our resource model:

* Stop performing Pulumi Engine-side diffing of resource state.
  Instead, we defer to the resource plugins themselves to determine
  whether a change was made and, if so, the extent of it.  This
  manifests as a simple change to the Diff function; it is done in
  a backwards compatible way so that we continue with legacy diffing
  for existing resource provider plugins.

* Add a Read RPC method for resource providers.  It simply takes a
  resource's ID and URN, plus an optional bag of further qualifying
  state, and it returns the current property state as read back from
  the actual live environment.  Note that the optional bag of state
  must at least include enough additional properties for resources
  wherein the ID is insufficient for the provider to perform a lookup.
  It may, however, include the full bag of prior state, for instance
  in the case of a refresh operation.

This is part of pulumi/pulumi#1108.
2018-04-05 08:14:25 -07:00
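
The Read shape described here, sketched with simplified types (the real interface works with resource.PropertyMap and typed URNs and IDs):

    package plugin

    // PropertyMap stands in for the engine's resource.PropertyMap.
    type PropertyMap map[string]interface{}

    // Provider is a trimmed-down view of a resource provider. Read takes the
    // resource's ID and URN plus an optional bag of extra state for resources
    // whose ID alone is not enough to perform a lookup, and returns the
    // property state read back from the live environment.
    type Provider interface {
        Read(urn, id string, state PropertyMap) (PropertyMap, error)
    }
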
Sean Gillespie a3a6101e79
Improve the error message arising from missing required configs for resource providers (#1097)
* Improve the error message arising from missing required configs for
resource providers

If the resource provider that we are speaking to is new enough, it will send
across a list of keys and their descriptions alongside an error
indicating that the provider we are configuring is missing required
config. This commit packages up the list of missing keys into an error
that can be presented nicely to the user.

* Code review feedback: renaming simplification and correcting errors in comments
2018-04-04 10:08:17 -07:00
CyrusNajmabadi 4b761f9fc1
Include richer information in events so that final display can flexibly choose how to present it. (#1088) 2018-03-31 12:08:48 -07:00
Sean Gillespie 91c550f1e0
Send structured errors across RPC boundaries (#1072)
* Send structured errors across RPC boundaries

This brings us closer to gRPC best practices where we send structured
errors with error codes across RPC endpoints. The new "rpcerrors"
package can wrap errors from RPC endpoints, so RPC servers can attach
some additional context as to why a request failed.

* Code review feedback:

1. Rename rpcerrors -> rpcerror, better package name
2. Rename RPCError -> Error, RPCErrorCause -> ErrorCause, names
suggested by gometalinter to improve their package-qualified names
3. Fix import organization in rpcerror.go
2018-03-28 17:07:35 -07:00
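
A hedged sketch of the general pattern (not the actual rpcerror API), using the standard gRPC status and codes packages: the server attaches a code and message, and the caller recovers them rather than surfacing a raw transport error.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // newConfigureError is what a provider's RPC handler might return.
    func newConfigureError(missingKey string) error {
        return status.Errorf(codes.InvalidArgument,
            "provider is missing required configuration key %q", missingKey)
    }

    func main() {
        err := newConfigureError("aws:region")
        // The caller unwraps the structured error instead of showing the
        // raw RPC failure to the user.
        if st, ok := status.FromError(err); ok {
            fmt.Printf("code=%s message=%s\n", st.Code(), st.Message())
        }
    }
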
joeduffy 16e45dc92a Remove some outdated comments 2018-03-28 07:56:35 -07:00
joeduffy 8b5874dab5 General prep work for refresh
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):

* The primary change is to change the way the engine's core update
  functionality works with respect to deploy.Source.  This is the
  way we can plug in new sources of resource information during
  planning (and, soon, diffing).  The way I intend to model refresh
  is by having a new kind of source, deploy.RefreshSource, which
  will let us do virtually everything about an update/diff the same
  way with refreshes, which avoid otherwise duplicative effort.

  This includes changing the planOptions (nee deployOptions) to
  take a new SourceFunc callback, which is responsible for creating
  a source specific to the kind of plan being requested.

  Preview, Update, and Destroy now are primarily differentiated by
  the kind of deploy.Source that they return, rather than sprinkling
  things like `if Destroying` throughout.  This tidies up some logic
  and, more importantly, gives us precisely the refresh hook we need.

* Originally, we used the deploy.NullSource for Destroy operations.
  This simply returns nothing, which is how Destroy works.  For some
  reason, we were no longer doing this, and instead had some
  `if Destroying` cases sprinkled throughout the deploy.EvalSource.
  I think this is a vestige of some old way we did configuration, at
  least judging by a comment, which is apparently no longer relevant.

* Move diff and diff-printing logic within the engine into its own
  pkg/engine/diff.go file, to prepare for upcoming work.

* I keep noticing benign diffs anytime I regenerate protobufs.  I
  suspect this is because we're also on different versions.  I changed
  generate.sh to also dump the version into grpc_version.txt.  At
  least we can understand where the diffs are coming from, decide
  whether to take them (i.e., a newer version), and ensure that as
  a team we are monotonically increasing, and not going backwards.

* I also tidied up some tiny things I noticed while in there, like
  comments, incorrect types, lint suppressions, and so on.
2018-03-28 07:45:23 -07:00
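
A condensed sketch of the SourceFunc idea with placeholder types: each operation supplies a callback that builds its own kind of source, so the core planning path stays identical across preview, update, destroy, and (eventually) refresh.

    package engine

    // Source produces the stream of resource events the planner consumes;
    // a stand-in for deploy.Source.
    type Source interface {
        Iterate() ([]string, error)
    }

    // planOptions carries the callback that creates the operation's source.
    type planOptions struct {
        SourceFunc func() (Source, error)
    }

    type evalSource struct{}    // runs the program (preview/update)
    type nullSource struct{}    // produces nothing (destroy)
    type refreshSource struct{} // replays a prior checkpoint via Read (refresh)

    func (evalSource) Iterate() ([]string, error)    { return []string{"register:bucket"}, nil }
    func (nullSource) Iterate() ([]string, error)    { return nil, nil }
    func (refreshSource) Iterate() ([]string, error) { return []string{"read:bucket"}, nil }

    func destroyOptions() planOptions {
        return planOptions{SourceFunc: func() (Source, error) { return nullSource{}, nil }}
    }
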
Pat Gavlin a23b10a9bf
Update the copyright end date to 2018. (#1068)
Just what it says on the tin.
2018-03-21 12:43:21 -07:00
Joe Duffy 5924f6b8c3
Ensure destroy plugins are present (#1043)
This change uses the prior checkpoint's deployment manifest to pre-
populate all plugins required to complete the destroy operation.  This
allows for subsequent attempts to load a resource's plugin to match the
already-loaded version.  This approach obviously doesn't work in a
hypothetical future world where plugins for the same resource provider
are loaded side-by-side, but we already know that.
2018-03-12 16:27:39 -07:00
Matt Ellis 936cab0c22 Add a version property to checkpoints
This takes the existing `apitype.Checkpoint` type and renames it to
`apitype.CheckpointV1` locking in the shape. In addition, we introduce
a `apitype.VersionedCheckpoint` type, which holds a version number and
a json document representing a checkpoint at that version. Now, when
reading a checkpoint, the CLI can determine if it's in a format it
understands, and fail gracefully if it is not.

While the CLI understands the older checkpoint version, it always
writes the newest version format, meaning that if you manage a
fire-and-forget stack with this version of the CLI, it will be
un-readable by previous versions.

Stacks managed by Pulumi.com are not impacted by this change.

Fixes: #887
2018-03-10 13:03:05 -08:00
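
A sketch of the versioned envelope, with assumed field names (the real types live in pkg/apitype): the version is inspected before the payload is decoded, so an unknown version can fail gracefully instead of being misread.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // VersionedCheckpoint wraps a checkpoint payload with its format version.
    type VersionedCheckpoint struct {
        Version    int             `json:"version"`
        Checkpoint json.RawMessage `json:"checkpoint"`
    }

    // CheckpointV1 is the locked-in V1 shape (heavily trimmed here).
    type CheckpointV1 struct {
        Stack string `json:"stack"`
    }

    func loadCheckpoint(data []byte) (*CheckpointV1, error) {
        var versioned VersionedCheckpoint
        if err := json.Unmarshal(data, &versioned); err != nil {
            return nil, err
        }
        if versioned.Version != 1 {
            return nil, fmt.Errorf("unsupported checkpoint version %d", versioned.Version)
        }
        var chk CheckpointV1
        if err := json.Unmarshal(versioned.Checkpoint, &chk); err != nil {
            return nil, err
        }
        return &chk, nil
    }

    func main() {
        chk, err := loadCheckpoint([]byte(`{"version":1,"checkpoint":{"stack":"dev"}}`))
        fmt.Println(chk, err) // &{dev} <nil>
    }
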
Sean Gillespie 703a954839
Improve error messages output by the CLI (#1011)
* Improve error messages output by the CLI

This fixes a couple known issues with the way that we present errors
from the Pulumi CLI:
    1. Any errors from RPC endpoints were bubbling up unmodified to
    the top-level, which was unfortunate because they contained
    RPC-specific noise that we don't want to present to the user. This
    commit unwraps errors from resource providers.
    2. The "catastrophic error" message often got printed twice
    3. Fatal errors are often printed twice, because our CLI top-level
    prints out the fatal error that it receives before exiting. A lot of
    the time this error has already been printed.
    4. Errors were prefixed by PU####.

* Feedback: Omit the 'catastrophic' error message and use a less verbose error message as the final error

* Code review feedback: interpretRPCError -> resourceStateAndError

* Code review feedback: deleting some commented-out code, error capitalization

* Cleanup after rebase
2018-03-09 15:43:16 -08:00
Matt Ellis 344d9b4424
Merge pull request #1025 from pulumi/SerializePluginLoads
Serialize plugin loads.
2018-03-09 15:20:42 -08:00
Matt Ellis 96d39b60d1 Filter secrets from Pulumi's outputs
When a stack has secrets, we now take the secret values and construct
a regular expression which is just an alternation of all the secret
values. Then, before pushing any string data into an Event, we run the
regular expression and replace all matches with '[secret]'.

Fixes #747
2018-03-09 13:23:25 -08:00
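
The filtering described above in miniature: quote each secret value, join them into a single alternation, and mask matches before any string leaves the engine as an event.

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // newSecretFilter builds a replacer that masks every known secret value.
    func newSecretFilter(secrets []string) func(string) string {
        if len(secrets) == 0 {
            return func(s string) string { return s }
        }
        quoted := make([]string, len(secrets))
        for i, s := range secrets {
            quoted[i] = regexp.QuoteMeta(s)
        }
        re := regexp.MustCompile(strings.Join(quoted, "|"))
        return func(s string) string { return re.ReplaceAllString(s, "[secret]") }
    }

    func main() {
        filter := newSecretFilter([]string{"hunter2"})
        fmt.Println(filter("the password is hunter2")) // the password is [secret]
    }
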
pat@pulumi.com dc36b9569a Serialize plugin loads.
As it stands, we allow plugin load requests to race. Not only does this
create a situation in which we may load and then immediately throw away
a plugin (potentially leaking its process), it also creates the
possibility of races when reading from/writing to the various plugin
caches. These changes serialize all plugin loads and cache accesses by
running all accesses for a particular host in a single goroutine.

Fixes #1020.
2018-03-09 11:31:02 -08:00
Matt Ellis 731463c282 Have MakeKey fail if namespace contains a colon
This helper method is only really used for testing, but we should not
allow it to create a Key whose namespace has a colon (as ParseKey
would not build something like this).
2018-03-08 11:52:48 -08:00
Matt Ellis 5dfd720bc3 Remove config.AsModuleMember()
This API was introduced to aid the refactoring, but it isn't something
we want to support long term. Remove it and for a few places, push
passing config.Key around more, instead of converting to the old type
eagerly.
2018-03-08 10:52:25 -08:00
Matt Ellis 9f363a1322 Change JSON/YAML representation of config.Key
When serializing config.Key's we now write them as <package>:<name>
instead of <package>:config:<name>. We continue to support reading the
older format for compatibility with older files.
2018-03-08 10:52:25 -08:00
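
A sketch of the serialization rule with an illustrative Key type: write the new `<package>:<name>` form, but keep accepting the legacy `<package>:config:<name>` form on read.

    package config

    import (
        "fmt"
        "strings"
    )

    // Key is a (namespace, name) pair, e.g. "aws" and "region".
    type Key struct{ Namespace, Name string }

    // String writes the new, shorter form.
    func (k Key) String() string { return k.Namespace + ":" + k.Name }

    // ParseKey accepts both "<package>:<name>" and the legacy
    // "<package>:config:<name>" form.
    func ParseKey(s string) (Key, error) {
        parts := strings.Split(s, ":")
        switch {
        case len(parts) == 2:
            return Key{Namespace: parts[0], Name: parts[1]}, nil
        case len(parts) == 3 && parts[1] == "config":
            return Key{Namespace: parts[0], Name: parts[2]}, nil
        default:
            return Key{}, fmt.Errorf("could not parse %q as a configuration key", s)
        }
    }
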
Matt Ellis fd84c11ca5 Don't export config.FromModuleMember
config.ParseKey should be used instead.
2018-03-08 10:52:25 -08:00
Matt Ellis 81a273c7bb Change representation of config.Key
config.Key has become a pair of namespace and name. Because the whole
world has not changed yet, there continues to be a way to convert
between a tokens.ModuleMember and config.Key, however now sometime the
conversion from tokens.ModuleMember can fail (when the module member
is not of the form `<package>:config:<name>`).
2018-03-08 10:52:25 -08:00
Matt Ellis 36ab2ce9f2 Add JSON and YAML marshalling tests for config.{Key,Map}
I'll be changing the structure of the representation of config.Key, so
let's write some tests first to ensure we can continue to treat
everything as JSON and YAML.
2018-03-08 10:52:25 -08:00
Matt Ellis 9ae36e78dd Split value.go into a few files
No code changes, just re-ordering some things into separate files
which are logically distinct.
2018-03-08 10:52:25 -08:00
Matt Ellis 7c39620e9a Introduce config.Key
Right now, config.Key is a type alias for tokens.ModuleMember. I did a
pass over the codebase such that we use config.Key everywhere it
looked like the value did not leak to some external process (e.g. a
resource provider or a langhost).

Doing this makes it a little clearer (hopefully) where code is
depending on a module member structure (e.g. <package>:config:<value>)
instead of just an opaque type.
2018-03-08 10:52:25 -08:00
pat@pulumi.com 45a4a41e0d Configure resource providers upon load.
As it stands, we only configure those providers for which configuration
is present. This can lead to surprising failure modes if those providers
are then used to create resources. These changes ensure that all
resource providers that are not configured during plan initialization
are configured upon first load.

Fixes #758.
2018-03-06 16:38:53 -08:00
Joe Duffy 09cceb4e9e
Remove a few outdated references (#997) 2018-03-04 13:34:20 -08:00
Matt Ellis 4b850dcb15 Remove pkg/resource/idl
A hold-over from a previous experiment (LumiIDL) which we don't use
anymore. If we decide to bring that back, we can easily restore these
types, but for now, let's just remove this dead code.
2018-02-28 17:41:04 -08:00
Matt Ellis eb1b9d685f Remove pkg/compiler/errors
Most of the errors in this package are holdovers from our previous
system where we had our own custom compiler and evaluator and are no
longer needed. The few we still use during plan application (via the
diagnostics system, which is another component from the old system
that we still use) have been promoted into the diag package. Doing so
allows us to not have to import "github.com/pkg/errors" as "goerr" in
some parts of the engine, a nice cleanup.
2018-02-28 17:41:04 -08:00
joeduffy 2362d45a5c Eliminate type redundancy
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc.  These too had semi-random omissions.

This change merges all of these types into the apitype package.  This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.

The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.

Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also.  This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
2018-02-28 12:44:55 -08:00
Matt Ellis ed7a4d9157 Check plugin cache first; make version mismatch a warning
Previously, we would prefer a plugin on the $PATH which is more or
less always the case for people hacking on `pulumi`. Later, when we
went to check the loaded plugin version matched the one we requested,
we fail.

Now, if we have a version, we'll first consult the local plugin
cache. If that fails, we'll fall back to the $PATH as we used to.

When we are loading a plugin without a version, we continue to use the
one on the $PATH (without testing the cache) on the assumption it is
newer.

In addition, we've turned the "plugin versions are mismatched" from
an error into a warning. We expect that we'll only ever see this
warning when something strange is going on (since in the normal case,
we'll have found the exact version in the cache) but having it not
hard fail does help in development cases.

Fixes #977
2018-02-26 11:39:50 -08:00
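
A rough sketch of the lookup order described here, with an assumed cache layout (~/.pulumi/plugins/<kind>-<name>-v<version>/): when a version is known, consult the local plugin cache first, then fall back to $PATH for development builds.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // lookupPlugin returns the path of a plugin binary, preferring the local
    // plugin cache whenever a specific version was requested.
    func lookupPlugin(kind, name, version string) (string, error) {
        bin := fmt.Sprintf("pulumi-%s-%s", kind, name)
        if version != "" {
            if home, err := os.UserHomeDir(); err == nil {
                cached := filepath.Join(home, ".pulumi", "plugins",
                    fmt.Sprintf("%s-%s-v%s", kind, name, version), bin)
                if _, err := os.Stat(cached); err == nil {
                    return cached, nil
                }
            }
        }
        // Fall back to whatever is on $PATH (useful when hacking on pulumi).
        return exec.LookPath(bin)
    }

    func main() {
        path, err := lookupPlugin("resource", "aws", "0.8.3")
        fmt.Println(path, err)
    }
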
joeduffy d9a143c8a1 Implement the Python langhost RPC server
This change adds a basic Python langhost RPC server.  It's fairly
barebones and merely acts as a jumping off point for the Pulumi engine
to spawn a Python program.  The host is written in Go, in contrast to
implementing the host in Python, and more closely resembles how I
expect the Node.js language host to work once pulumi/pulumi#331 is done.
2018-02-23 19:33:02 -08:00
Sean Gillespie 9757d069ed
Merge pull request #972 from pulumi/swgillespie/dependency-view
Save resource dependency information in the checkpoint file
2018-02-23 11:19:38 -08:00
Sean Gillespie b84320b45e
Code review feedback:
1. Various idiomatic Go and TypeScript fixes
    2. Add an integration test that end-to-end roundtrips dependency
    information for a simple Pulumi program
    3. Add an additional test assert that tests that dependency information
    comes from the language host as expected
2018-02-22 13:33:50 -08:00
Joe Duffy d57a456269
Tolerate missing GetRequiredPlugins RPC method (#973)
This change makes the engine backwards compatible with older
language host binaries, by simply ignoring GetRequiredPlugins
calls when the RPC server has not yet implemented it.  This
is benign, since we will eventually fault plugins in on demand,
although it does mean that commands like `pulumi plugin install`
will become no-ops (which, thankfully, is what we want).
2018-02-22 07:50:37 -08:00
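
A minimal sketch of the tolerance check using the standard gRPC status and codes packages: an Unimplemented response from an older language host is treated as "no required plugins" rather than as an error.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // requiredPlugins wraps a language-host call; hosts that do not implement
    // GetRequiredPlugins simply report no requirements.
    func requiredPlugins(call func() ([]string, error)) ([]string, error) {
        plugins, err := call()
        if err != nil {
            if status.Code(err) == codes.Unimplemented {
                return nil, nil // old host: plugins will be faulted in on demand
            }
            return nil, err
        }
        return plugins, nil
    }

    func main() {
        plugins, err := requiredPlugins(func() ([]string, error) {
            return nil, status.Error(codes.Unimplemented, "GetRequiredPlugins not implemented")
        })
        fmt.Println(plugins, err) // [] <nil>
    }
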
Sean Gillespie ad06e9b0d8
Save resource dependency information in the checkpoint file
This commit does two things:
    1. All dependencies of a resource, both implicit and explicit, are
    communicated directly to the engine when registering a resource. The
    engine keeps track of these dependencies and ultimately serializes
    them out to the checkpoint file upon successful deployment.
    2. Once a successful deployment is done, the new `pulumi stack
    graph` command reads the checkpoint file and outputs the dependency
    information within in the DOT format.

Keeping track of dependency information within the checkpoint file is
desirable for a number of reasons, most notably delete-before-create,
where we want to delete resources before we have created their
replacement when performing an update.
2018-02-21 17:49:09 -08:00
Joe Duffy e7af13e144
Capture plugin names in the manifest (#967)
Previously, the checkpoint manifest contained the full path to a plugin
binary, in place of its friendly name.  Now that we must move to a model
where we install plugins in the PPC based on the manifest contents, we
actually need to store the name, in addition to the version (which is
already there).  We still also capture the path for debugging purposes.
2018-02-21 10:32:31 -08:00
joeduffy 225bfd46b3 Don't block on nil channels
We have had a long-standing bug in here where we waited on a
stdout channel that never got populated, when the language plugin
fails to load entirely.  This would lead to hung processes.  The
fix is simple: only wait for stdout/stderr channels to drain that
have actually been wired up to enjoy the requisite signaling.
2018-02-19 14:06:15 -08:00
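
The fix pattern in miniature: a receive from a nil channel blocks forever, so only wait on the channels that were actually wired up.

    package main

    import "fmt"

    func drain(stdoutDone, stderrDone <-chan struct{}) {
        // Receiving from a nil channel blocks forever; guard each wait so a
        // plugin that failed to start does not hang the caller.
        if stdoutDone != nil {
            <-stdoutDone
        }
        if stderrDone != nil {
            <-stderrDone
        }
    }

    func main() {
        done := make(chan struct{})
        close(done)
        drain(done, nil) // returns immediately instead of hanging
        fmt.Println("drained")
    }
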
joeduffy 25f5a71568 Add support for project plugins
This adds support for two things:

* Installing all plugins that a project requires with a single command:

    $ pulumi plugin install

* Listing the plugins that this project requires:

    $ pulumi plugin ls --project
    $ pulumi plugin ls -p
2018-02-19 11:24:19 -08:00
joeduffy 548c22d014 Reimplement GetRequiredPlugins in Go
This brings back the Node.js language plugin's GetRequiredPlugins
function, reimplemented in Go now that the language host has been
rewritten from JavaScript.  Fairly rote translation, along with
some random fixes required to get tests passing again.
2018-02-18 08:08:15 -08:00
joeduffy c04341edb2 Consult the program for its list of plugins
This change adds a GetRequiredPlugins RPC method to the language
host, enabling us to query it for its list of plugin requirements.
This is language-specific because it requires looking at the set
of dependencies (e.g., package.json files).

It also adds a call up front during any update/preview operation
to compute the set of plugins and require that they are present.
These plugins are populated in the cache and will be used for all
subsequent plugin-related operations during the engine's activity.

We now cache the language plugins, so that we may load them
eagerly too, which we never did previously due to the fact that
we needed to pass the monitor address at load time.  This was a
bit bizarre anyhow, since it's really the Run RPC function that
needs this information.  So, to enable caching and eager loading
-- which we need in order to invoke GetRequiredPlugins -- the
"phone home" monitor RPC address is passed at Run time.

In a subsequent change, we will switch to faulting in the plugins
that are missing -- rather than erroring -- in addition to
supporting the `pulumi plugin install` CLI command.
2018-02-18 08:08:15 -08:00
joeduffy 5d16fc936a Add workspace.GetPluginPath, and use it
This change introduces a workspace.GetPluginPath function that probes
the central workspace cache of plugins for a matching plugin binary that
matches the desired kind, name, and, optionally, version.  It also permits
overriding this with $PATH for developer scenarios.

The analyzer, language, and resource plugin logic now uses this function
for deciding which binary path to load at runtime.
2018-02-18 08:08:15 -08:00
Joe Duffy 902d646215
Rename package to project (#935)
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project".  This has gotten more confusing over time, now
that we're doing real package management.

Also fixes pulumi/pulumi#426, while in here.
2018-02-14 13:56:16 -08:00
Sean Gillespie 402a599fc7
Don't use shebangs to launch providers and correctly kill child process trees on Unix (#934)
* Don't use shebangs to launch providers and correctly kill child process trees on Unix

* Link to relevant documentation
2018-02-14 13:56:07 -08:00
CyrusNajmabadi 275670692b
Introduce Output<T> and update Resource construction code to properly handle it. (#834)
This PR adds new formalisms at the Resource layer.  First, all inputs to a Resource are typed as `Input<T>`.  This is either a T, a `Promise<T>`, or an `Output<T>`.
2018-02-05 14:44:23 -08:00
Pat Gavlin 88a22515df Supply unknown properties to providers during preview.
My previous change to stop supplying unknown properties to providers
broke `pulumi preview` in the case of unknown inputs. This change
restores the previous behavior for previews only; the new unknown-free
behavior remains for applies.

Fixes #790.
2018-01-09 18:41:47 -08:00
pat@pulumi.com f3cb37ef95 Do not expose unknowns to resource providers.
Before these changes, we were inconsistent in our treatment of unknown
property values across the resource provider RPC interface. `Check` and
`Diff` were retaining unknown properties in inputs and outputs;
`Create`, `Update`, and `Delete` were not. This interacted badly with
recent changes to `Check` to return all provider inputs--i.e. not just
defaults--from that method: if an unknown input was provided, it would
be present in the returned inputs, which would eventually confuse the
differ by giving the appearance of changes where none were present.

These changes remove unknowns from the provider interface entirely:
unknown property values are never passed to a provider, and a provider
must never return an unknown property value.

This is the primary piece of the fix for pulumi/pulumi-terraform#93.
2018-01-09 12:21:47 -08:00
pat@pulumi.com c56e716c31 Refactor the engine's entrypoints.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.

These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
2018-01-08 14:15:16 -08:00
Matthew Riley 9e3976513c AssertNoError instead of Assert(err == nil)
This may not be exhaustive, but I replaced all instances I could find.
2018-01-08 13:46:21 -08:00
Joe Duffy d419229301
Add additional linting (#768)
This adds additional linting checks.  Most importantly, it will
check calls to our custom format routines for missing arguments.
2017-12-27 17:10:12 -08:00
Pat Gavlin a7fbdff7ea
Merge pull request #761 from pulumi/UpdateNoMerge
Do not merge old state and inputs for Update.
2017-12-24 14:47:15 -08:00
Pat Gavlin 1bd1eaff50 Do not merge old state and inputs for Update.
This merging causes similar issues to those it did in `Check`, and
differs from the approach we take to `Diff`. This can causes problems
such as an inability to remove properties.
2017-12-22 18:18:14 -08:00
Pat Gavlin e4d9eb6fd3 Support secrets for cloud stacks.
Use the new {en,de}crypt endpoints in the Pulumi.com API to secure
secret config values. The ciphertext for a secret config value is bound
to the stack to which it applies and cannot be shared with other stacks
(e.g. by copy/pasting it around in Pulumi.yaml). All secrets will need
to be encrypted once per target stack.
2017-12-22 07:59:27 -08:00
Joe Duffy bc2cf55463
Implement resource protection (#751)
This change implements resource protection, as per pulumi/pulumi#689.
The overall idea is that a resource can be marked as "protect: true",
which will prevent deletion of that resource for any reason whatsoever
(straight deletion, replacement, etc).  This is expressed in the
program.  To "unprotect" a resource, one must perform an update setting
"protect: false", and then afterwards, they can delete the resource.

For example:

    let res = new MyResource("precious", { .. }, { protect: true });

Afterwards, the resource will display in the CLI with a lock icon, and
any attempts to remove it will fail in the usual ways (in planning or,
worst case, during an actual update).

This was done by adding a new ResourceOptions bag parameter to the
base Resource types.  This is unfortunately a breaking change, but now
is the right time to take this one.  We had been adding new settings
one by one -- like parent and dependsOn -- and this new approach will
set us up to add any number of additional settings down the road,
without needing to worry about breaking anything ever again.

This is related to protected stacks, as described in
pulumi/pulumi-service#399.  Most likely this will serve as a foundational
building block that enables the coarser grained policy management.
2017-12-20 14:31:07 -08:00
Pat Gavlin 9d038c1d88 Only pass old inputs to Check.
We do not need all of the information in the old state for this call, as
outputs will not be read by the provider during validation or defaults
computation.
2017-12-15 11:04:43 -08:00
joeduffy 668e56f2c7 Use old.All() to be consistent with Update
This changes the inputs to Check/Diff to match what we also
pass to the Update function later on (old.All() vs. old.Outputs).
2017-12-15 07:37:15 -08:00
joeduffy 675fe38269 Add more tracing context to RPC marshaling
This change adds a bit more tracing context to RPC marshaling
logging so that it's easier to attribute certain marshaling calls.
Prior to this, we'd just have a flat list of "marshaled property X"
without any information about what the marshaling pertained to.
2017-12-15 07:22:49 -08:00
joeduffy 8492e6100f Pass old *outputs*, not *inputs*, to Check/Diff
This change passes a resource's old output state, so that it contains
everything -- defaults included -- for purposes of the provider's diffing.
Not doing so can lead the provider into thinking some of the requisite
state is missing.
2017-12-15 07:01:11 -08:00
Luke Hoban 6f15fa8ed8
Pass more stack info to ExtraRuntimeValidation (#717)
This will allow us to remove a lot of current boilerplate in individual tests, and move it into the test harness.

Note that this will require updating users of the integration test framework.  By moving to a property bag of inputs, we should avoid needing future breaking changes to this API though.
2017-12-13 16:09:14 -08:00
joeduffy 92ea5b5bdd Add a test case for delete-before-recreate 2017-12-13 10:47:18 -08:00
joeduffy 3a13621c32 Add rudimentary delete-before-create support
This change adds rudimentary delete-before-create support (see
pulumi/pulumi#450).  This cannot possibly be complete until we also
implement pulumi/pulumi#624, because we may try to delete a resource
while it still has dependent resources (which almost certainly will
fail).  But until then, we can use this to manually unwedge ourselves
for leaf-node resources that do not support old and new resources
living side-by-side.
2017-12-13 10:47:18 -08:00
Joe Duffy b6a386995a
Set pwd for plugins (#706)
This change just flows the project's "main" directory all the way
through to the plugins, fixing #667.  In that work item, we discussed
alternative approaches, such as rewriting the asset paths, but this
is tricky because it's very tough to do without those absolute paths
somehow ending up in the checkpoint files.  Just launching the
processes with the right pwd is far easier and safer, and it turns
out that, conveniently, we set up the plugin context in exactly the
same place that we read the project information.
2017-12-12 12:31:09 -08:00
Pat Gavlin d0df98e03b Close assets while creating ZIP archives.
This was an unintentional omission from my earlier change that
refactored the archive code.

Fixes pulumi/pulumi-ppc#133.
2017-12-12 10:38:27 -08:00
Joe Duffy e93fe05efe
Reparameterize NewUniqueHex/ID
This change adds a randlen parameter to NewUniqueHex/ID so that the
caller can decide how long of a random string to generate.
2017-12-10 07:44:11 -08:00
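
A sketch of the parameterized helper with an illustrative signature: append randlen random hex characters to the prefix, failing if the result would exceed an optional maximum length.

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // newUniqueHex returns prefix plus randlen hex characters of randomness;
    // maxlen <= 0 means "no limit".
    func newUniqueHex(prefix string, randlen, maxlen int) (string, error) {
        if maxlen > 0 && len(prefix)+randlen > maxlen {
            return "", fmt.Errorf("name %q is too long to add %d random characters (max %d)",
                prefix, randlen, maxlen)
        }
        random := make([]byte, (randlen+1)/2)
        if _, err := rand.Read(random); err != nil {
            return "", err
        }
        return prefix + hex.EncodeToString(random)[:randlen], nil
    }

    func main() {
        name, err := newUniqueHex("ip2-", 8, 0)
        fmt.Println(name, err) // e.g. ip2-079a29f4 <nil>
    }
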
Joe Duffy 1681119339
Add a stack output command (#675)
This change adds a `pulumi stack output` command.  When passed no
arguments, it prints all stack output properties, in exactly the
same format as `pulumi stack` does (just without all the other stuff).
More importantly, if you pass a specific output property, a la
`pulumi stack output clusterARN`, just that property will be printed,
in a scriptable-friendly manner.  This will help us automate wiring
multiple layers of stacks together during deployments.

This fixes pulumi/pulumi#659.
2017-12-08 13:14:58 -08:00
Chris Smith 8c3c2c97f5 Add messages to asserts 2017-12-08 10:21:25 -08:00
Joe Duffy 971f6189f2
Fix pending delete replacement failure (#658)
The two-phase output properties change broke the ability to recover
from a failed replacement that yields pending deletes in the checkpoint.
The issue here is simply that we should remember pending registrations
only for logical operations that *also* have a "new" state (create or
update).  This commit fixes this, and also adds a new step test with
fault injection to probe many interesting combinations of steps.
2017-12-07 09:44:38 -08:00
Joe Duffy 29c2a2e08f
Elide the root stack in parent URNs
Every single resource has a type prefix of

    pulumi:pulumi:Stack$

which makes URNs quite lengthy without adding any value.  Since
they all have this prefix, adding it doesn't help to disambiguate.

This change skips adding the parent URN part when it is the built-in
automatic stack type name.
2017-12-05 13:41:26 -08:00
Joe Duffy 0e1ca4363a
Bring back stack outputs (#650)
At some point, we fixed a bug in the way state is managed for "same"
steps, which meant that we wouldn't see newly added output properties.
This had the effect that, if you had a stack already stood up, and
updated it to have output properties, we would miss them.  (Stacks
stood up from scratch would still have them.)  This fixes that problem,
in addition to two other things: 1) we need to sort output property
names to ensure a deterministic ordering, and 2) we need to also
unconditionally apply the outputs RPC coming in, to ensure that the
resulting resource always has the correct outputs (so that for example
deleting prior output properties actually deletes them).

Also add some testing for this area to make sure we don't break again.

Fixes pulumi/pulumi#631.
2017-12-05 13:01:54 -08:00
Pat Gavlin 94645c313a
Merge pull request #643 from pulumi/LateDecrypt
Decrypt configuration nearer to its use.
2017-12-04 17:27:59 -08:00
pat@pulumi.com 7810c824d6 Decrypt configuration nearer to its use.
These changes push the `config.{Map,Value}` interfaces further down into
the deployment engine so that configuration can be decrypted nearer to
its use.

This is the first part of the fix for pulumi/pulumi-ppc#112.
2017-12-04 17:10:40 -08:00
CyrusNajmabadi 75ee89f2b1
Always add 8 chars of randomness to URN names we create. Error if that exceeds the max length allowed for that resource. (#500)
* Include parent type in urn to better ensure component URN uniqueness.
2017-12-04 14:50:55 -08:00
Pat Gavlin f848090479 Return all computed inputs from Provider.Check.
As documented in issue #616, the inputs/defaults/outputs model we have
today has fundamental problems. The crux of the issue is that our
current design requires that defaults present in the old state of a
resource are applied to the new inputs for that resource.
Unfortunately, it is not possible for the engine to decide which
defaults remain applicable and which do not; only the provider has that
knowledge.

These changes take a more tactical approach to resolving this issue than
that originally proposed in #616 that avoids breaking compatibility with
existing checkpoints. Rather than treating the Pulumi inputs as the
provider input properties for a resource, these inputs are first
translated by `Check`. In order to accommodate provider defaults that
were chosen for the old resource but should not change for the new,
`Check` now takes the old provider inputs as well as the new Pulumi
inputs. Rather than the Pulumi inputs and provider defaults, the
provider inputs returned by `Check` are recorded in the checkpoint file.

Put simply, these changes remove defaults as a first-class concept
(except inasmuch as is required to retain the ability to read old
checkpoint files) and move the responsibility for managing and
merging defaults into the provider that supplies them.

Fixes #616.
2017-12-03 09:33:16 -08:00
joeduffy b59b8f2e6e Fix cloud tests 2017-12-03 06:34:06 -08:00
joeduffy 1c4e41b916 Improve the overall cloud CLI experience
This improves the overall cloud CLI experience workflow.

Now whether a stack is local or cloud is inherent to the stack
itself.  If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally.  Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.

For example, to initialize a new cloud stack, simply:

    $ pulumi login
    Logging into Pulumi Cloud: https://pulumi.com/
    Enter Pulumi access token: <enter your token>
    $ pulumi stack init my-cloud-stack

Note that you may log into a specific cloud if you'd like.  For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:

    $ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873

The cloud is now the default.  If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:

    $ pulumi stack init my-faf-stack --local

If you are logged in and run `pulumi`, we tell you as much:

    $ pulumi
    Usage:
      pulumi [command]

    // as before...

    Currently logged into the Pulumi Cloud ☁️
        https://pulumi.com/

And if you list your stacks, we tell you which one is local or not:

    $ pulumi stack ls
    NAME            LAST UPDATE       RESOURCE COUNT   CLOUD URL
    my-cloud-stack  2017-12-01 ...    3                https://pulumi.com/
    my-faf-stack    n/a               0                n/a

And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.

I shall write up more details and make sure to document these changes.

This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends.  The following is the overall resulting package architecture:

* The backend.Backend interface can be implemented to substitute
  a new backend.  This has operations to get and list stacks,
  perform updates, and so on.

* The backend.Stack struct is a wrapper around a stack that has
  or is being manipulated by a Backend.  It resembles our existing
  Stack notions in the engine, but carries additional metadata
  about its source.  Notably, it offers functions that allow
  operations like updating and deleting on the Backend from which
  it came.

* There is very little else in the pkg/backend/ package.

* A new package, pkg/backend/local/, encapsulates all local state
  management for "fire and forget" scenarios.  It simply implements
  the above logic and contains anything specific to the local
  experience.

* A peer package, pkg/backend/cloud/, encapsulates all logic
  required for the cloud experience.  This includes its subpackage
  apitype/ which contains JSON schema descriptions required for
  REST calls against the cloud backend.  It also contains handy
  functions to list which clouds we have authenticated with.

* A subpackage here, pkg/backend/state/, is not a provider at all.
  Instead, it contains all of the state management functions that
  are currently shared between local and cloud backends.  This
  includes configuration logic -- including encryption -- as well
  as logic pertaining to which stacks are known to the workspace.

This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
2017-12-02 14:34:42 -08:00
Joe Duffy 16ade183d8
Add a manifest to checkpoint files (#630)
This change adds a new manifest section to the checkpoint files.
The existing time moves into it, and we add to it the version of
the Pulumi CLI that created it, along with the names, types, and
versions of all plugins used to generate the file.  There is a
magic cookie that we also use during verification.

This is to help keep us sane when debugging problems "in the wild,"
and I'm sure we will add more to it over time (checksum, etc).

For example, after an up, you can now see this in `pulumi stack`:

```
Current stack is demo:
    Last updated at 2017-12-01 13:48:49.815740523 -0800 PST
    Pulumi version v0.8.3-79-g1ab99ad
    Plugin pulumi-provider-aws [resource] version v0.8.3-22-g4363e77
    Plugin pulumi-langhost-nodejs [language] version v0.8.3-79-g77bb6b6
    Checkpoint file is /Users/joeduffy/dev/code/src/github.com/pulumi/pulumi-aws/.pulumi/stacks/webserver/demo.json
```

This addresses pulumi/pulumi#628.
2017-12-01 13:50:32 -08:00
Joe Duffy 70c1cdadaf
Mark state snapshots for components (#627)
We need to mark the state snapshots for components, but were skipping this.
I believe this is the root cause for all occurrences of pulumi/pulumi#613.
2017-11-30 16:37:44 -08:00
Joe Duffy 5b57950da6
Add automatic integrity checking (#625)
This change introduces automatic integrity checking for snapshots.
Hopefully this will help us track down what's going on in
pulumi/pulumi#613.  Eventually we probably want to make this opt-in,
or disable it entirely other than for internal Pulumi debugging, but
until we add more complete DAG verification, it's relatively cheap
and is worthwhile to leave on for now.
2017-11-30 11:13:18 -08:00
joeduffy dff4b7d2fb Fix an error variable mistake 2017-11-30 10:45:49 -08:00
Joe Duffy dc8c302d33
Fix replacement ops regression (#620)
The prior change was incorrectly handling snapshotting of replacement
operations.  Further, in hindsight, the older model of having steps
manage their interaction with the snapshot marking was clearer, so
I've essentially brought that back, merging it with the other changes.
2017-11-29 15:05:58 -08:00
joeduffy a4c7c05e27 Simplify RPC changes
This change simplifies the necessary RPC changes for components.
Instead of a Begin/End pair, which complicates the whole system
because now we have the possibility of a missing End call, we will
simply let RPCs come in that append outputs to existing states.
2017-11-29 12:08:01 -08:00
joeduffy f883d5ff9d Improve some formatting 2017-11-29 10:06:51 -08:00
joeduffy 9174c7ffd3 Fix state snapshotting
We need to invoke the post-step event hook *after* updating the
state snapshots, so that it will write out the updated state.
We also need to re-serialize the snapshot again after we receive
updated output properties, otherwise they could be missing if this
happens to be the last resource (e.g., as in Stacks).
2017-11-29 08:36:04 -08:00
joeduffy 88086816f2 Merge branch 'master' of github.com:pulumi/pulumi into resource_parenting_lite 2017-11-29 08:16:38 -08:00
joeduffy c5b7b6ef11 Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:

* Instead of RegisterResource and CompleteResource, we call these
  BeginRegisterResource and EndRegisterResource, which begins to model
  these as effectively "asynchronous" resource requests.  This should also
  help with parallelism (https://github.com/pulumi/pulumi/issues/106).

* Flip the CLI/engine a little on its head.  Rather than it driving the
  planning and deployment process, we move more to a model where it
  simply observes it.  This is done by implementing an event handler
  interface with three events: OnResourceStepPre, OnResourceStepPost,
  and OnResourceComplete.  The first two are invoked immediately before
  and after any step operation, and the latter is invoked whenever a
  EndRegisterResource comes in.  The reason for the asymmetry here is
  that the checkpointing logic in the deployment engine is largely
  untouched (intentionally, as this is a sensitive part of the system),
  and so the "begin"/"end" nature doesn't flow through faithfully.

* Also make the engine more event-oriented in its terminology and the
  way it handles the incoming BeginRegisterResource and
  EndRegisterResource events from the language host.  This is the first
  step down a long road of incrementally refactoring the engine to work
  this way, a necessary prerequisite for parallelism.
2017-11-29 07:42:14 -08:00
Pat Gavlin 556e51f044 Un-export PropertyValue.Merge. 2017-11-28 13:21:06 -08:00
Pat Gavlin 84a7d4f3e0 PR feedback. 2017-11-28 12:44:49 -08:00
Pat Gavlin f5b35561c6 Recursively merge properties.
When merging inputs and defaults in order to construct the set of inputs
for a call to `Create`, we must recursively merge each property value:
the provided defaults may contain nested values that must be present in
the merged result.
2017-11-28 12:32:37 -08:00
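
The recursive merge in miniature, using plain maps in place of property values: nested maps supplied by the defaults must survive into the merged result, with the inputs winning at every level.

    package main

    import "fmt"

    // merge combines provider defaults with user inputs, recursing into
    // nested maps so nested default values are not lost.
    func merge(defaults, inputs map[string]interface{}) map[string]interface{} {
        out := make(map[string]interface{}, len(defaults)+len(inputs))
        for k, v := range defaults {
            out[k] = v
        }
        for k, v := range inputs {
            dm, dok := out[k].(map[string]interface{})
            im, iok := v.(map[string]interface{})
            if dok && iok {
                out[k] = merge(dm, im) // both sides are maps: merge recursively
            } else {
                out[k] = v // otherwise the input wins
            }
        }
        return out
    }

    func main() {
        defaults := map[string]interface{}{"tags": map[string]interface{}{"managedBy": "pulumi"}}
        inputs := map[string]interface{}{"tags": map[string]interface{}{"env": "dev"}}
        fmt.Println(merge(defaults, inputs)) // map[tags:map[env:dev managedBy:pulumi]]
    }
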
joeduffy 5762f2d0a6 Merge remote-tracking branch 'origin/resource_parenting' into resource_parenting_lite 2017-11-28 11:03:34 -08:00
joeduffy be201739b4 Make some diff formatting changes
* Don't show +s, -s, and ~s deeply.  The intended format here looks
  more like

      + aws:iam/instanceProfile:InstanceProfile (create)
          [urn=urn:pulumi:test::aws/minimal::aws/iam/instanceProfile:InstanceProfile::ip2]
          name: "ip2-079a29f428dc9987"
          path: "/"
          role: "ir-d0a632e3084a0252"

  versus

      + aws:iam/instanceProfile:InstanceProfile (create)
        + [urn=urn:pulumi:test::aws/minimal::aws/iam/instanceProfile:InstanceProfile::ip2]
        + name: "ip2-079a29f428dc9987"
        + path: "/"
        + role: "ir-d0a632e3084a0252"

  This makes it easier to see the resources modified in the output.

* Print adds/deletes during updates as

      - property: "x"
      + property: "y"

  rather than

      ~ property: "x"
      ~ property: "y"

  the latter of which doesn't really tell you what's new/old.

* Show parent indentation on output properties, so they line up correctly.

* Only print stack outputs if not undefined.
2017-11-26 09:39:29 -08:00
joeduffy 86f97de7eb Merge root stack changes with parenting 2017-11-26 08:14:01 -08:00
joeduffy a2ae4accf4 Switch to parent pointers; display components nicely
This change switches from child lists to parent pointers, in the
way resource ancestries are represented.  This cleans up a fair bit
of the old parenting logic, including all notion of ambient parent
scopes (and will notably address pulumi/pulumi#435).

This lets us show a more parent/child display in the output when
doing planning and updating.  For instance, here is an update of
a lambda's text, which is logically part of a cloud timer:

    * cloud:timer:Timer: (same)
          [urn=urn:pulumi:malta::lm-cloud:☁️timer:Timer::lm-cts-malta-job-CleanSnapshots]
        * cloud:function:Function: (same)
              [urn=urn:pulumi:malta::lm-cloud:☁️function:Function::lm-cts-malta-job-CleanSnapshots]
            * aws:serverless:Function: (same)
                  [urn=urn:pulumi:malta::lm-cloud::aws:serverless:Function::lm-cts-malta-job-CleanSnapshots]
                ~ aws:lambda/function:Function: (modify)
                      [id=lm-cts-malta-job-CleanSnapshots-fee4f3bf41280741]
                      [urn=urn:pulumi:malta::lm-cloud::aws:lambda/function:Function::lm-cts-malta-job-CleanSnapshots]
                    - code            : archive(assets:2092f44) {
                        // etc etc etc

Note that we still get walls of text, but this will be actually
quite nice when combined with pulumi/pulumi#454.

I've also suppressed printing properties that didn't change during
updates when --detailed was not passed, and also suppressed empty
strings and zero-length arrays (since TF uses these as defaults in
many places and it just makes creation and deletion quite verbose).

Note that this is a far cry from everything we can possibly do
here as part of pulumi/pulumi#340 (and even pulumi/pulumi#417).
But it's a good start towards taming some of our output spew.
2017-11-26 08:14:01 -08:00
Pat Gavlin d72b85c90b Add a few gas exceptions.
The first exception relates to how we launch plugins. Plugin paths are
calculated using a well-known set of rules; this makes `gas` suspicious
due to the need to use a variable to store the path of the plugin.

The second and third are in test code and aren't terribly concerning.
The latter exception asks `gas` to ignore the access key we hard-code
into the integration tests for our Pulumi test account.

The fourth exception allows us to use more permissive permissions for
the `.pulumi` directory than `gas` would prefer. We use `755`; `gas`
wants `700` or stricter. `755` is the default for `mkdir` and `.git` and
so seems like a reasonable choice for us.
2017-11-24 16:14:43 -08:00
joeduffy 7e48e8726b Add (back) component outputs
This change adds back component output properties.  Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
2017-11-20 17:38:09 -08:00
joeduffy 86267b86b9 Merge root stack changes with parenting 2017-11-20 10:08:59 -08:00
joeduffy 5dc4b0b75c Switch to parent pointers; display components nicely
This change switches from child lists to parent pointers, in the
way resource ancestries are represented.  This cleans up a fair bit
of the old parenting logic, including all notion of ambient parent
scopes (and will notably address pulumi/pulumi#435).

This lets us show a more parent/child display in the output when
doing planning and updating.  For instance, here is an update of
a lambda's text, which is logically part of a cloud timer:

    * cloud:timer:Timer: (same)
          [urn=urn:pulumi:malta::lm-cloud:☁️timer:Timer::lm-cts-malta-job-CleanSnapshots]
        * cloud:function:Function: (same)
              [urn=urn:pulumi:malta::lm-cloud:☁️function:Function::lm-cts-malta-job-CleanSnapshots]
            * aws:serverless:Function: (same)
                  [urn=urn:pulumi:malta::lm-cloud::aws:serverless:Function::lm-cts-malta-job-CleanSnapshots]
                ~ aws:lambda/function:Function: (modify)
                      [id=lm-cts-malta-job-CleanSnapshots-fee4f3bf41280741]
                      [urn=urn:pulumi:malta::lm-cloud::aws:lambda/function:Function::lm-cts-malta-job-CleanSnapshots]
                    - code            : archive(assets:2092f44) {
                        // etc etc etc

Note that we still get walls of text, but this will be actually
quite nice when combined with pulumi/pulumi#454.

I've also suppressed printing properties that didn't change during
updates when --detailed was not passed, and also suppressed empty
strings and zero-length arrays (since TF uses these as defaults in
many places and it just makes creation and deletion quite verbose).

Note that this is a far cry from everything we can possibly do
here as part of pulumi/pulumi#340 (and even pulumi/pulumi#417).
But it's a good start towards taming some of our output spew.
2017-11-20 09:07:53 -08:00
joeduffy 8e01135572 Log the project and stack names 2017-11-19 10:16:47 -08:00
Luke Hoban 96e4b74b15
Support for stack outputs (#581)
Adds support for top-level exports in the main script of a Pulumi Program to be captured as stack-level output properties.

This creates a new `pulumi:pulumi:Stack` component as the root of the resource tree in all Pulumi programs.  That resource has properties for each top-level export in the Node.js script.

Running `pulumi stack` will display the current value of these outputs.
2017-11-17 15:22:41 -08:00
Joe Duffy df7114aca2
Merge pull request #578 from pulumi/FixSnapshot
Fix plan snapshotting.
2017-11-16 13:11:58 -08:00
Joe Duffy 77460a7dc0
Plumb the project name correctly (#583)
This change fixes getProject to return the project name, as
originally intended.  (One line was missing.)

It also adds an integration test for this.

Fixes pulumi/pulumi#580.
2017-11-16 08:15:56 -08:00
Joe Duffy 98ef0c4bb5
Allow overriding a Pulumi.yaml's entrypoint (#582)
Because the Pulumi.yaml file currently demarcates the boundary used
when uploading a program to the Pulumi.com service, we have trouble
when a Pulumi program uses "up and over" references.
For instance, our customer wants to build a Dockerfile located
in some relative path, such as `../../elsewhere/`.

To support this, we will allow the Pulumi.yaml file to live
somewhere other than alongside the main Pulumi entrypoint.  For example,
it can live at the root of the repo, while the Pulumi program
lives in, say, `infra/`:

    Pulumi.yaml:
    name: as-before
    main: infra/

This fixes pulumi/pulumi#575.  Further work can be done here to
provide even more flexibility; see pulumi/pulumi#574.
2017-11-16 07:49:07 -08:00
pat@pulumi.com 1d9fa045cb Fix plan snapshotting.
When producing a snapshot for a plan, we have two resource DAGs. One of
these is the base DAG for the plan; the other is the current DAG for the
plan. Any resource r may be present in both DAGs. In order to produce a
snapshot, we need to merge these DAGs such that all resource
dependencies are correctly preserved. Conceptually, the merge proceeds
as follows:

- Begin with an empty merged DAG.
- For each resource r in the current DAG, insert r and its outgoing
  edges into the merged DAG.
- For each resource r in the base DAG:
    - If r is in the merged DAG, we are done: if the resource is in the
      merged DAG, it must have been in the current DAG, which accurately
      captures its current dependencies.
    - If r is not in the merged DAG, insert it and its outgoing edges
      into the merged DAG.

Physically, however, each DAG is represented as a list of resources
without explicit dependency edges. In place of edges, it is assumed that
the list represents a valid topological sort of its source DAG. Thus,
any resource r at index i in a list L must be assumed to be dependent on
all resources in L with index j s.t. j < i. Due to this representation,
we implement the algorithm above as follows to produce a merged list
that represents a valid topological sort of the merged DAG:

- Begin with an empty merged list.
- For each resource r in the current list, append r to the merged list.
  r must be in a correct location in the merged list, as its position
  relative to its assumed dependencies has not changed.
- For each resource r in the base list:
    - If r is in the merged list, we are done by the logic given in the
      original algorithm.
    - If r is not in the merged list, append r to the merged list. r
      must be in a correct location in the merged list:
        - If any of r's dependencies were in the current list, they must
          already be in the merged list and their relative order w.r.t.
          r has not changed.
        - If any of r's dependencies were not in the current list, they
          must already be in the merged list, as they would have been
          appended to the list before r.

Prior to these changes, we had been performing these operations in
reverse order: we would start by appending any resources in the old list
that were not in the new list, then append the whole of the new list.
This caused out-of-order resources when a program that produced pending
deletions failed to run to completion.

Fixes #572.
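
As a rough illustration, the merged list can be produced with one pass
over each list.  The Go sketch below is illustrative only; the
`Resource` type and `mergeSnapshots` name are stand-ins rather than the
engine's actual types:

    // Resource stands in for a checkpointed resource; for this sketch,
    // identity is simple pointer identity.
    type Resource struct {
        URN string
    }

    // mergeSnapshots appends the current list first (its order already
    // respects its dependencies), then appends any base resources that
    // did not appear in the current list.
    func mergeSnapshots(base, current []*Resource) []*Resource {
        merged := make([]*Resource, 0, len(base)+len(current))
        seen := make(map[*Resource]bool)
        for _, r := range current {
            merged = append(merged, r)
            seen[r] = true
        }
        for _, r := range base {
            if !seen[r] {
                merged = append(merged, r)
            }
        }
        return merged
    }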
2017-11-15 16:21:42 -08:00
Pat Gavlin 234f0816e5 Stop formatting output that should be raw.
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.

The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.

Fixes #551.
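
To make the behavior concrete, stringification now roughly follows this
pattern (a hedged Go sketch; the names here are assumptions, not the
actual `diag` API):

    // render returns the display text for a message: raw messages pass
    // through untouched, while format-string messages go through Sprintf.
    func render(text string, raw bool, args ...interface{}) string {
        if raw {
            return text
        }
        return fmt.Sprintf(text, args...)
    }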
2017-11-14 11:26:41 -08:00
Pat Gavlin 28579eba94
Rework asset identity and exposure of old assets. (#548)
Note: for the purposes of this discussion, archives will be treated as
assets, as their differences are not particularly meaningful.

Currently, the identity of an asset is derived from the hash and the
location of its contents (i.e. two assets are equal iff their contents
have the same hash and the same path/URI/inline value). This means that
changing the source of an asset will cause the engine to detect a
difference in the asset even if the source's contents are identical. At
best, this leads to inefficiencies such as unnecessary updates. This
commit changes asset identity so that it is derived solely from an
asset's hash. The source of an asset's contents is no longer part of
the asset's identity, and need only be provided if the contents
themselves may need to be available (e.g. if a hash does not yet exist
for the asset or if the asset's contents might be needed for an update).

This commit also changes the way old assets are exposed to providers.
Currently, an old asset is exposed as both its hash and its contents.
This allows providers to take a dependency on the contents of an old
asset being available, even though this is not an invariant of the
system. These changes remove the contents of old assets from their
serialized form when they are passed to providers, eliminating the
ability of a provider to take such a dependency. In combination with the
changes to asset identity, this allows a provider to detect changes to
an asset simply by comparing its old and new hashes.

This is half of the fix for [pulumi/pulumi-cloud#158]. The other half
involves changes in [pulumi/pulumi-terraform].
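
Put differently, change detection for assets reduces to comparing
hashes.  A minimal sketch, assuming the hashes are available as plain
strings (illustrative only):

    // assetChanged reports whether an asset's contents changed. The source
    // path/URI plays no part: only the content hashes matter.
    func assetChanged(oldHash, newHash string) bool {
        return oldHash == "" || oldHash != newHash
    }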
2017-11-12 11:45:13 -08:00
pat@pulumi.com 8c7932c1b5 Fix an archive-related bug.
Properly skip the .pulumi when dealing with directory-backed archives.
2017-11-10 19:56:25 -08:00
Pat Gavlin db2f802d34 Log actual and expected sizes on ErrWriteTooLong.
If a blob's reported size is incorrect, `archiveTar` may attempt to
write more bytes to an entry than it reported in that entry's header.
These changes provide a bit more context with the resulting error as
well as removing an unnecessary `LimitReader`.
2017-11-10 11:46:49 -08:00
Luke Hoban af5298f4aa
Initial work on tracing support (#521)
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.

We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.

The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.

We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.

In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.

We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.

We currently support sending tracing data to a Zipkin-compatible endpoint.  This has been validated with both Zipkin and Jaeger UIs.

We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface.  There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
2017-11-08 17:08:51 -08:00
Pat Gavlin d01465cf6d
Make archive assets stream their contents. (#542)
We currently have a nasty issue with archive assets wherein they read
their entire contents into memory each time they are accessed (e.g. for
hashing or translation). This interacts badly with scenarios that
place large amounts of data in an archive: aside from limiting the size
of an archive the engine can handle, it also bloats the engine's memory
requirements. This appears to have caused issues when running the PPC in
AWS: evidence suggests that the very high peak memory requirements this
approach implies caused high swap traffic that impacted the service's
availability.

In order to fix this issue, these changes move archives onto a
streaming read model. In order to read an archive, a user:
- Opens the archive with `Archive.Open`. This returns an ArchiveReader.
- Iterates over its contents using `ArchiveReader.Next`. Each returned
  blob must be read in full between successive calls to
  `ArchiveReader.Next`. This requirement is essentially forced upon us
  by the streaming nature of TAR archives.
- Closes the ArchiveReader with `ArchiveReader.Close`.

This model does not require that the complete contents of the archive or
any of its constituent files are in memory at any given time.

Fixes #325.
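
The consumption pattern looks roughly like the Go sketch below.  The
`ArchiveReader` shape shown here is an assumption based on the
description above (with `Next` returning the member name, a reader for
its contents, and `io.EOF` at the end), not the precise engine API:

    // ArchiveReader is the assumed shape of the streaming interface.
    type ArchiveReader interface {
        Next() (name string, contents io.Reader, err error) // io.EOF at end
        Close() error
    }

    // drainArchive walks an archive one member at a time, keeping only a
    // single member's stream in flight at any moment.
    func drainArchive(r ArchiveReader) error {
        defer r.Close()
        for {
            name, blob, err := r.Next()
            if err == io.EOF {
                return nil // no more members
            } else if err != nil {
                return err
            }
            // Each member must be read in full before calling Next again;
            // the sequential nature of TAR archives forces this.
            if _, err := io.Copy(io.Discard, blob); err != nil {
                return fmt.Errorf("reading %v: %v", name, err)
            }
        }
    }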
2017-11-08 15:28:41 -08:00
Joe Duffy fbf13ec4d7
Use full state during updates (#526)
In our existing code, we only use the input state for old and new
properties.  This is incorrect and I'm astonished we've been flying
blind for so long here.  Some resources require the output properties
from the prior operation in order to perform updates.  Interestingly,
we did correctly use the full synthesized state during deletes.

I ran into this with the AWS Cloudfront Distribution resource,
which requires the etag from the prior operation in order to
successfully apply any subsequent operations.
2017-11-03 19:45:19 -07:00
Luke Hoban 13b10490c2
Only call Configure on a package once (#520)
We were previously calling configure on each package once for each time it was mentioned in the config.  We only need to call it once ever, as we pass the full bag of relevant config through on that one call.
2017-11-03 13:52:59 -07:00
Joe Duffy 0290283e6f
Skip unknown properties (#524)
It's legal and possible for undefined properties to show up in
objects, since that's an idiomatic JavaScript way of initializing
missing properties.  Instead of failing for these during deployment,
we should simply skip marshaling them to Terraform and let it do
its thing as usual.  This came up during our customer workload.
2017-11-03 13:40:15 -07:00
joeduffy 5bf8b5cd3b Fix an error message typo 2017-11-03 11:20:33 -07:00
Matt Ellis 67426833a4
Merge pull request #505 from pulumi/FixWindows
Get windows integration tests working again
2017-10-31 00:19:20 -07:00
Matt Ellis fd64125daf Aggregate process termination errors 2017-10-30 23:35:11 -07:00
Matt Ellis 95ee6d85f6 Kill plugin child processes as well on Windows
On Windows, we have to indirect through a batch file to launch plugins,
which means that when we go to close a plugin, we only kill the cmd.exe
process that is running the batch file and not the underlying node
process. This prevents `pulumi` from exiting cleanly. So on Windows, we
also kill any direct children of the plugin process.

Fixes #504
2017-10-30 23:22:14 -07:00
joeduffy 7835305b82 Fix where integration tests look for checkpoints 2017-10-27 19:42:17 -07:00
Matt Ellis 3f1197ef84 Move .pulumi to root of a repository
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.

When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.

We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track is "owner" and "name", which map to information
we use on pulumi.com.

When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
2017-10-27 11:46:21 -07:00
Matt Ellis ade366544e Encrypt secrets in Pulumi.yaml
We now encrypt secrets at rest based on a key derived from a
user-supplied passphrase.

The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).

Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).

In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted, and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration associated
with it.

Secure values show up as unencrypted config values inside the language
hosts and providers.
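
For context, a common way to implement this pattern is to derive the
key with a KDF and seal values with an AEAD cipher.  The Go sketch
below uses PBKDF2 and AES-GCM purely as an illustration; it is not a
statement of the exact algorithms or parameters used here:

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "crypto/sha256"
        "encoding/base64"

        "golang.org/x/crypto/pbkdf2"
    )

    // encryptValue derives a key from the passphrase and salt, then seals
    // the plaintext with AES-GCM, prepending the nonce to the ciphertext.
    func encryptValue(passphrase, plaintext string, salt []byte) (string, error) {
        key := pbkdf2.Key([]byte(passphrase), salt, 100000, 32, sha256.New)
        block, err := aes.NewCipher(key)
        if err != nil {
            return "", err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return "", err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            return "", err
        }
        sealed := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
        return base64.StdEncoding.EncodeToString(sealed), nil
    }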
2017-10-24 16:48:12 -07:00
joeduffy 3d3f778c3d Fix asset bugs; write more tests
This change fixes a couple bugs with assets:

* We weren't recursing into subdirectories in the new "path as
  archive" feature, which meant we missed most of the files.

* We need to make paths relative to the root of the archive
  directory itself, otherwise paths end up redundantly including
  the asset's root folder path.

* We need to clean the file paths before adding them to the
  archive asset map, otherwise they are inconsistent between the
  path, tar, tgz, and zip cases.

* Ignore directories when traversing zips, since they aren't
  included in the other formats.

* Tolerate io.EOF errors when reading the ZIP contents into blobs.

* Add test cases for the four different archive kinds.

This fixes pulumi/pulumi-aws#50.
2017-10-24 09:00:11 -07:00
Chris Smith ede1595a6a Add more context information to assert. (#449) 2017-10-24 08:25:39 -07:00
joeduffy c61bce3e41 Permit undefined in more places
The prior code was a little too aggressive in rejecting undefined
properties, because it assumed any occurrence indicated a resource
that was unavailable due to planning.  This is a by-product of our
relatively recent decision to flow undefineds freely during planning.

The problem is, it's entirely legitimate to have undefined values
deep down in JavaScript structures, entirely unrelated to resources
whose property values are unknown due to planning.

This change flows undefined more freely.  There really are no
negative consequences of doing so, and avoids hitting some overly
aggressive assertion failures in some important scenarios.  Ideally
we would have a way to know statically whether something is a resource
property, and tighten up the assertions just to catch possible bugs
in the system, but because this is JavaScript, and all the assertions
are happening at runtime, we simply lack the necessary metadata to do so.
2017-10-23 16:02:28 -07:00
joeduffy d20f043a3e Fix a few SHA1 comment typos (should be SHA256) 2017-10-22 18:30:42 -07:00
Joe Duffy 4a493292b1 Tolerate missing hashes 2017-10-22 15:54:44 -07:00
Joe Duffy 69f7f51375 Many asset improvements
This improves a few things about assets:

* Compute and store hashes as input properties, so that changes on
  disk are recognized and trigger updates (pulumi/pulumi#153).

* Issue explicit and prompt diagnostics when an asset is missing or
  of an unexpected kind, rather than failing late (pulumi/pulumi#156).

* Permit raw directories to be passed as archives, in addition to
  archive formats like tar, zip, etc. (pulumi/pulumi#240).

* Permit not only assets as elements of an archive's member list, but
  also other archives themselves (pulumi/pulumi#280).
2017-10-22 13:39:21 -07:00
Matt Ellis a749ac1102 Use go-yaml directly
Instead of doing the logic to see if a type has YAML tags and then
dispatching based on that to use either the direct go-yaml marshaller
or the one that works in terms of JSON tags, let's just say that we
always add YAML tags as well, and use go-yaml directly.
2017-10-20 14:01:37 -07:00
pat@pulumi.com 20e71fa5c4 Set old.Delete when previewing a CreateReplace step.
This is required to prevent an assertion when skipping a `Delete` step.
2017-10-19 13:08:17 -07:00
Pat Gavlin 9895e8006f Merge pull request #434 from pulumi/PendingDeletes
Track resources that are pending deletion in checkpoints.
2017-10-19 10:57:00 -07:00
pat@pulumi.com 23864b9459 PR feedback. 2017-10-19 10:34:23 -07:00
joeduffy 599ca8ea43 Add accessors to fetch the Pulumi project and stack names
This change adds functions, `pulumi.getProject()` and `pulumi.getStack()`,
to fetch the names of the project and stack, respectively.  These can be
handy in generating names, specializing areas of the code, etc.

This fixes pulumi/pulumi#429.
2017-10-19 08:26:57 -07:00
pat@pulumi.com 6b66437fae Track resources that are pending deletion in checkpoints.
During the course of a `pulumi update`, it is possible for a resource to
become slated for deletion. In the case that this deletion is part of a
replacement, another resource with the same URN as the to-be-deleted
resource will have been created earlier. If the `update` fails after the
replacement resource is created but before the original resource has been
deleted, the snapshot must capture that the original resource still exists
and should be deleted in a future update without losing track of the order
in which the deletion must occur relative to other deletes. Currently, we
are unable to track this information because the our checkpoints require
that no two resources have the same URN.

To fix this, these changes introduce to the update engine the notion of a
resource that is pending deletion and change checkpoint serialization to
use an array of resources rather than a map. The meaning of the former is
straightforward: a resource that is pending deletion should be deleted
during the next update.

This is a fairly major breaking change to our checkpoint files, as the
map of resources is no more. Happily, though, it makes our checkpoint
files a bit more "obvious" to any tooling that might want to grovel
or rewrite them.

Fixes #432, #387.
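
Schematically, the serialized form moves from a URN-keyed map to an
ordered list with a pending-deletion marker.  The Go shapes below are
illustrative only; the type and field names are assumptions rather than
the actual checkpoint schema:

    // Before: at most one entry per URN, with no implied ordering.
    type checkpointBefore struct {
        Resources map[string]resourceState `json:"resources"`
    }

    // After: an ordered list that preserves a valid delete/create order and
    // allows two entries with the same URN -- the live resource and the one
    // still pending deletion.
    type checkpointAfter struct {
        Resources []resourceState `json:"resources"`
    }

    type resourceState struct {
        URN    string `json:"urn"`
        Delete bool   `json:"delete,omitempty"` // delete during the next update
        // ... inputs, outputs, and other fields elided.
    }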
2017-10-18 17:09:00 -07:00
Pat Gavlin dacff3db48 Merge pull request #425 from pulumi/DrainStreamsPluginLoadFailure
Drain std{out,err} when a plugin fails to load.
2017-10-16 23:32:02 -07:00
pat@pulumi.com 9453f86c2e Implement dynamic resources.
A dynamic resource is a resource whose provider is implemented alongside
the resource itself. This provider may close over and use other
resources in the implementation of its CRUD operations. The provider
itself must be stateless, as each CRUD operation for a particular
dynamic resource type may use an independent instance of the provider.
Changes to the definition of a resource's provider result in replacement
of the resource itself (rather than a simple update), as this allows the
old provider definition to delete the old resource and the new provider
definition to create an appropriate replacement.
2017-10-16 23:06:53 -07:00
Pat Gavlin 546612a354 Drain std{out,err} when a plugin fails to load.
If a plugin fails to load after we've set up the goroutines that copy
from its std{out,err} streams, then those goroutines can end up writing
to a closed event channel. This change ensures that we properly drain
those streams in this case.
2017-10-16 21:38:11 -07:00
Matt Ellis 996ac8872d Merge pull request #420 from pulumi/rename-env-to-stack
Use `Stack` over `Environment` to describe a deployment target
2017-10-16 14:24:11 -07:00
pat@pulumi.com bdbb1b59d4 Ensure a plugin's std{out,err} streams are drained in Close().
Not doing so can cause panics, as the goroutines we use to copy these
streams can end up writing to a closed channel.
2017-10-16 13:44:37 -07:00
Matt Ellis 22c9e0471c Use Stack over Environment to describe a deployment target
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.

From a user's point of view, there are a few changes:

1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).

On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder)
2017-10-16 13:04:20 -07:00
joeduffy 301739c6b5 Add auto-parenting
This changes a few things about "components":

* Rename what was previously ExternalResource to CustomResource,
  and all of the related fields and parameters that this implies.
  This just seems like a much nicer and expected name for what
  these represent.  I realize I am stealing a name we had thought
  about using elsewhere, but this seems like an appropriate use.

* Introduce ComponentResource, to make initializing resources
  that merely aggregate other resources easier to do correctly.

* Add a withParent and parentScope concept to Resource, to make
  allocating children less error-prone.  Now there's no need to
  explicitly adopt children as they are allocated; instead, any
  children allocated as part of the withParent callback will
  auto-parent to the resource provided.  This is used by
  ComponentResource's initialization function to make initialization
  easier, including the distinction between inputs and outputs.
2017-10-15 04:38:26 -07:00
joeduffy fbfca58a3f Implement components
This change implements core support for "components" in the Pulumi
Fabric.  This work is described further in pulumi/pulumi#340, where
we are still discussing some of the finer points.

In a nutshell, resources no longer imply external providers.  It's
entirely possible to have a resource that logically represents
something but without having a physical manifestation that needs to
be tracked and managed by our typical CRUD operations.

For example, the aws/serverless/Function helper is one such type.
It aggregates Lambda-related resources and exposes a nice interface.
All of the Pulumi Cloud Framework resources are also examples.

To indicate that a resource does participate in the usual CRUD resource
provider, it simply derives from ExternalResource instead of Resource.

All resources now have the ability to adopt children.  This is purely
a metadata/tagging thing, and will help us roll up displays, provide
attribution to the developer, and even hide aspects of the resource
graph as appropriate (e.g., when they are implementation details).

Our use of this capability is ultra limited right now; in fact, the
only place we display children is in the CLI output.  For instance:

    + aws:serverless:Function: (create)
      [urn=urn:pulumi:demo::serverless::aws:serverless:Function::mylambda]
      => urn:pulumi:demo::serverless::aws:iam/role:Role::mylambda-iamrole
      => urn:pulumi:demo::serverless::aws:iam/rolePolicyAttachment:RolePolicyAttachment::mylambda-iampolicy-0
      => urn:pulumi:demo::serverless::aws:lambda/function:Function::mylambda

The bit indicating whether a resource is external or not is tracked
in the resulting checkpoint file, along with any of its children.
2017-10-14 18:30:59 -07:00
Matt Ellis e7e4e75af3 Don't examine the Checkpoint in the CLI
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
2017-10-09 18:21:55 -07:00
joeduffy 7e30dde8f4 Update plan test to new interface 2017-10-04 08:30:50 -04:00
joeduffy b7576b9b14 Add a notion of stable properties
This change adds the capability for a resource provider to indicate
that, when an action is carried out in response to a diff, a certain set
of properties will be "stable"; that is to say, they are guaranteed
not to change.  As a result, properties may be resolved to their final
values during previewing, avoiding erroneous cascading impacts.

This avoids the ever-annoying situation I keep running into when demoing:
when adding or removing an ingress rule to a security group, we ripple
the impact through the instance, and claim it must be replaced, because
that instance depends on the security group via its name.  Well, the name
is a great example of a stable property, in that it will never change, and
so this is truly unfortunate and always adds uncertainty into the demos.
Particularly since the actual update doesn't need to perform replacements.

This resolves pulumi/pulumi#330.
2017-10-04 08:22:21 -04:00
pat@pulumi.com 252fd8e6bb Remove deploy.Step.Pre.
As per @joeduffy, this is an artifact of the prior runtime model.
2017-10-03 10:03:05 -07:00
Pat Gavlin ff2a3fa242 Replace Plan.Apply with planResult.Walk. (#383)
`deploy.Plan.Apply` was only consumed by the engine, and seemed to be in
the wrong place given the API exported by the rest of `Plan` (i.e.
`Plan.Start` + `PlanIterator`). Furthermore, we were missing a reasonable
opportunity to share code between `update` and `preview`, both of which
need to walk the plan. These changes move the plan walk into `package engine`
as `planResult.Walk` and replace the `Progress` interface with a new interface,
`StepActions`, which subsumes the functionality of the former and adds support
for implementation-specific step execution. `planResult.Walk` is then
consumed by both `Engine.Deploy` and `Engine.PrintPlan`.
2017-10-02 14:26:51 -07:00
joeduffy ac2dbc80fa Add an Invoke RPC method on ResourceProvider
This change enables us to make progress on exposing data sources
(see pulumi/pulumi-terraform#29).  The idea is to have an Invoke
function that simply takes a function token and arguments, performs
the function lookup and invocation, and then returns a return value.
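
In rough terms, this adds one entry point alongside the existing CRUD
surface.  A hypothetical Go shape (the real RPC signature and property
types may differ):

    // PropertyMap is a stand-in for however arguments and results are
    // actually represented on the RPC boundary.
    type PropertyMap map[string]interface{}

    type ResourceProvider interface {
        // Invoke looks up the function identified by tok, calls it with
        // args, and returns its result.
        Invoke(tok string, args PropertyMap) (PropertyMap, error)
        // ... existing Check/Create/Update/Delete methods elided.
    }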
2017-09-30 14:53:27 -04:00
pat@pulumi.com 28d02e232b Fix #372.
Print "modified" rather than "modifyd". This introduces a new method,
`resource.StepOp.PastTense()`, which returns the past tense description
of the operation.
2017-09-28 09:28:36 -07:00
Joe Duffy b4646db39b Merge branch 'master' into RenameVerbs 2017-09-23 11:31:29 -07:00
joeduffy 141a112950 Improve output formatting
This change improves our output formatting by generally adding
fewer prefixes.  As shown in pulumi/pulumi#359, we were being
excessively verbose in many places, including prefixing every
console.out with "langhost[nodejs].stdout: ", displaying full
stack traces for simple errors like missing configuration, etc.

Overall, this change includes the following:

* Don't prefix stdout and stderr output from the program, other
  than the standard "info:" prefix.  I experimented with various
  schemes here, but they all felt gratuitous.  Simply emitting
  the output seems fine, especially as it's closer to what would
  happen if you just ran the program under node.

* Do NOT make writes to stderr fail the plan/deploy.  Previously
  we assumed that any console.errors, for instance, meant that
  the overall program should fail.  This simply isn't how stderr
  is treated generally and meant you couldn't use certain
  logging techniques and libraries, among other things.

* Do make sure that stderr writes in the program end up going to
  stderr in the Pulumi CLI output, however, so that redirection
  works as it should.  This required a new Infoerr log level.

* Make a small fix to the planning logic so we don't attempt to
  print the summary if an error occurs.

* Finally, add a new error type, RunError, that when thrown and
  uncaught does not result in a full stack trace being printed.
  Anyone can use this, however, we currently use it for config
  errors so that we can terminate with a pretty error message,
  rather than the monstrosity shown in pulumi/pulumi#359.
2017-09-23 05:20:11 -07:00
pat@pulumi.com 69341fa7c8 push is dead; long live update.
After discussion with Joe and Luke, we've decided to use `update` instead
of `push` as it more intuitively fits the operation being performed.
2017-09-22 17:23:40 -07:00
pat@pulumi.com 597db186ec Renames: plan -> preview, deploy -> push.
Part of #353.

These changes also remove all command aliases from the `pulumi` command.
2017-09-22 15:28:03 -07:00
Joe Duffy f6e694c72b Rename pulumi-fabric to pulumi
This includes a few changes:

* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.

* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.

* The CLI is renamed from lumi to pulumi.
2017-09-21 19:18:21 -07:00
Matt Ellis 25ae463915 Listen only on 127.0.0.1
Instead of binding on 0.0.0.0 (which will listen on every interface)
let's only listen on localhost. On windows, this both makes the
connection Just Work and also prevents the Windows Firewall from
blocking the listen (and displaying UI saying it has blocked an
application and asking if the user should allow it)
2017-09-21 10:56:45 -07:00
joeduffy 22387d24cd Switch to a --parallel=P flag
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism.  We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized.  We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it.  And in any case, the --parallel=P capability will be useful.
2017-09-17 08:10:46 -07:00
joeduffy 087deb7643 Add optional dependsOn to Resource constructors
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources.  We have an extremely strong
desire to resort to using this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case.  Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.

This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106.  I also think there's a good chance we will flip
the default, so that serial execution is the default, so that developers
who don't benefit from the parallelism don't need to worry about dependsOn
in awkward ways.  This tends to be the way most tools (like Make) operate.

This fixes pulumi/pulumi-fabric#335.
2017-09-15 16:38:52 -07:00
joeduffy 016adec9f7 Allow unknowns in defaults
The "defaults" may include computed values, so we should pass
AllowUnknowns as true during unmarshaling of Check calls.
2017-09-15 09:56:42 -07:00
joeduffy 5b779798c4 Fix (don't torch) LUMIDL
This change finishes the conversion of LUMIDL over to the new
runtime model, with the appropriate code generation changes.

It turns out the old model actually had a flaw in it anyway that we
simply didn't hit because we hadn't been stressing output properties
nearly as much as the new model does.  This resulted in needing to
plumb the rejection (or allowance) of computed properties more
deeply into the resource property marshaling/unmarshaling logic.

As of these changes, I can run the GitHub provider again locally.

This change fixes pulumi/pulumi-fabric#332.
2017-09-14 16:40:44 -07:00
joeduffy 9c7f6b678c Bring LUMIDL up to code
This gets LUMIDL to generate code in the new way.
2017-09-11 16:58:25 -07:00
joeduffy 8aba3aae12 Upgrade gRPC to 1.6.0; use full addresses
This change upgrades gRPC to 1.6.0 to pick up a few bug fixes.

We also use the full address for gRPC endpoints, including the
interface name, as otherwise we pick the wrong interface on Linux.
2017-09-09 07:37:10 -07:00
joeduffy 67e5750742 Fix a bunch of Linux issues
There's a fair bit of clean up in here, but the meat is:

* Allocate the language runtime gRPC client connection on the
  goroutine that will use it; this eliminates race conditions.

* The biggie: there *appears* to be a bug in gRPC's implementation
  on Linux, where it doesn't implement WaitForReady properly.  The
  behavior I'm observing is that RPC calls will not retry as they
  are supposed to, but will instead spuriously fail during the RPC
  startup.  To work around this, I've added manual retry logic in
  the shared plugin creation function so that we won't even try
  to use the client connection until it is in a well-known state.
  pulumi/pulumi-fabric#337 tracks getting to the bottom of this and,
  ideally, removing the work around.

The other minor things are:

* Separate run.js into its own module, so it doesn't include
  index.js and do a bunch of random stuff it shouldn't be doing.

* Allow run.js to be invoked without a --monitor.  This makes
  testing just the run part of invocation easier (including
  config, which turned out to be super useful as I was debugging).

* Tidy up some messages.
2017-09-08 15:11:09 -07:00
joeduffy ebc3bf1dd0 Reap the language host process 2017-09-07 15:03:48 -07:00
joeduffy f2d53459eb Add the notion of stable states
If a resource's planning operation is to do nothing, we can safely
assume that all of its properties are stable.  This can be used during
planning to avoid cascading updates that we know will never happen.
2017-09-05 10:01:00 -07:00
joeduffy f3cf73d790 Change plugin prefixes to "pulumi-" 2017-09-04 11:35:21 -07:00
joeduffy fe66a0eba7 Use the new URN during creates 2017-09-04 11:35:21 -07:00
joeduffy 77bbf443bc Synchronize with the resource channel properly 2017-09-04 11:35:21 -07:00
joeduffy f718ab6501 Add a runtime.Log class
This change adds the ability to perform runtime logging, including
debug logging, that wires up to the Pulumi Fabric engine in the usual
ways.  Most stdout/stderr will automatically go to the right place,
but this lets us add some debug tracing in the implementation of the
runtime itself (and should come in handy in other places, like perhaps
the Pulumi Framework and even low-level end-user code).
2017-09-04 11:35:21 -07:00
joeduffy a13a83b067 Pass the monitor address correctly to language plugins 2017-09-04 11:35:21 -07:00
joeduffy 9f160a7f91 Configure providers at well-defined points
As explained in pulumi/pulumi-fabric#293, we were a little ad-hoc in
how configuration was "applied" to resource providers.

In fact, config wasn't ever communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and that's that, and where generally
speaking resource providers never communicate directly with the language
runtime, we need to take a different approach.

As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables filtered to
just that provider's package.  This fixes pulumi/pulumi#293.
2017-09-04 11:35:21 -07:00
joeduffy 70d0fac1c0 Simplify resource provider RPC interface
This change simplifies the provider RPC interface slightly:

1) Eliminate Get.  We really don't need it anymore.  There are
   several possibly-interesting scenarios down the road that may
   demand it, but when we get there, we can consider how best to
   bring this back.  Furthermore, the old-style Get remains mostly
   incompatible with Terraform anyway.

2) Pass URNs, not type tokens, across the RPC boundary.  This gives
   the provider access to more interesting information: the type,
   still, but also the name (which is no longer an object property).
2017-09-04 11:35:21 -07:00
joeduffy 590e9e539b Rename Lumi.yaml to Pulumi.yaml
And also eliminate lots of accumulated cruft around "packfiles", etc.
in the workspace code.
2017-09-04 11:35:21 -07:00
joeduffy 9599ea2e55 Get planning engine unit tests running again
We now build and run cleanly locally (for unit tests).  The
integration tests are still on the floor at the moment.
2017-09-04 11:35:21 -07:00
joeduffy f189c40f35 Wire up Lumi to the new runtime strategy
🔥 🔥 🔥  🔥 🔥 🔥

Getting closer on #311.
2017-09-04 11:35:21 -07:00
Luke Hoban 24393cfe7b Read path assets into memory instead of holding open file handles
This helps avoid running into file handle limits when creating archives including thousands of node_modules files.

Tracking a more complete fix through all other codepaths related to assets as part of #325.
2017-08-29 13:33:02 -07:00
joeduffy 627d97d83f Close open Blobs
This change ensures we close all Blobs in the asset/archive logic.
In particular, the archive.Read function returns a map of files to
Blobs and after we are done copying the contents we must ensure
that we invoke Close, otherwise we may leak file handles, sockets,
and so on.  This may or may not be the culprit to the "too many
files open" errors we are hitting while deploying the M5 bits.
2017-08-28 18:52:51 -07:00
joeduffy a626dcf6a3 Prettify the CLI in a few places
This changes a few things in the CLI, mostly just prettying it up:

    * Label all steps more clearly with the kind of step.  Also
      unify the way we present this during planning and deployment.

    * Summarize the changes that *did not* get made just as clearly
      as those that did.  In other words, stuff like this:

        info: 2 resources changed:
            +1 resource created
            -1 resource deleted
            5 resources unchanged

      and

        info: no resources required
            5 resources unchanged

    * Always print output properties when they are pertinent.
      This includes creates, replacements, and updates.

    * Show replacement creates and deletes very distinctly.  The
      create parts show up minty green and the delete parts show up
      rosey red.  These are the "physical" steps, compared to the
      "logical" step of replacement (which remains marigold).

      I still don't love where we are here.  The asymmetry between
      planning and deployment bugs me, and could be surprising.
      ("Hey, my deploy doesn't look like my plan!")  I don't know
      what developers will want to see here and I feel like in
      general we are spewing far too much into the CLI to make it
      even useful for anything but diagnosing failures afterwards.

      I propose that we should do a deep dive on this during the
      CLI epic, pulumi/pulumi-service#2.

This resolves pulumi/pulumi-fabric#305.
2017-08-06 10:05:51 -07:00
joeduffy ff0eb81944 Export urnName constants
This avoids us needing to hard-code the urnName property in
various tools, in case we ever need to change it again down the road.
2017-08-05 08:32:50 -07:00
joeduffy 007c2f216d Rename name property to urnName
At the moment, we permit resources to carry a name, which the
engine uses as part of URN creation.  Unfortunately, the property
"name" has a very high chance of conflicting with meaningful
user-authored properties.  And furthermore this sort of name,
although key to creating URNs which are core to how the overall
system performs deployments and manages resources, are seldom
used programmatically in Lumi programs.  As a result, it's a real
nuisance that we stole the good name.

This change renames that property from "name" to "urnName".  This
not only has a lower likelihood of conflicting, but it also looks
reasonable sitting alongside the "urn" property.  In fact, in some
future universe, after some of the upcoming runtime changes, we
may not even need the name property on the objects whatsoever.

I had originally toyed with the idea of eliminating the Resource
versus NamedResource distinction.  This is certainly simpler and
remains a possibility.  I didn't do that right now, however, because
the flexibility of letting resource providers name resources however
they see fit still seems possibly useful.  For example, we keep
talking about whether functions can be auto-named based on hashes.
Until we've run those conversations to ground, I'd hate to do some
work that just needs to be undone in order to enable a scenario that
has a non-trivial likelihood of us wanting to explore.
2017-08-05 08:20:57 -07:00
joeduffy c0e4bdb03b This shouldn't be here!
😬
2017-08-03 18:09:14 -07:00
joeduffy a160741931 Tolerate computed and output properties
We are now freely flowing computed and output properties across the
RPC boundary with providers.  As such, we need to tolerate them in
a few more places.  Namely, mapping to and from regular non-resource
property values, and also when copying RPC resource state back onto
live runtime objects.
2017-08-03 11:01:38 -07:00
joeduffy 35aa6b7559 Rename pulumi/lumi to pulumi/pulumi-fabric
We are renaming Lumi to Pulumi Fabric.  This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
2017-08-02 09:25:22 -07:00
joeduffy 5fb014e53c Explicitly track default properties
This changes the RPC interfaces between Lumi and providers ever so
slightly, so that we can track default properties explicitly.  This
is required to perform accurate diffing between inputs provided by
the developer, inputs provided by the system, and outputs.  This is
particularly important for default values that may be indeterminate,
such as those we use in the bridge to auto-generate unique IDs.
Otherwise, we fail to reapply defaults correctly, and trick the
provider into thinking that properties changed when they did not.

This is a small step towards pulumi/lumi#306, in which we will defer
even more responsibility for diffing semantics to the providers.
2017-07-31 18:26:15 -07:00
joeduffy 300af62747 Fix erroneous assert when unmarshaling computed properties 2017-07-31 11:44:34 -07:00
joeduffy 00442b73b4 Alter the way unknown properties are serialized
This change serializes unknown properties anywhere in the entire
property structure, including deeply embedded inside object maps, etc.

This is now done in such a way that we can recover both the computed
nature of the serialized property, along with its expected eventual
type, on the other side of the RPC boundary.

This will let us have perfect fidelity with the new bridge's view on
computed properties, rather than special casing them on "one side".
2017-07-21 14:00:30 -07:00
joeduffy b8be7fa5e6 Add a handy Copy function to PropertyMap 2017-07-21 14:00:30 -07:00
Luke Hoban 9243bd5f3f Fix tests after previous commit 2017-07-21 14:00:30 -07:00
joeduffy 40747a94bf Merge inputs with outputs
For Update and Delete operations, we provided just the input state
for a resource.  This is insufficient, because the provider may need
to depend on output state from the Create or prior Update operations.
This change merges the output atop the input during the step application.
2017-07-21 14:00:30 -07:00
joeduffy 4e02105355 Pass old state to the provider's API 2017-07-21 14:00:30 -07:00
joeduffy ae92e68902 Return state as part of Create and Update
As part of the bridge bringup, I've discovered that the property state
returned from Creates does *not* always equal the state that is then
read from calls to Get.  (I suspect this is a bug and that they should
be equivalent, but I doubt it's fruitful to try and track down all
occurrences of this; I bet it's widespread).  To cope with this, we will
return state from Create and Update, instead of issuing a call to Get.
This was a design we considered to start with and frankly didn't have
a super strong reason to do it the current way, other than that it seemed
elegant to place all of the Get logic in one place.

Note that providers may choose to return nil, in which case we will read
state from the provider in the usual Get style.
2017-07-21 14:00:29 -07:00
joeduffy ceb32b9e2b Add a sig field to Asset/Archive
This change mirrors the dynamic marshaling structure on the static
definition of the Asset and Archive types.  This ensures that they
marshal correctly even when deeply embedded inside other structures.
2017-07-18 09:30:31 -07:00
joeduffy 91c90cf2a9 Add diffing logic for assets/archives 2017-07-17 12:11:15 -07:00
joeduffy 002618e605 Add some more asset serialization round-tripping tests 2017-07-17 11:30:10 -07:00
joeduffy 4d708c8567 Fix asset diffing
This change brings the same typed serialization we use for RPC
to the serialization of deployments.  This ensures that we get
repeatable diffs from one deployment to the next.
2017-07-17 10:38:57 -07:00
joeduffy bda607abd8 Permit -1 for randlen and maxlen
This allows -1 for randlen and maxlen to use defaults.  The default
behavior is that randlen uses sha1.Size and maxlen is "no max".
2017-07-15 09:59:44 -07:00
joeduffy c61dcb5206 Revert "Rename Lumi resource properties"
This reverts commit c3db70849d.

I've opted to take a new strategy to ensure the bridge properties
don't conflict (with manual renames), similar to the name property.
2017-07-15 09:33:23 -07:00
joeduffy 0f18bb40b2 Tolerate some nils in important places 2017-07-14 18:08:33 -07:00