Commit graph

725 commits

Author SHA1 Message Date
CyrusNajmabadi 1387afec8f
Color 'reads' as cyan so they don't look like 'creates'. (#3236) 2019-09-18 09:49:13 -07:00
CyrusNajmabadi f788eb8fc1
Add support for refreshing specific targets. (#3225) 2019-09-17 18:14:10 -07:00
Pat Gavlin 82204230e1
Improve tracing support. (#3238)
* Fix some tracing issues.

- Add endpoints for `startUpdate` and `postEngineEventsBatch` so that
  spans for these invocations have proper names
- Inject a tracing span when walking a plan so that resource operations
  are properly parented
- When handling gRPC calls, inject a tracing span into the call's
  metadata if no span is already present so that resource monitor and
  engine spans are properly parented
- Do not trace client gRPC invocations of the empty method so that these
  calls (which are used to determine server availability) do not muddy
  the trace. Note that I tried parenting these spans appropriately, but
  doing so broke the trace entirely.

With these changes, the only unparented span in a typical Pulumi
invocation is a single call to `getUser`. This span is unparented
because that call does not have a context available. Plumbing a context
into that particular call is surprisingly tricky, as it is often called
by other context-less functions.

* Make tracing support more flexible.

- Add support for writing trace data to a local file using Appdash
- Add support for viewing Appdash traces via the CLI
2019-09-16 14:16:43 -07:00
Paul Stack 8d1b725840
GH-2319: Increase the grpc.MaxCallRecvMsgSize (#3201)
Fixes: #2319

In #2319, a user is hitting the gRPC limit on the message size the
server can receive when uploading ec2 user-data

This commit doubles the limit that can be sent from `1024*1024*4` to
`1024*1024*8`
2019-09-09 21:31:54 +03:00
Alex Clemmer 99d70e4610 Introduce MarshalOptions.{RejectAsset, RejectArchive}
Not all resource providers support Pulumi's Asset and Archive types. In
particular, the Kubernetes provider should reject any resource
definition that contains either of these types.

This commit will introduce two MarshalOptions that will make it easy for
the Kubernetes provider to guarantee that no properties of these types are
in a resource request, as it's deserializing the request from the
engine.
2019-08-26 15:19:14 -07:00
Pat Gavlin 2455564ddc
Allow IDs to change during import. (#3133)
This is necessary for resources like `aws.ec2.RouteTableAssociation`.

Part of https://github.com/pulumi/pulumi-aws/issues/708.
2019-08-23 15:00:24 -07:00
Pat Gavlin 42fc75fffe
Fail read steps with missing resources. (#3123)
Just what it says on the tin.

Fixes #262.
2019-08-21 10:09:02 -07:00
Pat Gavlin 8745440c1b
Allow users to explicitly disable delete-before-replace. (#3118)
With these changes, a user may explicitly set `deleteBeforeReplace` to
`false` in order to disable DBR behavior for a particular resource. This
is the SDK + CLI escape hatch for cases where the changes in
https://github.com/pulumi/pulumi-terraform/pull/465 cause undesirable
behavior.
2019-08-20 15:51:02 -07:00
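A minimal sketch of the escape hatch described in the commit above, assuming the Node.js SDK and `@pulumi/aws` (the resource and property are illustrative):

```
import * as aws from "@pulumi/aws";

// Explicitly opt this resource out of delete-before-replace behavior, even
// if the provider would otherwise choose it.
const bucket = new aws.s3.Bucket("logs", {
    forceDestroy: true,
}, {
    deleteBeforeReplace: false,
});
```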
Paul Stack 2c0510a23e
Update the JSON representation of customTimeouts in state (#3101)
Omit them when empty and change the casing of Create, Update and
Delete
2019-08-21 01:01:27 +03:00
Matt Ellis 342f8311a1 Fix renaming a freshly created stack using the local backend
Attempting to `pulumi stack rename` a stack which had been created but
never updated, when using the local backend, was broken because
code-paths were not hardened against the snapshot being `nil` (which
is the case for a stack before the initial deployment had been done).

Fixes #2654
2019-08-16 13:39:34 -07:00
Paul Stack f8db8e4209
Allow resource IDs to change on refresh steps (#3087)
* Allow resource IDs to change on refresh steps

This is a requirement for us to be able to move forward with newer
versions of the Terraform Azurerm provider. In v1.32.1, there was
a state migration that changed the ID format of the Azure table
storage resource.

We used to have a check in place that required the old ID to equal the
new ID. That check has been removed, and we now allow the ID to change
during the RefreshStep.

* Update pkg/resource/deploy/step.go

Co-Authored-By: Pat Gavlin <pat@pulumi.com>
2019-08-16 21:04:03 +03:00
Matt Ellis 0bb4e6d70b Respond to PR feedback
Address post commit feedback from Cyrus on
pulumi/pulumi#3071
2019-08-15 12:42:51 -07:00
Matt Ellis 9308246114 Do not taint all stack outputs as secrets if just one is
When using StackReference, if the stack you reference contains any
secret outputs, we have to mark the entire `outputs` member as a
secret output. This is because we only track secretness on a per
`Output<T>` basis.

For `getSecret` and friends, however, we know the name of the output
you are looking up and we can be smarter about if the returned
`Output<T>` should be treated as a secret or not.

This change augments the provider for StackReference such that it also
returns a list of top-level stack output names whose values contain
secrets. In the language SDKs, we use this information, when present,
to decide if we should return an `Output<T>` that is marked as a
secret or not. Since the SDK and CLI are independent components, care
is taken to ensure that when the CLI does not return this information,
we behave as we did before (i.e. if any output is a secret, we treat
every output as a secret).

Fixes #2744
2019-08-13 16:11:38 -07:00
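A rough sketch of the behavior described above from the Node.js SDK's perspective (the stack and output names are illustrative):

```
import * as pulumi from "@pulumi/pulumi";

const other = new pulumi.StackReference("acme/database/prod");

// `outputs` must be treated as secret if *any* output of the referenced
// stack is secret, because secretness is tracked per Output<T>.
const everything = other.outputs;

// getOutput knows which named outputs actually contain secrets, so only
// those come back wrapped as secret Output<T>s (when the CLI reports
// the per-output secret names; otherwise the old behavior applies).
const endpoint = other.getOutput("endpoint");   // plain
const password = other.getOutput("password");   // secret, if marked as such
```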
Alex Clemmer 6b4832b416 Emit stderr of npm install
If we don't process and report the stderr of `npm install`, the output
is "orphaned" during error condition, and only something like "exit code
1" is reported.
2019-08-13 12:48:16 -07:00
Alex Clemmer ef8cc236c4 Implement --policy-pack flag on up and preview
Fixes pulumi/pulumi-policy#43.
2019-08-12 12:45:48 -07:00
Pat Gavlin 6983e2639b
Fix conversion of empty array properties. (#3047)
Empty `[]interface{}` values were being converted to array property
values with a `nil` element, and empty array property values were being
converted to `nil` `[]interface{}` values. These changes fix the
converters to return empty but non-nil values in both cases.

This is part of the fix for
https://github.com/pulumi/pulumi-kubernetes/issues/693.
2019-08-07 11:42:40 -07:00
Pat Gavlin 62189e6053
Harden asset and archive deserialization. (#3042)
- Ensure that type assertions are guarded, and that incorrectly-typed
  properties return errors rather than panicking
- Expand the asset/archive tests in the Node SDK to ensure that eventual
  archives and assets serialize and deserialize correctly

Fixes #2836.
Fixes #3016.
2019-08-06 16:32:05 -07:00
Alex Clemmer 6360cba588 Improve "project not found" error messages
Fixes pulumi/pulumi-policy#36.
Fixes pulumi/pulumi-policy#37.
2019-08-05 14:14:20 -07:00
Luke Hoban 6ed4bac5af
Support additional cloud secrets providers (#2994)
Adds support for additional cloud secrets providers (AWS KMS, Azure KeyVault, Google Cloud KMS, and HashiCorp Vault) as the encryption backend for Pulumi secrets. This augments the previous choice between using the app.pulumi.com-managed secrets encryption or a fully-client-side local passphrase encryption.

This is implemented using the Go Cloud Development Kit support for pluggable secrets providers.

Like our cloud storage backend support, which also uses the Go Cloud Development Kit, this PR also bleeds through to users the URI schemes that the Go CDK defines for specifying each of the secrets providers - like `awskms://alias/LukeTesting?region=us-west-2` or `azurekeyvault://mykeyvaultname.vault.azure.net/keys/mykeyname`.

Also like our cloud storage backend support, this PR doesn't solve for how to configure the cloud provider client used to resolve the URIs above - the standard ambient credentials are used in both cases. Eventually, we will likely need to provide ways for both of these features to be configured independently of each other and of the providers used for resource provisioning.
2019-08-02 16:12:16 -07:00
Chris Smith 17ee050abe
Refactor the way secrets managers are provided (#3001) 2019-08-01 10:33:52 -07:00
Pat Gavlin 67ec74bdc5
Pass ignoreChanges to providers. (#3005)
These changes add support for passing `ignoreChanges` paths to resource
providers. This is intended to accommodate providers that perform diffs
between resource inputs and resource state (e.g. all Terraform-based
providers, the k8s provider when using API server dry-runs). These paths
are specified using the same syntax as the paths used in detailed diffs.

In addition to passing these paths to providers, the existing support
for `ignoreChanges` in inputs has been extended to accept paths rather
than top-level keys. It is an error to specify a path that is missing
one or more component in the old or new inputs.

Fixes #2936, #2663.
2019-07-31 11:39:07 -05:00
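A hedged sketch of the extended `ignoreChanges` option described above, assuming the Node.js SDK and `@pulumi/aws` (the resource and path are illustrative; the path syntax follows the detailed-diff paths mentioned in the commit):

```
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("site", {
    tags: { Owner: "platform", Stage: "dev" },
}, {
    // Paths (not just top-level keys) can now be ignored; the paths are also
    // forwarded to providers that diff inputs against state.
    ignoreChanges: ["tags.Stage"],
});
```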
Pat Gavlin c6916051f0
Use a bag for misc. resource options in deploytest (#2977)
Most of these options are typically left unset. In order to make it
easier to update the lifecycle test when adding new options, collect
them in a bag so that most call sites can go without being updated.
2019-07-25 11:18:40 -07:00
Pat Gavlin fa05e5cb05
Migrate old providers without outputs. (#2973)
If we encounter a provider with old inputs but no old outputs when reading
a checkpoint file, use the old inputs as the old outputs. This handles the
scenario where the CLI is being upgraded from a version that did not
reflect provider inputs to provider outputs, and a provider is being
upgraded from a version that did not implement `DiffConfig` to a version
that does.

Fixes https://github.com/pulumi/pulumi-kubernetes/issues/645.
2019-07-23 13:39:21 -07:00
Alex Clemmer ed5b8437d1 Batch policy violation reporting for pulumi preview
Currently, `pulumi preview` fails immediately when any resource
definition in a Pulumi app is found to be in violation of a resource
policy. But, users would like `preview` to report as many policy
violations as it can before terminating with an error, so that they can
fix many of them before running `preview` again.

This commit will thus change `pulumi preview` to do this sort of
"batching" for policy violations. The engine will attempt to run the
entire preview step, validating every resource definition with the
relevant known resource policies, before finally reporting an error if
any violations are detected.

Fixes pulumi/pulumi-policy#31
2019-07-22 20:42:17 -07:00
Alex Clemmer 716a69ced2 Keep unknowns when marshalling resources in call to Analyze
Fixes pulumi/pulumi-aws#661.
2019-07-19 12:23:36 -07:00
Alex Clemmer 0850c88a97 Address comments 2019-07-16 00:58:33 -07:00
Alex Clemmer 4c069d5cf6 Address lint warnings 2019-07-16 00:58:33 -07:00
Alex Clemmer 9f809b9122 Run required policies as part of all updates 2019-07-16 00:58:33 -07:00
Alex Clemmer 3fc03167c5 Implement cmd/run-policy-pack
This command will cause `pulumi policy publish` to behave in much the
same way `pulumi up` does -- if the policy program is in TypeScript, we
will use ts-node to attempt to compile in-process before executing, and
fall back to plain-old node.

We accomplish this by moving `cmd/run/run.ts` into a generic helper
package, `runtime/run.ts`, which slightly generalizes the use cases
supported (notably, allowing us to exec some program outside of the
context of a Pulumi stack).

This new package is then called by both `cmd/run/index.ts` and
`cmd/run-policy-pack/index.ts`.
2019-07-16 00:58:33 -07:00
Alex Clemmer fc80eaaa3d Implement GetAnalyzerInfo in analyzer plugin 2019-07-16 00:58:33 -07:00
Paul Stack 02ffff8840
Addition of Custom Timeouts (#2885)
* Plumbing the custom timeouts from the engine to the providers

* Plumbing the CustomTimeouts through to the engine and adding test to show this

* Change the provider proto to include individual timeouts

* Plumbing the CustomTimeouts from the engine through to the Provider RPC interface

* Change how the CustomTimeouts are sent across RPC

These errors were spotted in testing. We can now see that the timeout
information is arriving in the RegisterResourceRequest

```
req=&pulumirpc.RegisterResourceRequest{
           Type:                    "aws:s3/bucket:Bucket",
           Name:                    "my-bucket",
           Parent:                  "urn:pulumi:dev::aws-vpc::pulumi:pulumi:Stack::aws-vpc-dev",
           Custom:                  true,
           Object:                  &structpb.Struct{},
           Protect:                 false,
           Dependencies:            nil,
           Provider:                "",
           PropertyDependencies:    {},
           DeleteBeforeReplace:     false,
           Version:                 "",
           IgnoreChanges:           nil,
           AcceptSecrets:           true,
           AdditionalSecretOutputs: nil,
           Aliases:                 nil,
           CustomTimeouts:          &pulumirpc.RegisterResourceRequest_CustomTimeouts{
               Create:               300,
               Update:               400,
               Delete:               500,
               XXX_NoUnkeyedLiteral: struct {}{},
               XXX_unrecognized:     nil,
               XXX_sizecache:        0,
           },
           XXX_NoUnkeyedLiteral: struct {}{},
           XXX_unrecognized:     nil,
           XXX_sizecache:        0,
       }
```

* Changing the design to use strings

* CHANGELOG entry to include the CustomTimeouts work

* Changing custom timeouts to be passed around the engine as converted value

We don't want to pass around strings - the user can provide a string, but we want
to make the engine aware of the timeout in seconds as a float64
2019-07-16 00:26:28 +03:00
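A small sketch of the resulting user-facing option, assuming the Node.js SDK and `@pulumi/aws` (the values are illustrative; per the last bullet above, users supply duration strings and the engine converts them to seconds):

```
import * as aws from "@pulumi/aws";

// CustomTimeouts are provided as duration strings on the resource options;
// the engine converts them before handing them to the provider.
const bucket = new aws.s3.Bucket("slow-bucket", {}, {
    customTimeouts: {
        create: "5m",
        update: "10m",
        delete: "20m",
    },
});
```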
Pat Gavlin e1a52693dc
Add support for importing existing resources. (#2893)
A resource can be imported by setting the `import` property in the
resource options bag when instantiating a resource. In order to
successfully import a resource, its desired configuration (i.e. its
inputs) must not differ from its actual configuration (i.e. its state)
as calculated by the resource's provider.

There are a few interesting state transitions hiding here when importing
a resource:
1. No prior resource exists in the checkpoint file. In this case, the
   resource is simply imported.
2. An external resource exists in the checkpoint file. In this case, the
   resource is imported and the old external state is discarded.
3. A non-external resource exists in the checkpoint file and its ID is
   different from the ID to import. In this case, the new resource is
   imported and the old resource is deleted.
4. A non-external resource exists in the checkpoint file, but the ID is
   the same as the ID to import. In this case, the import ID is ignored
   and the resource is treated as it would be in all cases except for
   changes that would replace the resource. In that case, the step
   generator issues an error that indicates that the import ID should be
   removed: were we to move forward with the replace, the new state of
   the stack would fall under case (3), which is almost certainly not
   what the user intends.

Fixes #1662.
2019-07-12 11:12:01 -07:00
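A hedged sketch of the `import` option described above, assuming the Node.js SDK and `@pulumi/aws` (the bucket name/ID and inputs are illustrative):

```
import * as aws from "@pulumi/aws";

// Adopt an existing bucket into the stack. The declared inputs must match
// the resource's actual configuration or the import will fail.
const legacy = new aws.s3.Bucket("legacy", {
    bucket: "my-existing-bucket-name",
    acl: "private",
}, {
    import: "my-existing-bucket-name",
});
```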
Mikhail Shilkov e30e6208a0 Normalize Windows paths for directory archive (#2887)
* Normalize Windows paths for directory archive

* Changelog

* Remove the redundant check
2019-07-02 00:04:24 +03:00
Pat Gavlin 6e5c4a38d8
Defer all diffs to resource providers. (#2849)
These changes make a subtle but critical adjustment to the process the
Pulumi engine uses to determine whether or not a difference exists
between a resource's actual and desired states, and adjusts the way this
difference is calculated and displayed accordingly.

Today, the Pulumi engine gets the first chance to decide whether or not
there is a difference between a resource's actual and desired states. It
does this by comparing the current set of inputs for a resource (i.e.
the inputs from the running Pulumi program) with the last set of inputs
used to update the resource. If there is no difference between the old
and new inputs, the engine decides that no change is necessary without
consulting the resource's provider. Only if there are changes does the
engine consult the resource's provider for more information about the
difference. This can be problematic for a number of reasons:

- Not all providers do input-input comparison; some do input-state
  comparison
- Not all providers are able to update the last deployed set of inputs
  when performing a refresh
- Some providers--either intentionally or due to bugs--may see changes
  in resources whose inputs have not changed

All of these situations are confusing at the very least, and the first
is problematic with respect to correctness. Furthermore, the display
code only renders diffs it observes rather than rendering the diffs
observed by the provider, which can obscure the actual changes detected
at runtime.

These changes address both of these issues:
- Rather than comparing the current inputs against the last inputs
  before calling a resource provider's Diff function, the engine calls
  the Diff function in all cases.
- Providers may now return a list of properties that differ between the
  requested and actual state and the way in which they differ. This
  information will then be used by the CLI to render the diff
  appropriately. A provider may also indicate that a particular diff is
  between old and new inputs rather than old state and new inputs.

Fixes #2453.
2019-07-01 12:34:19 -07:00
CyrusNajmabadi 7b8421f0b2
Fix crash when there were multiple duplicate aliases to the same resource. (#2865) 2019-06-23 02:16:18 -07:00
Matt Ellis eb3a7d0a7a Fix up some spelling errors
@keen99 pointed out that newer versions of golangci-lint were failing
due to some spelling errors. This change fixes them up. We also now
have a work item to track moving to a newer golangci-lint tool in
the future.

Fixes #2841
2019-06-18 15:30:25 -07:00
Alex Clemmer 0fc4bc7885 Remove policy ID from policy API 2019-06-13 17:39:30 -07:00
Alex Clemmer 8b7d329c69 Use Analyzer PB in analyzer code 2019-06-13 16:04:13 -07:00
Alex Clemmer 02788b9b32 Implement listResourceOutputs in the Node.js SDK
This commit will expose the new `Invoke` routine that lists resource
outputs through the Node.js SDK.

This is implemented via a new API, `EnumerablePromise`, which is a
collection of simple query primitives built onto the `Promise` API. The
query model is lazy and LINQ-like, and generally intended to make
`Promise` simpler to deal with in query scenarios. See #2601 for more
details.

Fixes #2600.
2019-06-03 14:56:49 -07:00
Sean Gillespie 2870518a64 Refine resource replacement logic for providers (#2767)
This commit touches an intersection of a few different provider-oriented
features that combined to cause a particularly severe bug that made it
impossible for users to upgrade provider versions without seeing
replacements of their resources.

For some context, Pulumi models all providers as resources and places
them in the snapshot like any other resource. Every resource has a
reference to the provider that created it. If a Pulumi program does not
specify a particular provider to use when performing a resource
operation, the Pulumi engine injects one automatically; these are called
"default providers" and are the most common ways that users end up with
providers in their snapshot. Default providers can be identified by
their name, which is always prefixed with "default".

Recently, in an effort to make the Pulumi engine more flexible with
provider versions, it was made possible for the engine to have multiple
default providers active for a provider of a particular type, which was
previously not possible. Because a provider is identified as a tuple of
package name and version, it was difficult to find a name for these
duplicate default providers that did not cause additional problems. The
provider versioning PR gave these default providers a name that was
derived from the version of the package. This proved to be a problem,
because when users upgraded from one version of a package to another,
this changed the name of their default provider which in turn caused all
of their resources created using that provider (read: everything) to be
replaced.

To combat this, this PR introduces a rule that the engine will apply
when diffing a resource to determine whether or not it needs to be
replaced: "If a resource's provider changes, and both old and new
providers are default providers whose properties do not require
replacement, proceed as if there were no diff." This allows the engine
to gracefully recognize and recover when a resource's default provider changes
names, as long as the provider's config has not changed.
2019-06-03 12:16:31 -07:00
Matt Ellis c201d92380 Use server information from NodeJS host for fetching plugins 2019-06-03 09:31:18 -07:00
Matt Ellis 917f3738c5 Add --server to pulumi plugin install
Previously, when the CLI wanted to install a plugin, it used a special
method, `DownloadPlugin` on the `httpstate` backend to actually fetch
the tarball that had the plugin. The reason for this is largely tied
to history, at one point during a closed beta, we required presenting
an API key to download plugins (as a way to enforce folks outside the
beta could not download them) and because of that it was natural to
bake that functionality into the part of the code that interfaced with
the rest of the API from the Pulumi Service.

The downside here is that it means we need to host all the plugins on
`api.pulumi.com` which prevents community folks from being able to
easily write resource providers, since they have to manually manage
the process of downloading a provider to a machine and getting it on
the `$PATH` or putting it in the plugin cache.

To make this easier, we add a `--server` argument you can pass to
`pulumi plugin install` to control the URL that it attempts to fetch
the tarball from. We still have prescriptive guidance on how the
tarball must be
named (`pulumi-[<type>]-[<provider-name>]-vX.Y.Z.tar.gz`) but the base
URL can now be configured.

Folks publishing packages can use install scripts to run `pulumi
plugin install` passing a custom `--server` argument, if needed.

There are two improvements we can make to provide a nicer end to end
story here:

- We can augment the GetRequiredPlugins method on the language
  provider to also return information about an optional server to use
  when downloading the provider.

- We can pass information about a server to download plugins from as
  part of a resource registration or creation of a first class
  provider.

These help out in cases where, for one reason or another, `pulumi
plugin install` doesn't get run before an update takes place, and would
allow us to either do the right thing ahead of time or provide better
error messages with the correct `--server` argument. But, for now,
this unblocks a majority of the cases we care about and provides a
path forward for folks that want to develop and host their own
resource providers.
2019-06-03 09:31:18 -07:00
Luke Hoban 15e924b5cf
Support aliases for renaming, re-typing, or re-parenting resources (#2774)
Adds a new resource option `aliases` which can be used to rename a resource.  When making a breaking change to the name or type of a resource or component, the old name can be added to the list of `aliases` for a resource to ensure that existing resources will be migrated to the new name instead of being deleted and replaced with the new named resource.

There are two key places this change is implemented. 

The first is the step generator in the engine.  When computing whether there is an old version of a registered resource, we now take into account the aliases specified on the registered resource.  That is, we first look up the resource by its new URN in the old state, and then by any aliases provided (in order).  This can allow the resource to be matched as a (potential) update to an existing resource with a different URN.

The second is the core `Resource` constructor in the JavaScript (and soon Python) SDKs.  This change ensures that when a parent resource is aliased, that all children implicitly inherit corresponding aliases.  It is similar to how many other resource options are "inherited" implicitly from the parent.

Five specific scenarios are explicitly tested as part of this PR:
1. Renaming a resource
2. Adopting a resource into a component (as the owner of both component and consumption codebases)
3. Renaming a component instance (as the owner of the consumption codebase without changes to the component)
4. Changing the type of a component (as the owner of the component codebase without changes to the consumption codebase)
5. Combining (1) and (3) to make both changes to a resource at the same time
2019-05-31 23:01:01 -07:00
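A brief sketch of the `aliases` option described above, assuming the Node.js SDK and `@pulumi/aws` (names are illustrative):

```
import * as aws from "@pulumi/aws";

// This resource used to be called "website"; aliasing the old name lets the
// engine match it to the existing state instead of deleting and recreating it.
const bucket = new aws.s3.Bucket("static-site", {}, {
    aliases: [{ name: "website" }],
});
```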
Matt Ellis 9a77d72403 Set Outputs for providers in the state file. (#2793)
We model providers as resources in our state file, but we were
neglecting to set Outputs for these resources.  This was problematic
when we started to try to run DiffConfig, because when diffing a
resource we compare the new inputs and the old outputs, but the
resource never had any old outputs, so it was impossible for the
provider to see what the old state of the resource was.

To fix this, we now reflect the inputs we used to create the provider
reference as outputs on the resource.
2019-05-31 15:14:42 -07:00
Pat Gavlin 6756c7ccec
Use new.{URN,Type,Provider} in applicable Steps. (#2787)
Just what it says on the tin. These changes are in support of the
aliasing work in #2774.
2019-05-30 17:48:00 -07:00
Matt Ellis e8487ad87f Workaround a bug in the kubernetes provider
The Kubernetes provider wanted to return Unimplemented for both
DiffConfig and CheckConfig. However, due to an interaction between the
package we used to construct the error we are returning and the
package we are using to actually construct the gRPC server for the
provider, we ended up in a place where the provider would actually end
up returning an error with code "Unknown", and the /text/ of the
message included information about it being due to the RPC not being
implemented.

So, when we try to call DiffConfig/CheckConfig on the provider, we detect
this case as well and treat messages of this shape as if the provider had
just returned "Unimplemented".
2019-05-29 11:53:10 -07:00
Matt Ellis 261f012223 Correctly handle CheckConfig/DiffConfig and dynamic provider
In 3621c01f4b, we implemented
CheckConfig/DiffConfig incorrectly. We should have explicitly added
the handlers (to suppress the warnings we were getting) but returned an
error saying the RPC was not implemented.  Instead, we just returned
success but passed back bogus data.  This was "fine" at the time
because nothing called these methods.

Now that we are actually calling them, returning incorrect values
leads to errors in grpc. To deal with this we do two things:

1. Adjust the implementations in the dynamic provider to correctly
return not implemented. This allows us to pick up the default engine
behavior going forward.

2. Add some code in CheckConfig/DiffConfig that handle the gRPC error
that is returned when calling methods on the dynamic provider and fall
back to the legacy behavior. This means updating your CLI will not
cause issues for existing resources where the SDK has not been
updated.
2019-05-23 13:34:47 -07:00
Matt Ellis 8397ae447f Implement DiffConfig/CheckConfig for plugins 2019-05-23 13:34:34 -07:00
Matt Ellis f897bf8b4b Flow allowUnknowns for Diff/Check Config
We pass this information for Diff and Check on specific resources, so
we can correctly block unknowns from flowing to plugins during applies.
2019-05-23 10:54:18 -07:00
Matt Ellis e574f33fa0 Include URN as an argument in DiffConfig/CheckConfig
For provider plugins, the gRPC interfaces expect that a URN would be
included as part of the DiffConfig/CheckConfig request, which means we
need to flow this value into our Provider interface.

This change does that.
2019-05-23 10:43:22 -07:00
Matt Ellis 61bff0c3a4 Do not parse version from resource providers
Until we can come up with a solution for #2753, just ignore the
version that comes in as part of a resource monitor RPC.
2019-05-21 19:20:18 -07:00
Matt Ellis 31bd463264 Gracefully handle the case where secrets_provider is uninitialized
A customer reported an issue where operations would fail with the
following error:

```
error: could not deserialize deployment: unknown secrets provider type
```

The problem here was the customer's deployment had a
`secrets_provider` section which looked like the following:

```
"secrets_providers": {
    "type": ""
}
```

And so our code to try to construct a secrets manager from this thing
would fail, as our registry does not contain any information about a
provider with an empty type.

We do two things in this change:

1. When serializing a deployment, if there is no secrets manager,
don't even write the `secrets_provider` block. This helps for cases
where we are roundtripping deployments that did not have a provider
configured (i.e. they were older stacks that did not use secrets)

2. When deserializing, if we see an empty secrets provider like the
above, interpret it to mean "this deployment has no secrets". We set
up a decrypter such that if it ends up having secrets, we panic
eagerly (since this is a logical bug in our system somewhere).
2019-05-21 17:11:54 -07:00
Matt Ellis 4f693af023 Do not pass arguments as secrets to CheckConfig/Configure
Providers from plugins require that configuration values be
strings. This means if we are passing a secret string to a
provider (for example, trying to configure a kubernetes provider based
on some secret kubeconfig) we need to be careful to remove the
"secretness" before actually making the calls into the provider.

Failure to do this resulted in errors saying that the provider
configuration values had to be strings, and of course, logically they
were; they were just marked as secret strings.

Fixes #2741
2019-05-17 16:42:29 -07:00
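An illustrative sketch of the scenario above, assuming `@pulumi/kubernetes` and a hypothetical `kubeconfig` config key: the secret marker is stripped before provider configuration, so CheckConfig/Configure see plain strings.

```
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// requireSecret returns a secret Output<string>; the engine now removes the
// secret marker before sending the value to the provider's CheckConfig and
// Configure RPCs, which only accept plain string configuration values.
const kubeconfig = config.requireSecret("kubeconfig");

const cluster = new k8s.Provider("cluster", { kubeconfig });
```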
Matt Ellis 2cd4409c0d Fix a panic during property diffing
We have to actually return the value we compute instead of just
dropping it on the floor and treating the underlying values as
primitive.

I ran into this during dogfooding, the added test case would
previously panic.
2019-05-15 16:20:25 -07:00
Matt Ellis ccbc84ecc1 Add an additional test case
This was used as a motivating example during an in person discussion
with Luke.
2019-05-15 12:03:48 -07:00
Matt Ellis 4368830448 Rework secret annotation algorithm slightly
We adopt a new algorithm for annotating secrets, which works as
follows:

If the source and destinations are both property maps, annotate their
secrets deeply.

Otherwise, if there is a property in both the input and output arrays
with the same name and the value in the inputs has secrets /anywhere/
in it, mark the output itself a secret.

This means, for example, an array in the inputs with a secret value as
one of its elements will mean the entire array value in the outputs is
marked as a secret. This is done because arrays are often treated as
sets by providers and so we really shouldn't consider ordering. It
also means that if a value is added to the array as part of the
operation, we still mark the new array as a secret even though the
values may not be identical to the inputs.
2019-05-15 09:33:02 -07:00
Matt Ellis af2a2d0f42 Correctly flow secretness across structured values
For providers which do not natively support secrets (which is all of
them today), we annotate output values coming back from the provider
if there is a corresponding secret input in the inputs we passed in.

This logic was not tearing into rich objects, so if you passed a
secret as a member of an array or object into a resource provider, we
would lose the secretness on the way back.

Because of the interaction with Check (where we call Check and then
take the values returned by the provider as inputs for all calls to
Diff/Update), this would apply not only to the Output values of a
resource but also the Inputs (because the secret metadata would not
flow from the inputs of check to the outputs).

This change augments our logic which transfers secrets metadata from
one property map to another to handle these additional cases.
2019-05-15 09:32:25 -07:00
Matt Ellis f705dde7fb Remove acceptsSecrets from InvokeRequest
In our system, we model secrets as outputs with an additional bit of
metadata that says they are secret. For Read and Register resource
calls, our RPC interface says if the client side of the interface can
handle secrets being returned (i.e. the language SDK knows how to
sniff for the special signature and resolve the output with the
special bit set).

For Invoke, we have no such model. Instead, we return a `Promise<T>`
where T's shape has just regular property fields.  There's no place
for us to tack the secretness onto, since there are no Outputs.

So, for now, we don't return secret values back across the invoke
channel. We can still take them as arguments (which is good), but we
cannot return secrets as part of invoke calls. This is not ideal,
but given the way we model these sources, there's no way around
this. Fortunately, the results of these invoke calls are not stored in
the checkpoint, and since the type is not Output<T> it will be clear
that the underlying value is just present in plaintext. A user that
wants to pass the result of an invoke into a resource can turn an
existing property into a secret via `pulumi.secret`.
2019-05-10 17:07:52 -07:00
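A hedged sketch of the workaround mentioned above; `fetchDbCredentials` is a hypothetical invoke-style call returning a Promise:

```
import * as pulumi from "@pulumi/pulumi";

// Hypothetical data-source invoke; its result comes back as plain values
// because invoke responses carry no secret metadata.
declare function fetchDbCredentials(name: string): Promise<{ password: string }>;

// Re-mark the sensitive part as a secret before feeding it to a resource.
const password = pulumi.secret(
    fetchDbCredentials("primary").then(creds => creds.password));
```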
Matt Ellis cb59c21c01 Rename SecretOutputs to AdditionalSecretOutputs
This makes the intention of this field clearer.
2019-05-10 17:07:52 -07:00
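An illustrative one-liner showing where the renamed option surfaces in the Node.js SDK (the resource and output property are arbitrary):

```
import * as aws from "@pulumi/aws";

const queue = new aws.sqs.Queue("jobs", {}, {
    // Treat this output as a secret in state and display, in addition to
    // anything the provider itself marks as secret.
    additionalSecretOutputs: ["url"],
});
```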
Matt Ellis 88012c4d96 Enable "cloud" and "local" secrets managers across the system
We move the implementations of our secrets managers in to
`pkg/secrets` (which is where the base64 one lives) and wire their use
up during deserialization.

It's a little unfortunate that for the passphrase based secrets
manager, we have to require `PULUMI_CONFIG_PASSPHRASE` when
constructing it from state, but we can make more progress with the
changes as they are now, and I think we can come up with some ways to
mitigate this problem a bit (at least make it only a problem for cases
where you are trying to take a stack reference to another stack that
is managed with local encryption).
2019-05-10 17:07:52 -07:00
Matt Ellis db18ee3905 Retain the SecretsManager that was used to deserialize a deployment
We have many cases where we want to do the following:

deployment -> snapshot -> process snapshot -> deployment

We now retain information in the snapshot about the secrets manager
that was used to construct it, so in these round trip cases, we can
re-use the existing manager.
2019-05-10 17:07:52 -07:00
Matt Ellis 480a2f6c9e Augment secret outputs based on per request options 2019-05-10 17:07:52 -07:00
Matt Ellis b606b3091d Allow passing a nil SecretsManager to SerializeDeployment
When nil, it means no information is retained in the deployment about
the manager (as there is none) and any attempt to persist secret
values fails.

This should only be used in cases where the snapshot is known to not
contain secret values.
2019-05-10 17:07:52 -07:00
Matt Ellis 67bb134c28 Don't return serialized outputs from stack.GetRootStackResource
Half of the call sites didn't care about these values and with the
secrets work, the ergonomics of calling this method when it has to
return serialized outputs aren't great. Move the serialization for this
into the CLI itself, as it was the only place that cared to do
this (so it could display things to end users).
2019-05-10 17:07:52 -07:00
Matt Ellis 5cde8e416a Rename base64sm to b64 2019-05-10 17:07:52 -07:00
Matt Ellis d076bad1a5 Remove Config() from backend.Stack
For cloud backed stacks, this was already returning nil and due to the
fact that we no longer include config in the checkpoint for local
stacks, it was nil there as well.

Removing this helps clean stuff up and should make some future
refactorings around custom secret managers easier to land.

We can always add it back later if we miss it (and make it actually do
the right thing!)
2019-05-10 17:07:52 -07:00
Matt Ellis cc74ef8471 Encrypt secret values in deployments
When constructing a Deployment (which is a plaintext representation of
a Snapshot), ensure that we encrypt secret values. To do so, we
introduce a new type `secrets.Manager` which is able to encrypt and
decrypt values. In addition, it is able to reflect information about
itself that can be stored in the deployment such that we can
deserialize the deployment into a snapshot (decrypting the values in
the process) without external knowledge about how it was encrypted.

The ability to do this is important for allowing stack references to
work, since two stacks may not use the same manager (or they will use
the same type of manager, but have different state).

The state value is stored in plaintext in the deployment, so it **must
not** contain sensitive data.

A sample manager, which just base64 encodes and decodes strings, is
provided, as it is useful for testing. We will allow it to be varied
soon.
2019-05-10 17:07:52 -07:00
Matt Ellis 294df77703 Retain secrets for unenlightened providers
When a provider does not natively understand secrets, we need to pass
inputs as raw values, so as not to confuse it.

This leads to a not great experience by default, where we pass raw
values to `Check` and then use the results as the inputs to remaining
operations. This means that by default, we don't end up retaining
information about secrets in the checkpoint, since the call to `Check`
erases all of our information about secrets.

To provide a nicer experience where we don't lose information about
secrets even in cases where providers don't natively understand them,
we take property maps produced by the provider and mark any values in
them as secret if the corresponding input was a secret.

This ensures that any secret property values in the inputs are
reflected back into the outputs, even for providers that don't
understand secrets natively.
2019-05-10 17:07:52 -07:00
Matt Ellis 529645194e Track secrets inside the engine
A new `Secret` property value is introduced, and plumbed across the
engine.

- When Unmarshalling properties /from/ RPC calls, we instruct the
  marshaller to retain secrets, since we now understand them in the
  rest of the engine.

- When Marshalling properties /to/ RPC calls, we use our tracked data
  to understand if the other side of the connection can accept
  secrets. If they can, we marshal them in a similar manner to assets,
  where we have a special object with a signature specific to secrets
  and an underlying value (which is the /plaintext/ value). In cases
  where the other end of the connection does not understand secrets,
  we just drop the metadata and marshal the underlying value as we
  normally would.

- Any secrets that are passed across the engine events boundary are
  presently passed as just `[secret]`.

- When persisting secret values as part of a deployment, we use a rich
  object so that we can track that the value is a secret, but right now the
  underlying value is not actually encrypted.
2019-05-10 17:07:52 -07:00
Matt Ellis 9623293f64 Implement new RPC endpoints 2019-05-10 17:07:52 -07:00
Alex Clemmer cabf660f16 Formally specify querySource with tests 2019-05-02 18:08:08 -07:00
Alex Clemmer c373927b32 Add nodejs support for query mode
In previous commits, we have changed the language plugin protocol to
allow the host to communicate that the plugin is meant to boot in "query
mode." In nodejs, this involves not doing things like registering the
default stack resource. This commit will implement this functionality.
2019-05-02 18:08:08 -07:00
Alex Clemmer 2c7af058de Expose resource outputs through invoke
This commit exposes a new resource `Invoke` operation,
`pulumi:pulumi:readStackResourceOutputs` which retrieves all resource
outputs for some user-specified stack, not including those deleted.

Fixes #2600.
2019-05-02 18:08:08 -07:00
Alex Clemmer ea32fec8f9 Implement query primitives in the engine
`pulumi query` is designed, essentially, as a souped-up `exec`. We
execute a query program, and add a few convenience constructs (e.g., the
default providers that give you access to things like `getStack`).

Early in the design process, we decided to not re-use the `up`/update
path, both to minimize risk to update operations, and to simplify the
implementation.

This commit will add this "parallel query universe" into the engine
package. In particular, this includes:

* `QuerySource`, which executes the language provider running the query
  program, and providing it with some simple constructs, such as the
  default provider, which provides access to `getStack`. This is much
  like a very simplified `EvalSource`, though notably without any of the
  planning/step execution machinery.
* `queryResmon`, which disallows all resource operations, except the
  `Invoke` that retrieves the resource outputs of some stack's last
  snapshot. This is much like a simplified `resmon`, but without any of
  the provider resolution, and without and support for resource
  operations generally.
* Various static functions that pull together miscellaneous things
  needed to execute a query program. Notably, this includes gathering
  language plugins.
2019-05-02 18:08:08 -07:00
Alex Clemmer 1965a38b16 Remove unused property from resmon 2019-05-02 18:08:08 -07:00
Sean Gillespie 2d875e0004
Remove uses of plugins in the snapshot (#2662) 2019-04-23 09:53:44 -07:00
Luke Hoban 0550f71a35
Add an ignoreChanges resource option (#2657)
Fixes #2277.

Adds a new ignoreChanges resource option that allows specifying a list of property names whose values will be ignored during updates. The property values will be used for Create, but will be ignored for purposes of updates, and as a result also cannot trigger replacements.

This is a feature of the Pulumi engine, not of the resource providers, so no new logic is needed in providers to support this feature. Instead, the engine simply replaces the values of input properties in the goal state with old inputs for properties marked as ignoreChanges.

Currently, only top level properties may be specified in ignoreChanges. In the future, this could be extended to support paths to nested properties (including into array elements) with a JSONPath/JMESPath syntax.
2019-04-22 13:54:48 -07:00
Joe Duffy 3b93199f7a Use Outputs instead of merged Inputs+Outputs (#2659)
Fixes #2650.

We have historically relied on merging inputs and outputs in several places in the engine. This used to be necessary, as discussed in #2650 (comment), but our core engine model has moved away from depending on this. However, we still have a couple places we do this merge, and those places have triggered several severe issues recently in subtle cases.

We believe that this merging should no longer be needed for a correct interpretation of the current engine model, and indeed that doing the merge actively violates the contract with providers. In this PR we remove the remaining places where this input + output merge was being done. In all three cases, we use just the Outputs, which for most providers will already include the same values as the inputs - but correctly as determined by the provider itself.
2019-04-22 13:52:36 -07:00
Sean Gillespie bea1bea93f
Load specific provider versions if requested (#2648)
* Load specific provider versions if requested

As part of pulumi/pulumi#2389, we need the ability for language hosts to
tell the engine that a particular resource registration, read, or invoke
needs to use a particular version of a resource provider. This was not
previously possible; the engine prior to this commit loaded
plugins from a default provider map, which was inferred for every
resource provider based on the contents of a user's package.json, and
was itself prone to bugs.

This PR adds the engine support needed for language hosts to request a
particular version of a provider. If this occurs, the source evaluator
specifically records the intent to load a provider with a given version
and produces a "default" provider registration that requests exactly
that version. This allows the source evaluator to produce multiple
default providers for a single package, which was previously not
possible.

This is accomplished by having the source evaluator deal in the
"ProviderRequest" type, which is a tuple of version and package. A
request to load a provider whose version and package match a
previously loaded provider will re-use the existing default provider. If
the version was not previously loaded, a new default provider is
injected.

* CR Feedback: raise error if semver is invalid

* CR: call String() if you want a hash key

* Update pkg/resource/deploy/providers/provider.go

Co-Authored-By: swgillespie <sean@pulumi.com>
2019-04-17 11:25:02 -07:00
Alex Clemmer fac6944781 Warn instead of error when refresh'd resource is unhealthy
Fixes #2633.

Currently when a user runs `refresh` and a resource is in a state of
error, the `refresh` will fail and the resource state will not be
persisted. This can make it vastly harder to incrementally fix
infrastructure. The issue mentioned above explains more of the
historical context, as well as some specific failure modes.

This commit resolves this issue by causing refresh to *not* report an
error in this case, and instead to simply log a warning that the
`refresh` has recognized that the resource is in an unhealthy state
during state sync.
2019-04-10 16:43:33 -07:00
James Nugent edab10e9c8 Use Go Modules for dependency tracking
This commit switches from dep to Go 1.12 modules for tracking Pulumi
dependencies. Rather than _building_ using Go modules, we instead use the `go
mod vendor` command to populate a vendor tree in the same way as `dep ensure`
was previously doing.

In order to prevent checksum mismatches, it was necessary to also update CI to
use Go 1.12 instead of 1.11 - which also necessitated fixing some linting errors
which appeared with the upgraded golangci-lint for 1.12.
2019-04-10 08:37:51 +04:00
Matt Ellis 2ace3e7b0c Correctly handle FileArchives when the filename contains a dot
Our logic for how we handled `.tar.gz` archives meant that any other
type of file that had a dot in the filename would not be detected
correctly.

Fixes #2589
2019-03-28 13:26:07 -07:00
CyrusNajmabadi 3e3e2cbec7
Revert "Revert "Use result.Result pattern in more places. (#2573)" (#2575)" (#2577)
This reverts commit 4abdc88c2e.
2019-03-21 13:23:46 -07:00
CyrusNajmabadi 4abdc88c2e
Revert "Use result.Result pattern in more places. (#2573)" (#2575)
This reverts commit 99496afcfd.
2019-03-21 00:29:34 -07:00
CyrusNajmabadi 99496afcfd
Use result.Result pattern in more places. (#2573) 2019-03-20 18:51:43 -07:00
CyrusNajmabadi f5e7c5fe97
Use result.Result properly (#2572) 2019-03-20 14:56:12 -07:00
CyrusNajmabadi 02369f9d8a
Allows the nodejs launcher to recognize that certain types of errors were printed, ensuring we don't cascade less relevant messages. (#2554) 2019-03-20 11:54:32 -07:00
CyrusNajmabadi c6d87157d9
Use result.Result in more places. (#2568) 2019-03-19 16:21:50 -07:00
CyrusNajmabadi ecb50b9b85
Use interface for 'result.Result' (#2569) 2019-03-19 12:40:10 -07:00
Sean Gillespie 26cc1085b1
Install missing plugins on startup (#2560)
* Install missing plugins on startup

This commit addresses the problem of missing plugins by scanning the
snapshot and language host on startup for the list of required plugins
and, if there are any plugins that are required but not installed,
installs them. The mechanism by which plugins are installed is exactly
the same as 'pulumi plugin install'.

The installation of missing plugins is best-effort and, if it fails,
will not fail the update.

This commit addresses pulumi/pulumi-azure#200, where users using Pulumi
in CI often found themselves missing plugins.

* Add CHANGELOG

* Skip downloading plugins if no client provided

* Reduce excessive test output

* Update Gopkg.lock

* Update pkg/engine/destroy.go

Co-Authored-By: swgillespie <sean@pulumi.com>

* CR: make pluginSet a newtype

* CR: Assign loop induction var to local var
2019-03-15 15:01:37 -07:00
Pat Gavlin d14f47b162
Elide diffs in internal properties (#2543)
Various providers use properties that begin with "__" to represent
internal metadata that should not be presented to the user. These
changes look for such properties and elide them when displaying diffs.
2019-03-11 18:01:48 -07:00
Sean Gillespie 06d4268137
Improve error message when failing to load plugins (#2542)
This commit re-uses an error reporting mechanism previously used when
the plugin loader fails to locate a plugin that is compatible with the
requested plugin version. In addition to specifying what version we
attempted to load, it also outputs a command that will install the
missing plugin.
2019-03-11 22:17:01 +00:00
Matt Ellis 8042adafe5 Add edit.RenameStack
`edit.RenameStack` walks a Snapshot and rewrites all of the parts
where a stack name is present (URNs, the ID of the top level Stack
resource, providers)
2019-03-11 14:44:15 -07:00
Pat Gavlin 7ebd70a3e6
Refresh inputs (#2531)
These changes take advantage of the newly-added support for returning
inputs from Read to update a resource's inputs as part of a refresh.
As a consequence, the Pulumi engine will now properly detect drift
between the actual state of a resource and the desired state described
by the program and generate appropriate update or replace steps.

As part of these changes, a resource's old inputs are now passed to the
provider when performing a refresh. The provider can take advantage of
this to maintain the accuracy of any additional data or metadata in the
resource's inputs that may need to be updated during the refresh.

This is required for the complete implementation of
https://github.com/pulumi/pulumi-terraform/pull/349. Without access to
the old inputs for a resource, TF-based providers would lose all
information about default population during a refresh.
2019-03-11 13:50:00 -07:00
Pat Gavlin 4b33a45561
Filter diff keys based on provider info (#2526)
If a provider returns information about the top-level properties that
differ, use those keys to filter the diffs that are rendered to the
user.

Fixes #2453.
2019-03-06 16:41:19 -08:00
Luke Hoban b6a9814e67
Better log messages for replaces/changes (#2452)
We previously logged the number of replaces and changes returned from a call to Diff, but not the actual properties that were forcing replace.  Several times we've had to debug issues with unexpected replaces being proposed, and this information is very useful to have access to.

Changes the verbose logging to include the property names for both replaces and changes instead of just the count.
2019-02-15 12:02:03 -08:00
Pat Gavlin 6e90ab0341
Add support for explicit delete-before-replace (#2415)
These changes add a new flag to the various `ResourceOptions` types that
indicates that a resource should be deleted before it is replaced, even
if the provider does not require this behavior. The usual
delete-before-replace cascade semantics apply.

Fixes #1620.
2019-01-31 14:27:53 -08:00
Pat Gavlin 128afe3323
Use "discard" when deleting read resources (#2280)
In general, a "delete" in Pulumi is destroying an actual physical
resource. In the case of a read resource, however, the delete is
merely removing the resource from the stack; the physical resource
is not affected. These changes attempt to clarify this situation by
using the term "discard" rather than "delete".

Fixes #2015.
2019-01-31 13:48:44 -08:00
Pat Gavlin 35c60d61eb
Follow up on #2369 (#2397)
- Add support for per-property dependencies to the Go SDK
- Add tests for first-class secret rejection in the checkpoint and RPC
  layers and language SDKs
2019-01-28 17:38:16 -08:00
Pat Gavlin 1ecdc83a33 Implement more precise delete-before-replace semantics. (#2369)
This implements the new algorithm for deciding which resources must be
deleted due to a delete-before-replace operation.

We need to compute the set of resources that may be replaced by a
change to the resource under consideration. We do this by taking the
complete set of transitive dependents on the resource under
consideration and removing any resources that would not be replaced by
changes to their dependencies. We determine whether or not a resource
may be replaced by substituting unknowns for input properties that may
change due to deletion of the resources their value depends on and
calling the resource provider's Diff method.

This is perhaps clearer when described by example. Consider the
following dependency graph:

  A
__|__
B   C
|  _|_
D  E F

In this graph, all of B, C, D, E, and F transitively depend on A. It may
be the case, however, that changes to the specific properties of any of
those resources R that would occur if a resource on the path to A were
deleted and recreated may not cause R to be replaced. For example, the
edge from B to A may be a simple dependsOn edge such that a change to
A does not actually influence any of B's input properties. In that case,
neither B nor D would need to be deleted before A could be deleted.

In order to make the above algorithm a reality, the resource monitor
interface has been updated to include a map that associates an input
property key with the list of resources that input property depends on.
Older clients of the resource monitor will leave this map empty, in
which case all input properties will be treated as depending on all
dependencies of the resource. This is probably overly conservative, but
it is less conservative than what we currently implement, and is
certainly correct.
2019-01-28 09:46:30 -08:00
Pat Gavlin cfe4e127be
Add API types for the V3 checkpoint (#2384)
Resources gain two new fields: `PropertyDependencies` and
`PendingReplacement`. The former maps an input property's name to the
dependencies that may affect the value of that property. The latter is
used to track resources that have been deleted as part of a
delete-before-replace operation but have not yet been recreated.

In addition to the new fields, resource properties may now contain
encrypted first-class secret values. These values are of type `SecretV1`,
where the `Sig` field is set to `resource.SecretSig`.

Finally, the deployment type gains a new field, `SecretsProviders`,
which contains any configuration necessary to handle secrets that may be
present in resource properties.
2019-01-23 13:33:25 -08:00
Matt Ellis a02bfb6469 Include symlink'd regular files in directory archives
When constructing an Archive based off a directory path, we would
ignore any symlinks that we saw while walking the file system
collecting files to include in the archive.

A user reported an issue where they were unable to use the
[sharp](https://www.npmjs.com/package/sharp) library from NPM with a
lambda deployed via Pulumi. The problem was that the library includes
native components, and these native components include a bunch of
`*.so` files. As is common, there's a regular file with a name like
`foo.so.1.2.3` and then symlinks to that file with the names
`foo.so.1.2`, `foo.so.1` and `foo.so`. Consumers of these SOs will
try to load the shorter names, i.e. `foo.so` and expect the symlink to
resolve to the actual library.

When these links are not present, upstack code fails to load.

This change modifies our logic such that if we have a symlink and it
points to a regular file, we include it in the archive. At this time,
we don't try to add it to the archive as a symlink, instead we just
add it as another copy of the regular file. We could explore trying to
include these things as symlinks in archive formats that allow
them (While zip does support this, I'm less sure doing this for zip
files will be a great idea, given the set of tricks we already play to
ensure our zip files are usable across many cloud vendors' serverless
offerings, it feels like throwing symlinks into the mix may end up
causing even more downstream weirdness).

We continue to ignore symlinks which point at directories. In
practice, these seem fairly uncommon and doing so lets us not worry
about trying to deal with ensuring we don't end up chasing our tail in
cases where there are circular references.

While this change is in pulumi/pulumi, the downstream resource
providers will need to update their vendored dependencies in order to
pick this up and have it work end to end.

Fixes #2077
2019-01-18 10:35:36 -08:00
Louis DeJardin f35d4cd017 Small typo in comment
`read` spelled `reead`
2019-01-15 15:11:49 -08:00
Pat Gavlin 24f89e1121
Close plugin context on plan creation failure (#2304)
This ensures that the gRPC server is properly shut down. This fixes an
issue in which a resource plugin that is still configuring could report
log messages to the plugin host, which would in turn attempt to send
diagnostic packets over a closed channel, causing a panic.

Fixes #2170.
2018-12-18 13:25:52 -08:00
CyrusNajmabadi ca8169e344
Use 'output<...>' as our terminology for 'computed' properties. (#2267) 2018-12-02 19:44:50 -08:00
Pat Gavlin ab36b1116f
Handle unconfigured plugins in Diff. (#2238)
After #2088, we began calling `Diff` on providers that are not configured
due to unknown configuration values. This hit an assertion intended to
detect exactly this scenario, which was previously unexpected.

These changes adjust `Diff` to indicate that a Diff is unavailable and
return an error message that describes why. The step generator then
interprets the diff as indicating a normal update and issues the error
message to the diagnostic stream.

Fixes #2223.
2018-11-21 16:53:29 -08:00
Matt Ellis 72d52c6e1f Don't fail on configuration keys like a:config:b:c
Configuration keys are simple namespace/name pairs, delimited by
":". For compatability, we also allow
"<namespace>:config:<name>", but we always record the "nice" name in
`Pulumi.<stack-name>.yaml`.

While `pulumi config` and friends would block setting a key like
`a🅱️c` (where the "name" has a colon in it), it would allow
`a:config:b:c`. However, this would be recorded as `a🅱️c` in
`Pulumi.<stack-name>.yaml`, which meant we'd error when parsing the
configuration file later.

To work around this, disallow ":" in the "name" part of a
configuration key.  With this change the following all work:

```
keyName
my-project:keyName
my-project:config:keyName
```

However, both

`my-project:keyName:subKey`
`my-project:config:keyName:subKey`

are now disallowed.

I considered allowing colons in subkeys, but I think it adds more
confusion (due to the interaction with how we allow you to elide the
project name in the default case) than is worthwhile at this point.
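
A hypothetical sketch of these parsing rules (the function below is illustrative, not the actual CLI code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseConfigKey is an illustrative parser: it accepts "key",
// "namespace:key", and "namespace:config:key", and rejects any key whose
// name part contains a colon.
func parseConfigKey(raw string) (namespace, name string, err error) {
	parts := strings.Split(raw, ":")
	switch {
	case len(parts) == 1:
		return "", parts[0], nil // bare key; namespace defaults to the project
	case len(parts) == 2:
		return parts[0], parts[1], nil
	case len(parts) == 3 && parts[1] == "config":
		return parts[0], parts[2], nil
	default:
		return "", "", fmt.Errorf("invalid config key %q: ':' is not allowed in the name", raw)
	}
}

func main() {
	keys := []string{
		"keyName",
		"my-project:keyName",
		"my-project:config:keyName",
		"my-project:keyName:subKey", // rejected
	}
	for _, k := range keys {
		ns, name, err := parseConfigKey(k)
		fmt.Println(k, "->", ns, name, err)
	}
}
```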

Fixes #2171
2018-11-20 14:14:37 -08:00
Pat Gavlin 676adf62b8
Use an explicit address when dialing plugins (#2224)
This is necessary in order for gRPC's proxy support to properly respect
NO_PROXY.

Fixes #2134.
2018-11-19 13:47:39 -08:00
Pat Gavlin bc08574136
Add an API for importing stack outputs (#2180)
These changes add a new resource to the Pulumi SDK,
`pulumi.StackReference`, that represents a reference to another stack.
This resource has an output property, `outputs`, that contains the
complete set of outputs for the referenced stack. The Pulumi account
performing the deployment that creates a `StackReference`  must have
access to the referenced stack or the call will fail.

This resource is implemented by a builtin provider managed by the engine.
This provider will be used for any custom resources and invokes inside
the `pulumi:pulumi` module. Currently this provider supports only the
`pulumi:pulumi:StackReference` resource.

Fixes #109.
2018-11-14 13:33:35 -08:00
Matt Ellis 992b048dbf Adopt golangci-lint and address issues
We run the same suite of changes that we did on gometalinter. This
ended up catching a few new issues, some of which were addressed and
some of which were baselined.
2018-11-08 14:11:47 -08:00
Joe Duffy 9aedb234af
Tidy up some data structures (#2135)
In preparation for some workspace restructuring, I decided to scratch a
few itches of my own in the code:

* Change project's RuntimeInfo field to just Runtime, to match the
  serialized name in JSON/YAML.

* Eliminate the no-longer-used Context and NoDefaultIgnores fields on
  project, and all of the associated legacy PPC-related code.

* Eliminate the no-longer-used IgnoreFile constant.

* Remove a bunch of "// nolint: lll" annotations, and simply format
  the structures with comments on dedicated lines, to avoid overly
  lengthy lines and lint suppressions.

* Mark Dependencies and InitErrors as `omitempty` in the JSON
  serialization directives for CheckpointV2 files. This was done for
  the YAML directives, but (presumably accidentally) omitted for JSON.
2018-11-01 08:28:11 -07:00
Pat Gavlin b748935753
Do not assert on duplicate resources. (#2127)
Just what it says on the tin.
2018-10-31 10:33:00 -07:00
James Nugent 59a8a7fbfe Add Go 1.10+ versions of archive hashes in tests
Go 1.10 made some breaking changes to the headers in archive/tar [1] and
archive/zip [2], breaking the expected values in tests. In order to keep
tests passing with both, wherever a hardcoded hash is expect we switch
on `runtime.Version()` to select whether we want the Go 1.9 (currently
supported Go version) or later version of the hash.

Eventually these switches should be removed in favour of using the later
version only, so they are liberally commented to explain the reasoning.

[1]: https://golang.org/doc/go1.10#archive/tar
[2]: https://golang.org/doc/go1.10#archive/zip
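
A rough sketch of that version switch (the helper below is illustrative, not the actual test code):

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// expectedHash is an illustrative helper: it picks the hardcoded hash
// appropriate for the Go version running the tests; runtime.Version()
// returns strings like "go1.9.7" or "go1.10.3".
func expectedHash(go19Hash, go110Hash string) string {
	if strings.HasPrefix(runtime.Version(), "go1.9") {
		return go19Hash
	}
	return go110Hash
}

func main() {
	fmt.Println(expectedHash("hash-for-go1.9", "hash-for-go1.10-and-later"))
}
```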
2018-10-30 21:59:36 -05:00
Sean Gillespie ca540cc736 Use math.MaxInt32 to signal unbounded parallelism
Downlevel versions of the Pulumi Node SDK assumed that a parallelism
level of zero implied serial execution, whereas current CLIs use zero to
signal unbounded parallelism. This commit works around the downlevel issue by
using math.MaxInt32 to signal unbounded parallelism.
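
A small sketch of the workaround (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"math"
)

// effectiveParallelism is an illustrative helper: it converts the CLI's
// "0 means unbounded" convention into a value that downlevel Node SDKs
// (which treat 0 as serial) interpret as effectively unbounded.
func effectiveParallelism(requested int) int {
	if requested <= 0 {
		return math.MaxInt32
	}
	return requested
}

func main() {
	fmt.Println(effectiveParallelism(0))  // 2147483647
	fmt.Println(effectiveParallelism(16)) // 16
}
```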
2018-10-29 12:27:03 -07:00
Pat Gavlin 72a6ed320c
Do not propose replacement of providers in preview. (#2088)
Providers with unknown properties are currently considered to require
replacement. This was intended to indicate that we could not be sure
whether or not replacement was required. Unfortunately, this was not a
good user experience, as replacement would never be required at runtime.
This caused quite a bit of confusion--never proposing replacement seems
to be the better option.
2018-10-23 10:23:28 -07:00
Pat Gavlin f465fc0a48
Reorder an error check in the provider registry. (#2078)
The provider registry was checking for a `nil` provider instance before
checking for a non-nil error. This caused the CLI to fail to report
important errors during the plugin load process (e.g. invalid checkpoint
errors) and instead report a failure to find a matching plugin.
2018-10-19 17:22:50 -07:00
Sean Gillespie 3e9b210edd
Default to unbounded parallelism (#2065)
Some providers (namely Kubernetes) require unbounded parallelism in
order to function correctly. This commit enables the engine to operate
in a mode with unbounded parallelism and switches to that mode by
default.
2018-10-17 15:33:26 -07:00
Sean Gillespie 730a929c2b
Add new 'pulumi state' command for editing state (#2024)
* Add new 'pulumi state' command for editing state

This commit adds 'pulumi state unprotect' and 'pulumi state delete', two
commands that can be used to unprotect and delete resources from a
stack's state, respectively.

* Simplify LocateResource

* CR: Print yellow 'warning' before editing state

* Lots of CR feedback

* CR: Only delete protected resources when asked with --force
2018-10-15 09:52:55 -07:00
Sean Gillespie ae87c469c5
Improve error message for missing provider plugins (#2040)
If a version is available, the returned error now includes the version
that was searched for and a command to install the missing plugin.
2018-10-10 15:18:41 -07:00
Pat Gavlin 74df0e67db
Allow previews when operations are pending. (#1999)
The preview will proceed as if the operations had not been issued (i.e.
we will not speculate on a new state for the stack). This is consistent
with our behavior prior to the changes that added pending operations to
the checkpoint.
2018-10-01 09:48:48 -07:00
Sean Gillespie ed0353e251
Process deletions conservatively in parallel (#1963)
* Process deletions conservatively in parallel

This commit allows the engine to conservatively delete resources in
parallel when it is sure that it is legal to do so. In the absence of a
true data-flow oriented step scheduler, this approach provides a
significant improvement over the existing serial deletion mechanism.

Instead of processing deletes serially, this commit will partition the
set of condemned resources into sets of resources that are known to be
legally deletable in parallel. The step executor will then execute those
independent lists of steps one-by-one until all steps are complete.
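
One way to picture the partitioning (this sketch is illustrative and assumes an acyclic dependency graph; it is not the engine's actual scheduler):

```go
package main

import "fmt"

type URN string

// deletionBatches is an illustrative sketch: it repeatedly peels off the
// condemned resources that no other still-condemned resource depends on;
// each batch can be deleted in parallel, and batches run one after another.
// Assumes the dependency graph is acyclic.
func deletionBatches(condemned []URN, dependents map[URN][]URN) [][]URN {
	remaining := map[URN]bool{}
	for _, u := range condemned {
		remaining[u] = true
	}
	var batches [][]URN
	for len(remaining) > 0 {
		var batch []URN
		for u := range remaining {
			ready := true
			for _, d := range dependents[u] {
				if remaining[d] {
					ready = false // a condemned dependent must go first
					break
				}
			}
			if ready {
				batch = append(batch, u)
			}
		}
		for _, u := range batch {
			delete(remaining, u)
		}
		batches = append(batches, batch)
	}
	return batches
}

func main() {
	dependents := map[URN][]URN{"base": {"child"}} // "child" depends on "base"
	fmt.Println(deletionBatches([]URN{"base", "child"}, dependents)) // [[child] [base]]
}
```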

* CR: Make ResourceSet a normal map

* Only use the dependency graph if we can trust it

* Reverse polarity of pendingDeletesAreReplaces

* CR: un-export a few types

* CR: simplify control flow in step generator when scheduling

* CR: parents are dependencies, fix loop index

* CR: Remove ParentOf, add new test for parent dependencies
2018-09-27 15:49:08 -07:00
joeduffy 3468393490 Make a smattering of CLI UX improvements
Since I was digging around over the weekend after the change to move
away from light black, and the impact it had on less important
information showing more prominently than it used to, I took a step
back and did a deeper tidying up of things. Another side goal of this
exercise was to be a little more respectful of terminal width; when
we could say things with fewer words, I did so.

* Stylize the preview/update summary differently, so that it stands
  out as a section. Also highlight the total changes with bold -- it
  turns out this has a similar effect to the bright white colorization,
  just without the negative effects on e.g. white terminals.

* Eliminate some verbosity in the phrasing of change summaries.

* Make all heading sections stylized consistently. This includes
  the color (bright magenta) and the vertical spacing (always a newline
  separating headings). We were previously inconsistent on this (e.g.,
  outputs were under "---outputs---"). Now   the headings are:
  Previewing (etc), Diagnostics, Outputs, Resources, Duration, and Permalink.

* Fix an issue where we'd parent things to "global" until the stack
  object later showed up. Now we'll simply mock up a stack resource.

* Don't show messages like "no change" or "unchanged". Prior to the
  light black removal, these faded into the background of the terminal.
  Now they just clutter up the display. Similar to the elision of "*"
  for OpSames in a prior commit, just leave these out. Now anything
  that's written is actually a meaningful status for the user to note.

* Don't show the "3 info messages," etc. summaries in the Info column
  while an update is ongoing. Instead, just show the latest line. This
  is more respectful of width -- I often find that the important
  messages scroll off the right of my screen before this change.

    For discussion:

        - I actually wonder if we should eliminate the summary
          altogether and always just show the latest line. Or even
          blank it out. The summary feels better suited for the
          Diagnostics section, and the Status concisely tells us
          how a resource's update ended up (failed, succeeded, etc).

        - Similarly, I question the idea of showing only the "worst"
          message. I'd vote for always showing the latest, and again
          leaving it to the Status column for concisely telling the
          user about the final state a resource ended up in.

* Stop prepending "info: " to every stdout/stderr message. It adds
  no value, clutters up the display, and worsens horizontal usage.

* Lessen the verbosity of update headline messages, so that instead
  of e.g. "Previewing update of stack 'x':", we now just say
  "Previewing update (x):".

* Eliminate vertical whitespace in the Diagnostics section. Every
  independent console.out previously was separated by an entire newline,
  which made the section look cluttered to my eyes. These are just
  streams of logs, there's no reason for the extra newlines.

* Colorize the resource headers in the Diagnostic section light blue.

Note that this will change various test baselines, which I will
update next. I didn't want those in the same commit.
2018-09-24 08:43:46 -07:00
joeduffy be9ead855d Eliminate the same prefix
Recently, we eliminated bright black text, which IMHO makes the
"same" lines really stand out more than we want them to. This is
partly just due to the heavyweight nature of the "*" character,
which we precede every line with. This has the effect of making it
toughter to scan the update to see what's going to happen. The goal
of SpecUnimportant (bright black) was that we wanted to draw less
attention to certain elements of the CLI text -- and have them fade
into the background (apparently it was too successful at this ;-))

So, this change eliminates the "*" prefix for same operations
altogether. It reads better to my eyes and keeps the original intent.
2018-09-22 13:34:43 -07:00
Pat Gavlin 8f55a379f6
Return errors for unknown/invalid path archives. (#1970)
- Attempting to read an archive with an unknown type now returns an
  error
- Attempting to read a path archive that is neither a known archive
  format nor a directory now returns an error

Fixes #1529.
Fixes #1953.
2018-09-21 11:53:39 -07:00
Sean Gillespie 2d4a3f7a6a
Move management of root resource state to engine (#1944)
* Protobuf changes

* Move management of root resource state to engine

This commit fixes a persistent side-by-side issue in the NodeJS SDK by
moving the management of root resource state to the engine. Doing so
adds two new endpoints to the Engine gRPC service: 1) GetRootResource
and 2) SetRootResource, which get and set the root resource
respectively.

* Rebase against master, regenerate proto
2018-09-18 11:47:34 -07:00
Sean Gillespie a35aba137b
Retire pending deletions at start of plan (#1886)
* Retire pending deletions at start of plan

Instead of letting pending deletions pile up to be retired at the end of
a plan, this commit eagerly disposes of any pending deletions that were
pending at the end of the previous plan. This is a nice usability win
and also reclaims an invariant that at most one resource with a given
URN is live and at most one is pending deletion at any point in time.

* Rebase against master

* Fix a test issue arising from shared snapshots

* CR feedback

* plan -> replacement

* Use ephemeral statuses to communicate deletions
2018-09-10 16:48:14 -07:00
Pat Gavlin 4a550e308f
Fix provider cancellation. (#1914)
We signal provider cancellation by hanging a goroutine off of the plan
executor's parent context. To ensure clean shutdown, this goroutine also
listens on a channel that closes once the plan has finished executing.
Unfortunately, we were closing this channel too early, and the close was
racing with the cancellation signal. These changes ensure that the
channel closes after the plan has fully completed.

Fixes #1906.
Fixes pulumi/pulumi-kubernetes#185.
2018-09-10 15:18:25 -07:00
Sean Gillespie 679f55c355
Validate type tokens before using them (#1904)
* Validate type tokens before using them

When registering or reading a resource, we take the type token given to
us from the language host and assume that it's valid, which resulted in
assertion failures in various places in the engine. This commit
validates the format of type tokens given to us from the language host
and issues an appropriate error if it's not valid.

Along the way, this commit also improves the way that fatal exceptions
are rendered in the Node language host.

* Pre-allocate an exception for ReadResource

* Fix integration test

* CR Feedback

This commit is a lower-impact change that fixes the bugs associated with
invalid types on component resources and only checks that a type is
valid on custom resources.

* CR Take 2: Fix up IsProviderType instead of fixing call sites

* Please gometalinter
2018-09-07 15:19:18 -07:00
Sean Gillespie ca58b8117f
Clarify control flow in step generator (#1843)
* Introduce Result type to engine

The Result type can be used to signal the failure of a computation due
to both internal and non-internal reasons. If a computation failed due
to an internal error, the Result type carries that error with it and
provides it when the 'Error' method on a Result is called. If a
computation failed gracefully, but wished to bail instead of continuing a
doomed plan, the 'Error' method provides a value of null.

* CR feedback
2018-09-05 15:08:09 -07:00
Pat Gavlin df1a5e653d
Fail refreshes with init errors. (#1882)
And ensure that refreshes continue on errors.

Fixes #1881.
2018-09-05 14:00:28 -07:00
Alex Clemmer dea68b8b37 Implement status sinks
This commit reverts most of #1853 and replaces it with functionally
identical logic, using the notion of status message-specific sinks.

In other words, where the original commit implemented ephemeral status
messages by adding an `isStatus` parameter to most of the logging
methods in pulumi/pulumi, this implements ephemeral status messages as a
parallel logging sink, which emits _only_ ephemeral status messages.

The original commit message in that PR was:

> Allow log events to be marked "status" events
>
> This commit will introduce a field, IsStatus to LogRequest. A "status"
> logging event will be displayed in the Info column of the main
> display, but will not be printed out at the end, when resource
> operations complete.
>
> For example, for complex resource initialization, we'd like to display
> a series of intermediate results: [1/4] Service object created, for
> example. We'd like these to appear in the Info column, but not at the
> end, where they are not helpful to the user.
2018-08-31 15:56:53 -07:00
Alex Clemmer 9e58fd1aaa Revert "Plumb LogRequest.IsStatus through the logging subsystem"
This reverts commit 3066cbcbd7.
2018-08-31 15:56:53 -07:00
Alex Clemmer 3066cbcbd7 Plumb LogRequest.IsStatus through the logging subsystem 2018-08-30 17:17:20 -07:00
Pat Gavlin 0634c1852a
Fix some potential bugs in the refresher. (#1845)
- Create all refresh steps before issuing any. This is important as the
  state update loop expects all steps to exist.
- Check for cancellation later in the refresher.

This also fixes races in the SnapshotManager and the test journal that
could cause panics during cancellation.
2018-08-29 21:00:05 -07:00
Alex Clemmer 2fa98a8dad Generate empty update steps for partial failures
This commit will greatly improve the experience of dealing with partial
failures by simply re-trying to initialize the relevant resources on
every subsequent `pulumi up`, instead of printing a list of reasons the
resource had previously failed to initialize.

As motivation, consider our behavior in the following common, painful
scenario:

  * The user creates a `Service` and a `Deployment`.
  * The `Pod`s in the `Deployment` fail to become live. This causes the
    `Service` to fail, since it does not target any live `Pod`s.
  * The user fixes the `Deployment`. A run of `pulumi up` sees the
    `Pod`s successfully initialize.
  * Users will expect that the `Service` is now in a state of success,
    as the `Pod`s it targets are alive. But, because we don't update the
    `Service` by default, it perpetually exists in a state of error.
  * The user is now required to change some trivial feature of the
    `Service` just to trigger an update, so that we can see it succeed.

There are many situations like this. Another very common one is waiting
for test `Pod`s that are meant to successfully complete when some object
becomes live.

By triggering an empty update step for all resources that have any
initialization errors, we avoid all problems like this.

This commit will implement this empty-update semantics for partial
failures, as well as fix the display UX to correctly render the diff in
these cases.
2018-08-28 18:00:35 -07:00
Pat Gavlin 73f4f2c464
Reimplement refresh. (#1814)
Replace the Source-based implementation of refresh with a phase that
runs as the first part of plan execution and rewrites the snapshot in-memory.

In order to fit neatly within the existing framework for resource operations,
these changes introduce a new kind of step, RefreshStep, to represent
refreshes. RefreshSteps operate similar to ReadSteps but do not imply that
the resource being read is not managed by Pulumi.

In addition to the refresh reimplementation, these changes incorporate those
from #1394 to run refresh in the integration test framework.

Fixes #1598.
Fixes pulumi/pulumi-terraform#165.
Contributes to #1449.
2018-08-22 17:52:46 -07:00
Pat Gavlin 91e20289a8
Remove deploy.PlanSummary. (#1809)
Nothing actually uses this information.
2018-08-21 19:33:59 -07:00
Pat Gavlin 734f7beba7
Refactor plan execution in preparation for refresh. (#1801)
These changes simplify a couple aspects of plan execution in the hopes of
clarifying some responsibilities and preparing the code for changes to the
implementation of refresh.

1. All aspects of plan execution are now managed by the plan executor,
   which is no longer exported. Instead, it is abstracted behind
   `Plan.Execute`.
2. The plan executor's error-handling and reporting have been unified
   and simplified somewhat.
2018-08-21 14:05:00 -07:00
Sean Gillespie d1524e1081
Log errors coming from the language host (#1780)
* Log errors coming from the language host

Similar to pulumi/pulumi#1762, fixes pulumi/pulumi#1775. The language
host can fail without issuing any diagnostics and it is very unclear
what happens if the engine does not log the error.

* CR feedback
2018-08-20 16:41:07 -07:00
Matt Ellis acf0fb278a Fix weird interactions due to Cobra and glog
The glog package forces the use of golang's underlying flag package,
which Cobra does not use. To work around this, we had a complicated
dance around defining flags in multiple places, calling flag.Parse
explicitly and then stomping values in the flag package with values we
got from Cobra.

Because we ended up parsing parts of the command line twice, each with
a different set of semantics, we ended up with bad UX in some
cases. For example:

`$ pulumi -v=10 --logflow update`

Would fail with an error message that looked nothing like normal CLI
errors, where as:

`$ pulumi -v=10 update --logflow`

Would behave as you expect. To address this, we now do two things:

- We never call flag.Parse() anymore. Whacking the flags with values we
  got from Cobra is sufficient for what we care about.

- We use a forked copy of glog which does not complain when
  flag.Parse() is not called before logging.

Fixes #301
Fixes #710
Fixes #968
2018-08-20 14:08:40 -07:00
CyrusNajmabadi 89d5dd004e
Do not add the same file multiple times to a zip or tar file. (#1783) 2018-08-15 22:44:55 -07:00
Sean Gillespie d83774ca96
Fix two issues that can cause hangs during refresh (#1770)
1. 'readID' was never assigned to and was always the default value,
leading the refresh source to believe a resource was deleted
2. The refresh source could hang when a resource is deleted.
2018-08-13 21:43:10 -07:00
Sean Gillespie d9c56eab23
Emit diagnostics for errors coming from step generation (#1766)
The plan executor assumed that the step generator was responsible for
logging its own diagnostics, which it sort-of is but also doesn't log a
majority of the diagnostics that come out of it. This commit logs all
errors coming out of step generation so that we don't unintentionally
drop errors.
2018-08-13 13:41:01 -07:00
Sean Gillespie 491bcdc602
Add a list of in-flight operations to the deployment (#1759)
* Add a list of in-flight operations to the deployment

This commit augments 'DeploymentV2' with a list of operations that are
currently in flight. This information is used by the engine to keep
track of whether or not a particular deployment is in a valid state.

The SnapshotManager is responsible for inserting and removing operations
from the in-flight operation list. When the engine registers an intent
to perform an operation, SnapshotManager inserts an Operation into this
list and saves it to the snapshot. When an operation completes, the
SnapshotManager removes it from the snapshot. From this, the engine can
infer that if it ever sees a deployment with pending operations, the
Pulumi CLI must have crashed or otherwise abnormally terminated before
seeing whether or not an operation completed successfully.

To remedy this state, this commit also adds code to 'pulumi stack
import' that clears all pending operations from a deployment, as well as
code to plan generation that will reject any deployments that have
pending operations present.

At the CLI level, if we see that we are in a state where pending
operations were in-flight when the engine died, we'll issue a
human-friendly error message that indicates which resources are in a bad
state and how to recover their stack.
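
An approximate, simplified sketch of that shape (field names and JSON keys here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ResourceV2 is heavily simplified for this sketch; only the URN is shown.
type ResourceV2 struct {
	URN string `json:"urn"`
}

// OperationType names the kind of operation that was in flight.
type OperationType string

const (
	OpCreating OperationType = "creating"
	OpDeleting OperationType = "deleting"
)

// OperationV2 pairs a resource with the operation that was pending on it.
type OperationV2 struct {
	Resource ResourceV2    `json:"resource"`
	Type     OperationType `json:"type"`
}

// DeploymentV2 carries pending operations alongside the resource list.
type DeploymentV2 struct {
	Resources         []ResourceV2  `json:"resources,omitempty"`
	PendingOperations []OperationV2 `json:"pending_operations,omitempty"`
}

func main() {
	d := DeploymentV2{PendingOperations: []OperationV2{{
		Resource: ResourceV2{URN: "urn:pulumi:dev::proj::aws:s3/bucket:Bucket::logs"},
		Type:     OpCreating,
	}}}
	b, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(b))
}
```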

* CR: Multi-line string literals, renaming in-flight -> pending

* CR: Add enum to apitype for operation type, also name status -> type for clarity

* Fix the yaml type

* Fix missed renames

* Add implementation for lifecycle_test.go

* Rebase against master
2018-08-10 21:39:59 -07:00
Alex Clemmer a172f1a048 Implement partial Read
Some time ago, we introduced the concept of the initialization error to
Pulumi (i.e., an error where the resource was successfully created but
failed to fully initialize). This was originally implemented in the `Create`
and `Update` methods of the resource provider interface; when we
detected an initialization failure, we'd pack the live version of the
object into the error, and return that to the engine.

Omitted from this initial implementation was a similar semantics for
`Read`. There are many implications of this, but one of them is that a
`pulumi refresh` will erase any initialization errors that had
previously been observed, even if the initialization errors still exist
in the resource.

This commit will introduce the initialization error semantics to `Read`,
fixing this issue.
2018-08-10 15:10:14 -07:00
Sean Gillespie 623e5f1be2
Emit reads for external resources when refreshing (#1749)
* Emit reads for external resources when refreshing

Fixes pulumi/pulumi#1744. This commit educates the refresh source about
external resources. If a refresh source encounters a resource with the
External bit set, it'll send a Read event to the engine and the engine
will process it accordingly.

* CR: save last event channel instead of last event, style fixes
2018-08-09 12:45:39 -07:00
Sean Gillespie 9a5f4044fa
Serialize SourceEvents coming from the refresh source (#1725)
* Serialize SourceEvents coming from the refresh source

The engine requires that a source event coming from a source be "ready
to execute" at the moment that it is sent to the engine. Since the
refresh source sent all goal states eagerly through its source iterator,
the engine assumed that it was legal to execute them all in parallel and
did so. This is a problem for the snapshot, since the snapshot expects
to be in an order that is a legal topological ordering of the dependency
DAG.

This PR fixes the issue by sending refresh source events one-at-a-time
through the refresh source iterator, only unblocking to send the next
step as soon as the previous step completes.

* Fix deadlock in refresh test

* Fix an issue where the engine "completed" steps too early

By signalling that a step is done before committing the step's results
to the snapshot, the engine was left with a race where dependent
resources could find themselves completely executed and committed before
a resource that they depend on has been committed.

Fixes pulumi/pulumi#1726

* Fix an issue with Replace steps at the end of a plan

If the last step that was executed successfully was a Replace, we could
end up in a situation where we unintentionally left the snapshot
invalid.

* Add a test

* CR: pass context.Context as first parameter to Iterate

* CR: null->nil
2018-08-08 13:45:48 -07:00
Pat Gavlin 5513f08669
Do not call Read in read steps with unknown IDs. (#1734)
This is consistent with the behavior prior to the introduction of Read
steps. In order to avoid a breaking change we must do this check in the
engine itself, which causes a bit of a layering violation: because IDs
are marshaled as raw strings rather than PropertyValues, the engine must
check against the marshaled form of an unknown directly (i.e.
`plugin.UnknownStringValue`).
2018-08-08 12:06:20 -07:00
Pat Gavlin 94802f5c16
Fix deletes with duplicate URNs. (#1716)
When calculating deletes, we will only issue a single delete step for a
particular URN. This is incorrect in the presence of pending deletes
that share URNs with a live resource if the pending deletes follow the
live resource in the checkpoint: instead of issuing a delete for
every resource with a particular URN, we will only issue deletes for
the pending deletes.

Before first-class providers, this was mostly benign: any remaining
resources could be deleted by re-running the destroy. With the
first-class provider changes, however, the provider for the undeleted
resources will be deleted, leaving the checkpoint in an invalid state.

These changes fix this issue by allowing the step generator to issue
multiple deletes for a single URN and add a test for this scenario.
2018-08-07 11:01:08 -07:00
Pat Gavlin a222705143
Implement first-class providers. (#1695)
### First-Class Providers
These changes implement support for first-class providers. First-class
providers are provider plugins that are exposed as resources via the
Pulumi programming model so that they may be explicitly and multiply
instantiated. Each instance of a provider resource may be configured
differently, and configuration parameters may be sourced from the
outputs of other resources.

### Provider Plugin Changes
In order to accommodate the need to verify and diff provider
configuration and configure providers without complete configuration
information, these changes adjust the high-level provider plugin
interface. Two new methods for validating a provider's configuration
and diffing changes to the same have been added (`CheckConfig` and
`DiffConfig`, respectively), and the type of the configuration bag
accepted by `Configure` has been changed to a `PropertyMap`.

These changes have not yet been reflected in the provider plugin gRPC
interface. We will do this in a set of follow-up changes. Until then,
these methods are implemented by adapters:
- `CheckConfig` validates that all configuration parameters are string
  or unknown properties. This is necessary because existing plugins
  only accept string-typed configuration values.
- `DiffConfig` either returns "never replace" if all configuration
  values are known or "must replace" if any configuration value is
  unknown. The justification for this behavior is given
  [here](https://github.com/pulumi/pulumi/pull/1695/files#diff-a6cd5c7f337665f5bb22e92ca5f07537R106)
- `Configure` converts the config bag to a legacy config map and
  configures the provider plugin if all config values are known. If any
  config value is unknown, the underlying plugin is not configured and
  the provider may only perform `Check`, `Read`, and `Invoke`, all of
  which return empty results. We justify this behavior because it is
  only possible during a preview and provides the best experience we
  can manage with the existing gRPC interface.

### Resource Model Changes
Providers are now exposed as resources that participate in a stack's
dependency graph. Like other resources, they are explicitly created,
may have multiple instances, and may have dependencies on other
resources. Providers are referred to using provider references, which
are a combination of the provider's URN and its ID. This design
addresses the need during a preview to refer to providers that have not
yet been physically created and therefore have no ID.

All custom resources that are not themselves providers must specify a
single provider via a provider reference. The named provider will be
used to manage that resource's CRUD operations. If a resource's
provider reference changes, the resource must be replaced. Though its
URN is not present in the resource's dependency list, the provider
should be treated as a dependency of the resource when topologically
sorting the dependency graph.

Finally, `Invoke` operations must now specify a provider to use for the
invocation via a provider reference.
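
As a hedged illustration of the provider-reference idea (the type and separator below are assumptions, not necessarily the engine's exact representation):

```go
package main

import (
	"fmt"
	"strings"
)

// ProviderRef is a sketch: it combines a provider resource's URN with its ID
// so the provider can be referred to even before it has been physically
// created. The real representation may differ.
type ProviderRef struct {
	URN string
	ID  string
}

func (r ProviderRef) String() string { return r.URN + "::" + r.ID }

func parseProviderRef(s string) (ProviderRef, error) {
	i := strings.LastIndex(s, "::")
	if i < 0 {
		return ProviderRef{}, fmt.Errorf("malformed provider reference %q", s)
	}
	return ProviderRef{URN: s[:i], ID: s[i+2:]}, nil
}

func main() {
	ref := ProviderRef{URN: "urn:pulumi:dev::proj::pulumi:providers:aws::default", ID: "example-id"}
	parsed, err := parseProviderRef(ref.String())
	fmt.Println(parsed.URN, parsed.ID, err)
}
```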

### Engine Changes
First-class providers support requires a few changes to the engine:
- The engine must have some way to map from provider references to
  provider plugins. It must be possible to add providers from a stack's
  checkpoint to this map and to register new/updated providers during
  the execution of a plan in response to CRUD operations on provider
  resources.
- In order to support updating existing stacks using existing Pulumi
  programs that may not explicitly instantiate providers, the engine
  must be able to manage the "default" providers for each package
  referenced by a checkpoint or Pulumi program. The configuration for
  a "default" provider is taken from the stack's configuration data.

The former need is addressed by adding a provider registry type that is
responsible for managing all of the plugins required by a plan. In
addition to loading plugins from a checkpoint and providing the ability
to map from a provider reference to a provider plugin, this type serves
as the provider plugin for providers themselves (i.e. it is the
"provider provider").

The latter need is solved via two relatively self-contained changes to
plan setup and the eval source.

During plan setup, the old checkpoint is scanned for custom resources
that do not have a provider reference in order to compute the set of
packages that require a default provider. Once this set has been
computed, the required default provider definitions are conjured and
prepended to the checkpoint's resource list. Each resource that
requires a default provider is then updated to refer to the default
provider for its package.

While an eval source is running, each custom resource registration,
resource read, and invoke that does not name a provider is trapped
before being returned by the source iterator. If no default provider
for the appropriate package has been registered, the eval source
synthesizes an appropriate registration, waits for it to complete, and
records the registered provider's reference. This reference is injected
into the original request, which is then processed as usual. If a
default provider was already registered, the recorded reference is
used and no new registration occurs.

### SDK Changes
These changes only expose first-class providers from the Node.JS SDK.
- A new abstract class, `ProviderResource`, can be subclassed and used
  to instantiate first-class providers.
- A new field in `ResourceOptions`, `provider`, can be used to supply
  a particular provider instance to manage a `CustomResource`'s CRUD
  operations.
- A new type, `InvokeOptions`, can be used to specify options that
  control the behavior of a call to `pulumi.runtime.invoke`. This type
  includes a `provider` field that is analogous to
  `ResourceOptions.provider`.
2018-08-06 17:50:29 -07:00
Sean Gillespie 0d0129e400
Execute non-delete step chains in parallel (#1673)
* Execute chains of steps in parallel

Fixes pulumi/pulumi#1624. Since register resource steps are known to be
ready to execute the moment the engine sees them, we can effectively
parallelize all incoming step chains. This commit adds the machinery
necessary to do so - namely a step executor and a plan executor.

* Remove dead code

* CR: use atomic.Value to be explicit about what values are atomically loaded and stored

* CR: Initialize atomics to 'false'

* Add locks around data structures in event callbacks

* CR: Add DegreeOfParallelism method on Options and add comment on select in Execute

* CR: Use context.Context for cancellation instead of cancel.Source

* CR: improve cancellation

* Rebase against master: execute read steps in parallel

* Please gometalinter

* CR: Inline a few methods in stepExecutor

* CR: Feedback and bug fixes

1. Simplify step_executor.go by 'bubbling' up errors as far as possible
and reporting diagnostics and cancellation in one place
2. Fix a bug where the CLI claimed that a plan was cancelled even if it
wasn't (it just has an error)

* Comments

* CR: Add comment around problematic select, move workers.Add outside of goroutine, return instead of break
2018-08-06 16:46:17 -07:00
Matt Ellis 645ca2eb56 Allow more types for runtimeOptions
Instead of just allowing booleans, type the map as string->interface{}
so in the future we could pass string or number options to the
language host.
2018-08-06 14:00:58 -07:00
Matt Ellis ce5eaa8343 Support TypeScript in a more first-class way
This change lets us set runtime specific options in Pulumi.yaml, which
will flow as arguments to the language hosts. We then teach the nodejs
host that when the `typescript` is set to `true` that it should load
ts-node before calling into user code. This allows using typescript
natively without an explicit compile step outside of Pulumi.

This works even when a tsconfig.json file is not present in the
application and should provide a nicer inner loop for folks writing
typescript (I'm pretty sure everyone has run into the "but I fixed
that bug!  Why isn't it getting picked up?  Oh, I forgot to run tsc"
problem).

Fixes #958
2018-08-06 14:00:58 -07:00
Sean Gillespie 48aa5e73f8
Save resources obtained from ".get" in the snapshot (#1654)
* Protobuf changes to record dependencies for read resources

* Add a number of tests for read resources, especially around replacement

* Place read resources in the snapshot with "external" bit set

Fixes pulumi/pulumi#1521. This commit introduces two new step ops: Read
and ReadReplacement. The engine generates Read and ReadReplacement steps
when servicing ReadResource RPC calls from the language host.

* Fix an omission of OpReadReplace from the step list

* Rebase against master

* Transition to use V2 Resources by default

* Add a semantic "relinquish" operation to the engine

If the engine observes that a resource is read and also that the
resource exists in the snapshot as a non-external resource, it will not
delete the resource if the IDs of the old and new resources match.

* Typo fix

* CR: add missing comments, DeserializeDeployment -> DeserializeDeploymentV2, ID check
2018-08-03 14:06:00 -07:00
CyrusNajmabadi 58ae413bca
Do not add a newline between a stream of info messages. The contract is that these will just be appended contiguously. (#1688) 2018-08-02 10:55:15 -04:00
Sean Gillespie c4f08db78b
Fix an issue where we fail to catch duplicate URNs in the same plan (#1687) 2018-08-01 21:32:29 -07:00
Alex Clemmer b0c383e1e8 Add URN argument to HostClient#Log
This will make it possible to correlate log messages to specific
resources.
2018-08-01 11:00:48 -07:00
Sean Gillespie ad3b931bea
Merge pull request #1664 from pulumi/swgillespie/resource-break
Round out the breaking changes for ResourceV2
2018-07-27 18:44:37 -07:00
Pat Gavlin d70d36b887
Handle non-{unknown,partial} statuses in Create/Update. (#1663)
For example resource.StatusOK. Unenlightened providers may return errors
that result in this status; we should not crash if this is the case.
2018-07-27 16:08:11 -07:00
Sean Gillespie 48d095d66f Remove "down" operation for migration - it is not semantically valid in
general.
2018-07-27 14:52:28 -07:00
Alex Clemmer f037c7d143 Checkpoint resource initialization errors
When a resource fails to initialize (i.e., it is successfully created,
but fails to transition to a fully-initialized state), and a user
subsequently runs `pulumi update` without changing that resource, our
CLI will fail to warn the user that this resource is not initialized.

This commit begins the process of allowing our CLI to report this by
storing a list of initialization errors in the checkpoint.
2018-07-20 17:59:06 -07:00
Sean Gillespie 80c28c00d2
Add a migrate package for migrating to and from differently-versioned API types (#1647)
* Add a migrate package for migrating to and from differently-versioned
API types

* travis: gofmt -s deployment_test.go
2018-07-20 13:31:41 -07:00
Sean Gillespie 57ae7289f3
Work around a potentially bad assert in the engine (#1644)
* Work around a potentially bad assert in the engine

The engine asserts if presented with a plan that deletes the same URN
more than once. This has been empirically proven to be possible, so I am
removing the assert.

* CR: Add log for multiple pending-delete deletes
2018-07-18 15:48:55 -07:00
Alex Clemmer d182525fec Add signal cancellation to resource provider 2018-07-15 11:05:44 -10:00
Sean Gillespie 89f2f8abb5
[Parallelism] Introduce a "step generator" component in the engine (#1622)
* [Parallelism] Introduce a "step generator" component by refactoring all
step generation logic out of PlanIterator

* CR: remove dead fields on PlanIterator
2018-07-12 14:24:38 -07:00
CyrusNajmabadi 4761a32cc1
Add support for providing a log stream-id to our RPC interface. (#1627) 2018-07-11 15:04:00 -07:00
Alex Clemmer 582a002ccb FIX: Build break in partial update integration test 2018-07-06 18:03:42 -07:00
Alex Clemmer db9277f934
Merge pull request #1590 from hausdorff/hausdorff/partial-update
Add partial state update
2018-07-06 16:14:49 -07:00
Alex Clemmer 456deaf442 Small cleanups and comments 2018-07-06 15:57:08 -07:00
Joe Duffy b9a0907161
Support empty text assets (#1599)
Per #1481, we unintentionally disallowed empty text assets, simply
because of the way we did discriminated union testing.  It's easy
enough to just treat the empty asset like an empty string asset.
2018-07-05 14:30:35 -07:00
Sean Gillespie 1cbf8bdc40 Partial status for resource providers
This commit adds CLI support for resource providers to provide partial
state upon failure. For resource providers that model resource
operations across multiple API calls, the Provider RPC interface can now
accommodate saving bags of state for resource operations that failed.
This is a common pattern for Terraform-backed providers that try to do
post-creation steps on a resource as part of Create or Update resource
operations.
2018-07-02 13:32:23 -07:00
Pat Gavlin 0c8dedec22
Issue an error when attempting to read an invalid Asset. (#1528)
As an alternative we could consider simply returning an empty `Blob`.
I think it's better to fail loudly, though, as this situation can result
from unexpected/invalid archives (which we need to do a better job of
validating).

Fixes #1493.
2018-06-18 11:08:17 -07:00
Matthew Riley bdb3e98aa0 Use an even more reasonable modtime in ZIP entries
The ZIP format started with MS-DOS dates, which start in 1980. Other dates
have been layered on, but the ZIP file handler used by Azure websites still
relies on the MS-DOS dates.

Using the Unix epoch here (1970) results in ZIP entries that (e.g.)
OSX `unzip` sees as 12-31-1969 (timezones) but Azure websites sees as
01/01/2098.
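
An illustrative sketch of stamping ZIP entries with a post-1980 date via `SetModTime` (used instead of the `Modified` field for Go 1.9 compatibility):

```go
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"time"
)

func main() {
	var buf bytes.Buffer
	w := zip.NewWriter(&buf)

	// Illustrative only: stamp the entry with a date that MS-DOS-based
	// readers can represent (the format starts at 1980); SetModTime keeps
	// Go 1.9 compatibility.
	fh := &zip.FileHeader{Name: "example.txt"}
	fh.SetModTime(time.Date(1990, 1, 1, 0, 0, 0, 0, time.UTC))

	f, _ := w.CreateHeader(fh)
	fmt.Fprint(f, "hello")
	_ = w.Close()

	fmt.Println(buf.Len(), "bytes written")
}
```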
2018-06-11 19:36:04 -07:00
Pat Gavlin e6849a283f Appease linters.
- Fix a couple self-assignment issues in the Go language support
- Disable `megacheck` for `fh.SetModTime`, which we use for go1.9
  compat.
2018-06-11 14:32:27 -07:00
joeduffy 7d8995991b Support Pulumi programs written in Go
This adds rudimentary support for Pulumi programs written in Go.  It
is not complete yet but the basic resource registration works.

Note that, stylistically speaking, Go is a bit different from our other
languages.  This made it a bit easier to build this initial prototype,
since what we want is actually a rather thin veneer atop our existing
RPC interfaces.  The lack of generics, however, adds some friction and
is something I'm continuing to hammer on; this will most likely lead to
little specialized types (e.g. StringOutput) once the dust settles.

There are two primary components:

1) A new language host, `pulumi-language-go`, which is responsible for
   communicating with the engine through the usual gRPC interfaces.
   Because Go programs are pre-compiled, it very simply loads a binary
   with the same name as the project.

2) A client SDK library that Pulumi programs bind against.  This exports
   the core resource types -- including assets -- properties -- including
   output properties -- and configuration.

Most remaining TODOs are marked as such in the code, and this will not
be merged until they have been addressed, and some better tests written.
2018-06-08 10:36:10 -07:00
Matthew Riley 9916cd5e6b
Merge pull request #1475 from pulumi/zip-header
Use a reasonable value for Modified in ZIP headers
2018-06-07 14:40:10 -07:00
Matthew Riley 2fe284a139 Use a reasonable value for Modified in ZIP headers
Azure websites can't extract the archive without this.

We can't use the `Modified` field of `FileHeader` because it was added in
Go 1.10.
2018-06-07 14:21:03 -07:00
Sean Gillespie b5e4d87687
Improve the error message when data source invocations fail (#1472) 2018-06-07 11:21:38 -07:00
Sean Gillespie 924c49d7e0
Fail fast when attempting to load a too-new or too-old deployment (#1382)
* Error when loading a deployment that is not a version that the CLI understands

* Add a test for 'pulumi stack import' on a badly-versioned deployment

* Move current deployment version to 'apitype'

* Rebase against master

* CR: emit CLI-friendly error message at the two points outside of the engine calling 'DeserializeDeployment'
2018-05-25 13:29:59 -07:00
Sean Gillespie 1a51507206
Delete Before Create (#1365)
* Delete Before Create

This commit implements the full semantics of delete before create. If a
resource is replaced and requires deletion before creation, the engine
will use the dependency graph saved in the snapshot to delete all
resources that depend on the resource being replaced prior to the
deletion of the resource to be replaced.

* Rebase against master

* CR: Simplify the control flow in makeRegisterResourceSteps

* Run Check on new inputs when re-creating a resource

* Fix an issue where the planner emitted benign but incorrect deletes of DBR-deleted resources

* CR: produce the list of dependent resources in dependency order and iterate over the list in reverse

* CR: deps->dependents, fix an issue with DependingOn where duplicate nodes could be added to the dependent set

* CR: Fix an issue where we were considering old defaults and new inputs
inappropriately when re-creating a deleted resource

* CR: save 'iter.deletes[urn]' as a local, iterate starting at cursorIndex + 1 for dependency graph
2018-05-23 14:43:17 -07:00
joeduffy 5967259795 Add license headers 2018-05-22 15:02:47 -07:00
Sean Gillespie 7b7870cdaa
Remove unused stack name from deploy.Snapshot (#1386) 2018-05-18 11:15:35 -07:00
Sean Gillespie 68911900fd
Graceful shutdown (#1320)
* Graceful RPC shutdown: CLI side

* Handle unavailable resource monitor in language hosts

* Fix a comment

* Don't commit package-lock.json

* fix mangled pylint pragma

* Rebase against master and fix Gopkg.lock

* Code review feedback

* Fix a race between closing the callerEventsOpt channel and terminating a goroutine that writes to it

* glog -> logging
2018-05-16 15:37:34 -07:00
CyrusNajmabadi 72e00810c4
Filter the logs we emit to glog so that we don't leak out secrets. (#1371) 2018-05-15 15:28:00 -07:00
Joe Duffy 369c619ab9
Skip loading language plugins when not needed (#1367)
In pulumi/pulumi#1356, we observed that we can fail during a destroy
because we attempt to load the language plugin, which now eagerly looks
for the @pulumi/pulumi package.

This is also blocking ingestion of the latest engine bits into the PPC.

It turns out that for destroy (and refresh), we have no need for the
language plugin.  So, let's skip loading it when appropriate.
2018-05-14 20:32:53 -07:00
Joe Duffy 457c34ff50
Add a PULUMI_DEV flag, and suppress warnings (#1361)
This change suppresses the warning

    warning: resource plugin aws is expected to have version >=0.11.3,
        but has 0.11.1-dev-1523506162-g06ec765; the wrong version may
        be on your path, or this may be a bug in the plugin

when the PULUMI_DEV envvar is set to a truthy value.

This warning keeps popping up in demos since I'm always using dev builds
and I'd like a way to shut it off, even though this can legitimately
point out a problem.  Eventually I'll switch to official builds but,
until then, it seems worth having a simple way to suppress it.
2018-05-11 20:59:01 -07:00
Pat Gavlin bb5b7da650 Revert "Revert the changes from #1261." 2018-05-08 11:46:15 -07:00
Pat Gavlin 97ace29ab1
Begin tracing Pulumi API calls. (#1330)
These changes enable tracing of Pulumi API calls.

The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.

In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
2018-05-07 18:23:03 -07:00
CyrusNajmabadi 092696948d
Restore streaming of plugin outputs to the progress display. (#1333) 2018-05-07 15:11:52 -07:00
Joe Duffy f92eb0a4e8 Run Configure calls in parallel (#1321)
As of this change, the engine will run all Configure calls in parallel.
This improves startup performance, since otherwise, we would block
waiting for all plugins to be configured before proceeding to run a
program.  Empirically, this is about 1.5-2s for AWS programs, and
manifests as a delay between the purple "Previewing update of stack"
being printed, and the corresponding grey "Previewing update" message.

This is done simply by using a Goroutine for Configure, and making sure
to synchronize on all actual CRUD operations.  I toyed with using double
checked locking to eliminate lock acquisitions -- something we may want
to consider as we add more fine-grained parallelism -- however, I've
kept it simple to avoid all the otherwise implied memory model woes.

I made the judgment call that GetPluginInfo may proceed before
Configure has settled.  (Otherwise, we'd immediately call it and block
after loading the plugin, obviating the parallelism benefits.)  I also
made the judgment call to do this in the engine, after flip flopping
several times about possibly making it a provider's own decision.
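
A minimal sketch of the idea, assuming a per-provider channel that closes once Configure settles (names are illustrative, not the engine's actual types):

```go
package main

import (
	"fmt"
	"time"
)

// provider kicks off its Configure call in a goroutine; CRUD operations wait
// on the configured channel so they never run against an unconfigured plugin.
type provider struct {
	configured chan struct{}
	configErr  error
}

func newProvider() *provider {
	p := &provider{configured: make(chan struct{})}
	go func() {
		defer close(p.configured)
		time.Sleep(50 * time.Millisecond) // stand-in for the real Configure RPC
	}()
	return p
}

func (p *provider) Create() error {
	<-p.configured // synchronize on configuration before any CRUD operation
	if p.configErr != nil {
		return p.configErr
	}
	fmt.Println("create runs only after configure has settled")
	return nil
}

func main() {
	p := newProvider()
	_ = p.Create()
}
```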
2018-05-04 14:29:47 -07:00
Pat Gavlin 639e605bed Revert the changes from #1261.
Restore the provider-first diff logic these changes disabled.

Part of #1251.
2018-05-01 10:01:18 -07:00
Luke Hoban 7dac925bcf
Fix handling of nested archives (#1283)
AssetArchives which include nested archives were not embedding content from the nested archive underneath the key of the nested archive.

Fixes #1272.
2018-04-27 17:10:50 -07:00
Sean Gillespie 14baf866f6
Snapshot management overhaul and refactor (#1273)
* Refactor the SnapshotManager interface

Lift snapshot management out of the engine by delegating it to the
SnapshotManager implementation in pkg/backend.

* Add a event interface for plugin loads and use that interface to record plugins in the snapshot

* Remove dead code

* Add comments to Events

* Add a number of tests for SnapshotManager

* CR feedback: use a successful bit on 'End' instead of having a separate 'Abort' API

* CR feedback

* CR feedback: register plugins one-at-a-time instead of the entire state at once
2018-04-25 17:20:08 -07:00
pat@pulumi.com ca2e996428 Work around issue #1251.
This issue arises because the behavior we're currently getting from Diff
for TF-based providers differs from the behavior we expect. We are
presenting the provider with the old state and new inputs. If the old
state contains output properties that differ from the new inputs, the
provider will detect a diff where we may expect no changes.

Rather than deferring to the provider for all diffs, these changes only
defer to the provider if a legacy diff was detected (i.e. there is some
difference between the old and new provider-calculated inputs).
2018-04-24 10:35:27 -07:00
pat@pulumi.com 42b67a4dbc Fix a couple of typos.
- Restore a dropped "update"
- %s/Resouce/Resource/g
2018-04-20 11:09:54 -07:00
Matt Ellis cc938a3bc8 Merge remote-tracking branch 'origin/master' into ellismg/identity 2018-04-20 01:56:41 -04:00
joeduffy bac58d7922 Respond to CR feedback
Incorporate feedback from @swgillespie and @pgavlin.
2018-04-18 11:46:37 -07:00
joeduffy b77403b4bb Implement a refresh command
This change implements a `pulumi refresh` command.  It operates a bit
like `pulumi update`, and friends, in that it supports `--preview` and
`--diff`, along with the usual flags, and will update your checkpoint.

It works through substitution of the deploy.Source abstraction, which
generates a sequence of resource registration events.  This new
deploy.RefreshSource takes in a prior checkpoint and will walk it,
refreshing the state via the associated resource providers by invoking
Read for each resource encountered, and merging the resulting state with
the prior checkpoint, to yield a new resource.Goal state.  This state is
then fed through the engine in the usual ways with a few minor caveats:
namely, although the engine must generate steps for the logical
operations (permitting us to get nice summaries, progress, and diffs),
it mustn't actually carry them out because the state being imported
already reflects reality (a deleted resource has *already* been deleted,
so of course the engine need not perform the deletion).  The diffing
logic also needs to know how to treat the case of refresh slightly
differently, because we are going to be diffing outputs and not inputs.

Note that support for managed stacks is not yet complete, since that
requires updates to the service to support a refresh endpoint.  That
will be coming soon ...
2018-04-18 10:57:16 -07:00
Matt Ellis b5129dba19 Don't leak a file from asset_test.go 2018-04-18 04:54:02 -07:00
Matt Ellis bac02f1df1 Remove the need to pulumi init for the local backend
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation that the next
set of changes to stop having `pulumi init` be used for cloud stacks
as well.

Previously, `pulumi init` logically did two things:

1. It created the bookkeeping directory for local stacks, this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we belived the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.

2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com

The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.

However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somewhere.

For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.

For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.
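
A sketch of deriving such a file name under the scheme described here (the exact naming format is an assumption):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"path/filepath"
)

// workspaceSettingsFile is an illustrative helper: it combines the project
// name with a SHA1 of the project file's absolute path, so two projects with
// the same name but different locations get distinct settings files. The
// real file-name format may differ.
func workspaceSettingsFile(projectName, projectPath string) string {
	sum := sha1.Sum([]byte(projectPath))
	return fmt.Sprintf("%s-%x-workspace.json", projectName, sum[:])
}

func main() {
	path, _ := filepath.Abs("Pulumi.yaml")
	fmt.Println(workspaceSettingsFile("my-project", path))
}
```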

This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).

With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
2018-04-18 04:53:49 -07:00