Commit graph

1857 commits

Author SHA1 Message Date
Pat Gavlin
fa05e5cb05
Migrate old providers without outputs. (#2973)
If we encounter a provider with old inputs but no old outputs when reading
a checkpoint file, use the old inputs as the old outputs. This handles the
scenario where the CLI is being upgraded from a version that did not
reflect provider inputs to provider outputs, and a provider is being
upgraded from a version that did not implement `DiffConfig` to a version
that does.

Fixes https://github.com/pulumi/pulumi-kubernetes/issues/645.
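
A minimal Go sketch of the migration, using a hypothetical `providerState` type and `migrateProvider` helper rather than the engine's real checkpoint types:

```go
package main

import "fmt"

// providerState is a hypothetical, simplified stand-in for a provider
// resource as read from a checkpoint file.
type providerState struct {
	Inputs  map[string]interface{}
	Outputs map[string]interface{}
}

// migrateProvider backfills missing outputs from the old inputs so that
// providers written by older CLI versions still have old outputs for
// DiffConfig to compare against.
func migrateProvider(p *providerState) {
	if len(p.Outputs) == 0 && len(p.Inputs) > 0 {
		p.Outputs = make(map[string]interface{}, len(p.Inputs))
		for k, v := range p.Inputs {
			p.Outputs[k] = v
		}
	}
}

func main() {
	p := &providerState{Inputs: map[string]interface{}{"kubeconfig": "..."}}
	migrateProvider(p)
	fmt.Println(p.Outputs) // map[kubeconfig:...]
}
```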
2019-07-23 13:39:21 -07:00
Alex Clemmer
ed5b8437d1 Batch policy violation reporting for pulumi preview
Currently, `pulumi preview` fails immediately when any resource
definition in a Pulumi app is found to be in violation of a resource
policy. But, users would like `preview` to report as many policy
violations as it can before terminating with an error, so that they can
fix many of them before running `preview` again.

This commit will thus change `pulumi preview` to do this sort of
"batching" for policy violations. The engine will attempt to run the
entire preview step, validating every resource definition with the
relevant known resource policies, before finally reporting an error if
any violations are detected.

Fixes pulumi/pulumi-policy#31
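
An illustrative sketch of the batching behavior, with hypothetical `violation` and `previewAll` names standing in for the engine's real analyzer plumbing: violations are collected across the whole preview and reported together at the end.

```go
package main

import (
	"errors"
	"fmt"
)

// violation is a hypothetical record of a resource that failed a policy.
type violation struct {
	URN    string
	Policy string
	Reason string
}

// previewAll validates every resource and reports all violations at once
// instead of failing on the first one.
func previewAll(urns []string, check func(urn string) []violation) error {
	var all []violation
	for _, urn := range urns {
		all = append(all, check(urn)...)
	}
	if len(all) == 0 {
		return nil
	}
	for _, v := range all {
		fmt.Printf("policy violation: %s: %s (%s)\n", v.URN, v.Policy, v.Reason)
	}
	return errors.New("preview failed: one or more policy violations were detected")
}

func main() {
	check := func(urn string) []violation {
		if urn == "urn:pulumi:dev::app::aws:s3/bucket:Bucket::logs" {
			return []violation{{URN: urn, Policy: "no-public-buckets", Reason: "ACL is public-read"}}
		}
		return nil
	}
	err := previewAll([]string{
		"urn:pulumi:dev::app::aws:s3/bucket:Bucket::logs",
		"urn:pulumi:dev::app::aws:s3/bucket:Bucket::data",
	}, check)
	fmt.Println(err)
}
```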
2019-07-22 20:42:17 -07:00
Alex Clemmer
b9c349419d Don't hide policy violation errors in CLI rendering
The current CLI update view attributes all policy violation errors to
the root Stack resource. This commit will attribute them to the resource
that violated the policy.

The reason for this mis-attribution is a simple bookkeeping error:

* Resource policies intercept and prevent RegisterResource requests when
  the resource in question violates some policy.
* The CLI "tree" view of resources "hides" rows for resources that have
  not been registered. Thus, if a policy violation occurs for a
  resource, it becomes "orphaned" and is attributed to the stack,
  because there is no row for the resource that violates the policy.

The solution, thus, is to simply set the "hidden" flag to false when we
encounter a policy violation.

Fixes pulumi/pulumi-policy#25
2019-07-22 20:42:17 -07:00
Paul Stack
67194cddfd
Creation of generator package (#2970)
Fixes: #2151

This will allow us to share the code that generates our
language providers. Currently there is a copy of the python code
generation in pulumi-kubernetes and also in pulumi-terraform

We want to be able to share these
2019-07-22 17:09:35 -07:00
Chris Smith
f6379fae05
Fix 'pulumi new' to support creating stacks in an org (#2950)
* Fix 'pulumi new' to support creating stacks in an org

* Fix tests
2019-07-22 10:12:26 -07:00
Alex Clemmer
716a69ced2 Keep unknowns when marshalling resources in call to Analyze
Fixes pulumi/pulumi-aws#661.
2019-07-19 12:23:36 -07:00
Luke Hoban
3768e5c690
Python Dynamic Providers (#2900)
Dynamic providers in Python.

This PR uses [dill](https://pypi.org/project/dill/) for code serialization, along with a customization to help ensure deterministic serialization results.

One notable limitation - which I believe is a general requirement of Python - is that any serialization of Python functions must serialize byte code, and byte code is not safely versioned across Python versions.  So any resource created with Python `3.x.y` can only be updated by exactly the same version of Python.  This is very constraining, but it's not clear there is any other option within the realm of what "dynamic providers" are as a feature.  It is plausible that we could ensure that updates which only update the serialized provider can avoid calling the dynamic provider operations, so that version updates could still be accomplished.  We can explore this separately.

```py
from pulumi import ComponentResource, export, Input, Output
from pulumi.dynamic import Resource, ResourceProvider, CreateResult, UpdateResult
from typing import Optional
from github import Github, GithubObject

auth = "<auth token>"
g = Github(auth)

class GithubLabelArgs(object):
    owner: Input[str]
    repo: Input[str]
    name: Input[str]
    color: Input[str]
    description: Optional[Input[str]]
    def __init__(self, owner, repo, name, color, description=None):
        self.owner = owner
        self.repo = repo
        self.name = name
        self.color = color
        self.description = description

class GithubLabelProvider(ResourceProvider):
    def create(self, props):
        l = g.get_user(props["owner"]).get_repo(props["repo"]).create_label(
            name=props["name"],
            color=props["color"],
            description=props.get("description", GithubObject.NotSet))
        return CreateResult(l.name, {**props, **l.raw_data}) 
    def update(self, id, _olds, props):
        l = g.get_user(props["owner"]).get_repo(props["repo"]).get_label(id)
        l.edit(name=props["name"],
               color=props["color"],
               description=props.get("description", GithubObject.NotSet))
        return UpdateResult({**props, **l.raw_data})
    def delete(self, id, props):
        l = g.get_user(props["owner"]).get_repo(props["repo"]).get_label(id)
        l.delete()

class GithubLabel(Resource):
    name: Output[str]
    color: Output[str]
    url: Output[str]
    description: Output[str]
    def __init__(self, name, args: GithubLabelArgs, opts = None):
        full_args = {'url':None, 'description':None, 'name':None, 'color':None, **vars(args)}
        super().__init__(GithubLabelProvider(), name, full_args, opts)

label = GithubLabel("foo", GithubLabelArgs("lukehoban", "todo", "mylabel", "d94f0b"))

export("label_color", label.color)
export("label_url", label.url)
```


Fixes https://github.com/pulumi/pulumi/issues/2902.
2019-07-19 10:18:25 -07:00
Alex Clemmer
0850c88a97 Address comments 2019-07-16 00:58:33 -07:00
Alex Clemmer
4c069d5cf6 Address lint warnings 2019-07-16 00:58:33 -07:00
Alex Clemmer
9f809b9122 Run required policies as part of all updates 2019-07-16 00:58:33 -07:00
Alex Clemmer
826e6a1cca Add pulumi policy apply command 2019-07-16 00:58:33 -07:00
Alex Clemmer
c7e1f19733 Allow Pulumi service HTTP client to unmarshal body as raw bytes
This commit allows the Pulumi service HTTP client to deserialize
HTTP responses whose bodies are encoded as `application/octet-stream`
into `[]byte`.

This fixes a small bug that causes the HTTP client to fail under these
circumstances, as it expects any body to be JSON-deserializable.
2019-07-16 00:58:33 -07:00
Alex Clemmer
3fc03167c5 Implement cmd/run-policy-pack
This command will cause `pulumi policy publish` to behave in much the
same way `pulumi up` does -- if the policy program is in TypeScript, we
will use ts-node to attempt to compile in-process before executing, and
fall back to plain-old node.

We accomplish this by moving `cmd/run/run.ts` into a generic helper
package, `runtime/run.ts`, which slightly generalizes the use cases
supported (notably, allowing us to exec some program outside of the
context of a Pulumi stack).

This new package is then called by both `cmd/run/index.ts` and
`cmd/run-policy-pack/index.ts`.
2019-07-16 00:58:33 -07:00
Alex Clemmer
c93a860574 Add PolicyPack abstraction with Publish verb
This commit will implement the core business logic of `pulumi policy
publish` -- code to boot an analyzer, ask it for metadata about the
policies it contains, pack the code, and transmit all of this to the
Pulumi service.
2019-07-16 00:58:33 -07:00
Alex Clemmer
9e9f7f07d3 Re-introduce pkg/util/archive
When a user runs `pulumi policy publish`, we need to package up a
directory of code and send it to the service. We implemented this once
before, for PPCs, so this simply re-introduces that code as it was in
the commit that deleted it.
2019-07-16 00:58:33 -07:00
Alex Clemmer
fc80eaaa3d Implement GetAnalyzerInfo in analyzer plugin 2019-07-16 00:58:33 -07:00
Paul Stack
02ffff8840
Addition of Custom Timeouts (#2885)
* Plumbing the custom timeouts from the engine to the providers

* Plumbing the CustomTimeouts through to the engine and adding test to show this

* Change the provider proto to include individual timeouts

* Plumbing the CustomTimeouts from the engine through to the Provider RPC interface

* Change how the CustomTimeouts are sent across RPC

These errors were spotted in testing. We can now see that the timeout
information is arriving in the RegisterResourceRequest

```
req=&pulumirpc.RegisterResourceRequest{
           Type:                    "aws:s3/bucket:Bucket",
           Name:                    "my-bucket",
           Parent:                  "urn:pulumi:dev::aws-vpc::pulumi:pulumi:Stack::aws-vpc-dev",
           Custom:                  true,
           Object:                  &structpb.Struct{},
           Protect:                 false,
           Dependencies:            nil,
           Provider:                "",
           PropertyDependencies:    {},
           DeleteBeforeReplace:     false,
           Version:                 "",
           IgnoreChanges:           nil,
           AcceptSecrets:           true,
           AdditionalSecretOutputs: nil,
           Aliases:                 nil,
           CustomTimeouts:          &pulumirpc.RegisterResourceRequest_CustomTimeouts{
               Create:               300,
               Update:               400,
               Delete:               500,
               XXX_NoUnkeyedLiteral: struct {}{},
               XXX_unrecognized:     nil,
               XXX_sizecache:        0,
           },
           XXX_NoUnkeyedLiteral: struct {}{},
           XXX_unrecognized:     nil,
           XXX_sizecache:        0,
       }
```

* Changing the design to use strings

* CHANGELOG entry to include the CustomTimeouts work

* Changing custom timeouts to be passed around the engine as converted value

We don't want to pass strings around internally - the user can provide them, but we want
the engine to be aware of each timeout in seconds as a float64
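
A small sketch of the string-to-seconds conversion described in the last bullet, assuming the user-facing value is a duration string that `time.ParseDuration` can handle (the function name and zero-means-default convention are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// parseTimeout converts a user-provided duration string (e.g. "5m" or "400s")
// into the float64 seconds the engine passes around internally.
func parseTimeout(s string) (float64, error) {
	if s == "" {
		return 0, nil // no custom timeout: use the provider default
	}
	d, err := time.ParseDuration(s)
	if err != nil {
		return 0, fmt.Errorf("invalid timeout %q: %w", s, err)
	}
	return d.Seconds(), nil
}

func main() {
	create, _ := parseTimeout("5m")
	update, _ := parseTimeout("400s")
	fmt.Println(create, update) // 300 400
}
```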
2019-07-16 00:26:28 +03:00
Erin Krengel
433c7ed7e5
add GetPolicyPackResponse (#2923) 2019-07-12 13:10:05 -07:00
Pat Gavlin
e1a52693dc
Add support for importing existing resources. (#2893)
A resource can be imported by setting the `import` property in the
resource options bag when instantiating a resource. In order to
successfully import a resource, its desired configuration (i.e. its
inputs) must not differ from its actual configuration (i.e. its state)
as calculated by the resource's provider.

There are a few interesting state transitions hiding here when importing
a resource:
1. No prior resource exists in the checkpoint file. In this case, the
   resource is simply imported.
2. An external resource exists in the checkpoint file. In this case, the
   resource is imported and the old external state is discarded.
3. A non-external resource exists in the checkpoint file and its ID is
   different from the ID to import. In this case, the new resource is
   imported and the old resource is deleted.
4. A non-external resource exists in the checkpoint file, but the ID is
   the same as the ID to import. In this case, the import ID is ignored
   and the resource is treated as it would be in all cases except for
   changes that would replace the resource. In that case, the step
   generator issues an error that indicates that the import ID should be
   removed: were we to move forward with the replace, the new state of
   the stack would fall under case (3), which is almost certainly not
   what the user intends.

Fixes #1662.
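
A condensed sketch of the four cases above, with hypothetical types standing in for the step generator's real state; as described, case 4 only errors when the planned change would replace the resource:

```go
package main

import "fmt"

// oldResource is a hypothetical view of what the checkpoint already holds for
// a given URN.
type oldResource struct {
	exists   bool
	external bool
	id       string
}

// importDecision mirrors the four cases described above.
func importDecision(o oldResource, importID string, wouldReplace bool) (string, error) {
	switch {
	case !o.exists:
		return "import", nil // case 1: nothing in the checkpoint
	case o.external:
		return "import; discard old external state", nil // case 2
	case o.id != importID:
		return "import new resource; delete old resource", nil // case 3
	case wouldReplace:
		// case 4, replacement requested: refuse and ask for the import ID to be removed
		return "", fmt.Errorf("resource would be replaced; remove the import ID")
	default:
		return "treat as a normal update; ignore the import ID", nil // case 4
	}
}

func main() {
	fmt.Println(importDecision(oldResource{}, "i-123", false))
	fmt.Println(importDecision(oldResource{exists: true, id: "i-123"}, "i-123", true))
}
```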
2019-07-12 11:12:01 -07:00
Pat Gavlin
760db30117
Fix yarn overrides in the test framework. (#2925)
The section for manual resolutions is named "resolutions", not
"overrides".
2019-07-11 16:16:22 -07:00
Pat Gavlin
4dd4943ef3
Fix indentation when rendering detailed diffs. (#2915)
Just what it says on the tin. The indent needs to be bumped by one when
rendering these diffs.
2019-07-09 15:48:36 -07:00
Pat Gavlin
ed46891693
Compute nested diffs in translateDetailedDiff. (#2911)
Instead of simply converting a detailed diff entry that indicates an
update to an entire composite value as a simple old/new value diff,
compute the nested diff. This allows us to render a per-element diff
for the nested object rather than simply displaying the new and the old
composite values.

This is necessary in order to improve diff rendering once
pulumi/pulumi-terraform#403 has been rolled out.
2019-07-08 16:33:21 -07:00
Matt Ellis
f11f4f7498
Merge pull request #2890 from Charliekenney23/only-print-emojis-in-interactive
Don't print emojis in non-interactive mode
2019-07-02 16:48:32 -07:00
Mikhail Shilkov
bc542e2dc4 Allow specifying a local path to templates for pulumi new (#2884)
* Allow specifying a local path to templates for pulumi new

* Add CHANGELOG

* gofmt

* Add tests

* Linting error
2019-07-01 14:40:55 -07:00
Mikhail Shilkov
e30e6208a0 Normalize Windows paths for directory archive (#2887)
* Normalize Windows paths for directory archive

* Changelog

* Remove the redundant check
2019-07-02 00:04:24 +03:00
Pat Gavlin
6e5c4a38d8
Defer all diffs to resource providers. (#2849)
These changes make a subtle but critical adjustment to the process the
Pulumi engine uses to determine whether or not a difference exists
between a resource's actual and desired states, and adjust the way this
difference is calculated and displayed accordingly.

Today, the Pulumi engine gets the first chance to decide whether or not
there is a difference between a resource's actual and desired states. It
does this by comparing the current set of inputs for a resource (i.e.
the inputs from the running Pulumi program) with the last set of inputs
used to update the resource. If there is no difference between the old
and new inputs, the engine decides that no change is necessary without
consulting the resource's provider. Only if there are changes does the
engine consult the resource's provider for more information about the
difference. This can be problematic for a number of reasons:

- Not all providers do input-input comparison; some do input-state
  comparison
- Not all providers are able to update the last deployed set of inputs
  when performing a refresh
- Some providers--either intentionally or due to bugs--may see changes
  in resources whose inputs have not changed

All of these situations are confusing at the very least, and the first
is problematic with respect to correctness. Furthermore, the display
code only renders diffs it observes rather than rendering the diffs
observed by the provider, which can obscure the actual changes detected
at runtime.

These changes address both of these issues:
- Rather than comparing the current inputs against the last inputs
  before calling a resource provider's Diff function, the engine calls
  the Diff function in all cases.
- Providers may now return a list of properties that differ between the
  requested and actual state and the way in which they differ. This
  information will then be used by the CLI to render the diff
  appropriately. A provider may also indicate that a particular diff is
  between old and new inputs rather than old state and new inputs.

Fixes #2453.
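
An illustrative sketch of the new flow, with a hypothetical `Provider` interface: the engine always calls `Diff` and uses the provider's per-property results for display, rather than pre-filtering on input equality.

```go
package main

import "fmt"

// PropertyDiff is a hypothetical per-property diff a provider can report,
// including whether old inputs (rather than old state) were compared.
type PropertyDiff struct {
	Kind      string // e.g. "add", "update", "delete"
	InputDiff bool   // true if the diff is between old and new inputs
}

// Provider is a minimal stand-in for the engine's provider plugin interface.
type Provider interface {
	Diff(urn string, oldState, newInputs map[string]interface{}) (map[string]PropertyDiff, error)
}

// diffResource always consults the provider, even when the old and new
// inputs look identical to the engine, and returns the provider's detailed
// diff for the CLI to render.
func diffResource(p Provider, urn string, oldState, newInputs map[string]interface{}) (bool, map[string]PropertyDiff, error) {
	detailed, err := p.Diff(urn, oldState, newInputs)
	if err != nil {
		return false, nil, err
	}
	return len(detailed) > 0, detailed, nil
}

// tagDiffProvider is a toy provider that reports a diff on the "tags" property.
type tagDiffProvider struct{}

func (tagDiffProvider) Diff(urn string, oldState, newInputs map[string]interface{}) (map[string]PropertyDiff, error) {
	if fmt.Sprint(oldState["tags"]) != fmt.Sprint(newInputs["tags"]) {
		return map[string]PropertyDiff{"tags": {Kind: "update"}}, nil
	}
	return nil, nil
}

func main() {
	changed, detailed, _ := diffResource(tagDiffProvider{},
		"urn:pulumi:dev::app::aws:s3/bucket:Bucket::logs",
		map[string]interface{}{"tags": "a"},
		map[string]interface{}{"tags": "b"})
	fmt.Println(changed, detailed)
}
```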
2019-07-01 12:34:19 -07:00
Charles Kenney
9b4a85d57c dont print emojis in non-interactive mode 2019-06-30 01:35:19 -04:00
Chris Smith
997516a7b8
Persist engine events in batches (#2860)
* Add EngineEventsBatch type

* Persist engine events in batches

* Reenable ee_perf test

* Limit max concurrent EE requests

* Address PR feedback
2019-06-28 09:40:21 -07:00
Chris Smith
f59a934044
Add EngineEventsBatch type (#2858) 2019-06-23 16:56:09 -07:00
CyrusNajmabadi
7b8421f0b2
Fix crash when there were multiple duplicate aliases to the same resource. (#2865) 2019-06-23 02:16:18 -07:00
Erin Krengel
93c2736e2d
move requiredPolicies to UpdateProgramResponse (#2850) 2019-06-19 16:42:02 -07:00
CyrusNajmabadi
ef3cad6bf1
Reads should not cause resources to be displayed in our progress display (#2844) 2019-06-18 15:38:32 -07:00
Matt Ellis
eb3a7d0a7a Fix up some spelling errors
@keen99 pointed out that newer versions of golangci-lint were failing
due to some spelling errors. This change fixes them up. We also now
have a work item to track moving to a newer golangci-lint tool in
the future.

Fixes #2841
2019-06-18 15:30:25 -07:00
Matt Ellis
0b4d94a239
Merge pull request #2813 from dreamteam-gg/backend-config
Backend setting in project config
2019-06-14 08:31:28 -07:00
Alex Clemmer
0fc4bc7885 Remove policy ID from policy API 2019-06-13 17:39:30 -07:00
Matt Ellis
e0f7dc17cd Guard against proj.Backend being nil 2019-06-13 16:26:31 -07:00
Alex Clemmer
8b7d329c69 Use Analyzer PB in analyzer code 2019-06-13 16:04:13 -07:00
Artem Yarmoluk
f1b5fb6e0f
Backend setting in project config
Signed-off-by: Artem Yarmoluk <koolgen@gmail.com>
2019-06-13 20:02:03 +03:00
Matt Ellis
1e519ce150 Fix a bug when logging into bucket urls
Currently if you log into s3://bucket/subdirectory, Pulumi will write
files to s3://bucket/.pulumi and not s3://bucket/subdirectory/.pulumi;
this change corrects that.
2019-06-10 13:43:17 -07:00
Pat Gavlin
dfa120e6b4
Check for provider changes in mustWrite (#2805)
Recent changes to default provider semantics and the addition of
resource aliases allow a resource's provider reference to change even if
the resource itself is considered to have no diffs. `mustWrite` did not
expect this scenario, and indeed asserted against it. These changes
update `mustWrite` to detect such changes and require that the
checkpoint be written if and when they occur.

Fixes #2804.
2019-06-05 16:27:26 -07:00
Erin Krengel
96f82f004f
Ekrengel/pac apitypes (#2798) 2019-06-04 10:27:53 -07:00
Alex Clemmer
02788b9b32 Implement listResourceOutputs in the Node.js SDK
This commit will expose the new `Invoke` routine that lists resource
outputs through the Node.js SDK.

This is implemented via a new API, `EnumerablePromise`, which is a
collection of simple query primitives built onto the `Promise` API. The
query model is lazy and LINQ-like, and generally intended to make
`Promise` simpler to deal with in query scenarios. See #2601 for more
details.

Fixes #2600.
2019-06-03 14:56:49 -07:00
Alex Clemmer
bcc17c8768 Return errors from query programs through the console 2019-06-03 14:56:49 -07:00
Sean Gillespie
2870518a64 Refine resource replacement logic for providers (#2767)
This commit touches an intersection of a few different provider-oriented
features that combined to cause a particularly severe bug that made it
impossible for users to upgrade provider versions without seeing
their resources replaced.

For some context, Pulumi models all providers as resources and places
them in the snapshot like any other resource. Every resource has a
reference to the provider that created it. If a Pulumi program does not
specify a particular provider to use when performing a resource
operation, the Pulumi engine injects one automatically; these are called
"default providers" and are the most common ways that users end up with
providers in their snapshot. Default providers can be identified by
their name, which is always prefixed with "default".

Recently, in an effort to make the Pulumi engine more flexible with
provider versions, it was made possible for the engine to have multiple
default providers active for a provider of a particular type, which was
previously not possible. Because a provider is identified as a tuple of
package name and version, it was difficult to find a name for these
duplicate default providers that did not cause additional problems. The
provider versioning PR gave these default providers a name that was
derived from the version of the package. This proved to be a problem,
because when users upgraded from one version of a package to another,
this changed the name of their default provider which in turn caused all
of their resources created using that provider (read: everything) to be
replaced.

To combat this, this PR introduces a rule that the engine will apply
when diffing a resource to determine whether or not it needs to be
replaced: "If a resource's provider changes, and both old and new
providers are default providers whose properties do not require
replacement, proceed as if there were no diff." This allows the engine
to gracefully recognize and recover when a resource's default provider changes
names, as long as the provider's config has not changed.
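
The rule quoted above reduces to a small predicate; the sketch below is illustrative rather than the engine's actual implementation, and `isDefault` simply checks the name prefix described earlier.

```go
package main

import (
	"fmt"
	"strings"
)

// providerChangeRequiresReplace sketches the rule: if both the old and new
// providers are default providers and the provider's configuration does not
// itself require replacement, a changed provider reference alone is not
// treated as a diff.
func providerChangeRequiresReplace(oldProvider, newProvider string, configNeedsReplace bool) bool {
	isDefault := func(name string) bool { return strings.HasPrefix(name, "default") }
	if isDefault(oldProvider) && isDefault(newProvider) && !configNeedsReplace {
		return false // proceed as if there were no diff
	}
	return true
}

func main() {
	// Upgrading a package version renames the default provider, but its
	// config is unchanged, so nothing gets replaced.
	fmt.Println(providerChangeRequiresReplace("default_1_2_0", "default_1_3_0", false))      // false
	fmt.Println(providerChangeRequiresReplace("default_1_2_0", "my-explicit-provider", false)) // true
}
```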
2019-06-03 12:16:31 -07:00
Matt Ellis
c201d92380 Use server information from NodeJS host for fetching plugins 2019-06-03 09:31:18 -07:00
Matt Ellis
917f3738c5 Add --server to pulumi plugin install
Previously, when the CLI wanted to install a plugin, it used a special
method, `DownloadPlugin` on the `httpstate` backend to actually fetch
the tarball that had the plugin. The reason for this is largely
historical: at one point during a closed beta, we required presenting
an API key to download plugins (as a way to ensure folks outside the
beta could not download them), and because of that it was natural to
bake that functionality into the part of the code that interfaced with
the rest of the API from the Pulumi Service.

The downside here is that it means we need to host all the plugins on
`api.pulumi.com` which prevents community folks from being able to
easily write resource providers, since they have to manually manage
the process of downloading a provider to a machine and getting it on
the `$PATH` or putting it in the plugin cache.

To make this easier, we add a `--server` argument you can pass to
`pulumi plugin install` to control the URL that it attempts to fetch
the tarball from. We still have perscriptive guidence on how the
tarball must be
named (`pulumi-[<type>]-[<provider-name>]-vX.Y.Z.tar.gz`) but the base
URL can now be configured.

Folks publishing packages can use install scripts to run `pulumi
plugin install` passing a custom `--server` argument, if needed.

There are two improvements we can make to provide a nicer end to end
story here:

- We can augment the GetRequiredPlugins method on the language
  provider to also return information about an optional server to use
  when downloading the provider.

- We can pass information about a server to download plugins from as
  part of a resource registration or creation of a first class
  provider.

These help out in cases where for one reason or another where `pulumi
plugin install` doesn't get run before an update takes place and would
allow us to either do the right thing ahead of time or provide better
error messages with the correct `--server` argument. But, for now,
this unblocks a majority of the cases we care about and provides a
path forward for folks that want to develop and host their own
resource providers.
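
A sketch of how a download URL might be assembled from the `--server` base and the naming convention above; the function name and example server are illustrative.

```go
package main

import "fmt"

// pluginDownloadURL assembles the tarball URL for a plugin from a custom
// --server base URL and the required naming convention,
// pulumi-<type>-<name>-v<version>.tar.gz.
func pluginDownloadURL(server, kind, name, version string) string {
	return fmt.Sprintf("%s/pulumi-%s-%s-v%s.tar.gz", server, kind, name, version)
}

func main() {
	fmt.Println(pluginDownloadURL("https://example.com/releases/plugins", "resource", "mycloud", "0.1.0"))
	// https://example.com/releases/plugins/pulumi-resource-mycloud-v0.1.0.tar.gz
}
```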
2019-06-03 09:31:18 -07:00
Luke Hoban
15e924b5cf
Support aliases for renaming, re-typing, or re-parenting resources (#2774)
Adds a new resource option `aliases` which can be used to rename a resource.  When making a breaking change to the name or type of a resource or component, the old name can be added to the list of `aliases` for a resource to ensure that existing resources will be migrated to the new name instead of being deleted and replaced with the new named resource.

There are two key places this change is implemented. 

The first is the step generator in the engine.  When computing whether there is an old version of a registered resource, we now take into account the aliases specified on the registered resource.  That is, we first look up the resource by its new URN in the old state, and then by any aliases provided (in order).  This can allow the resource to be matched as a (potential) update to an existing resource with a different URN.

The second is the core `Resource` constructor in the JavaScript (and soon Python) SDKs.  This change ensures that when a parent resource is aliased, that all children implicitly inherit corresponding aliases.  It is similar to how many other resource options are "inherited" implicitly from the parent.

Five specific scenarios are explicitly tested as part of this PR:
1. Renaming a resource
2. Adopting a resource into a component (as the owner of both component and consumption codebases)
3. Renaming a component instance (as the owner of the consumption codebase without changes to the component)
4. Changing the type of a component (as the owner of the component codebase without changes to the consumption codebase)
5. Combining (1) and (3) to make both changes to a resource at the same time
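
An illustrative sketch of the step generator's lookup order described above, using a plain map in place of the engine's old-state index; the URNs are made up for illustration.

```go
package main

import "fmt"

// lookupOld finds the prior state for a registered resource: first by its new
// URN, then by each alias in the order provided.
func lookupOld(oldByURN map[string]string, urn string, aliases []string) (string, bool) {
	if s, ok := oldByURN[urn]; ok {
		return s, true
	}
	for _, alias := range aliases {
		if s, ok := oldByURN[alias]; ok {
			return s, true
		}
	}
	return "", false
}

func main() {
	olds := map[string]string{
		"urn:pulumi:dev::app::aws:s3/bucket:Bucket::old-name": "state-123",
	}
	state, ok := lookupOld(olds,
		"urn:pulumi:dev::app::aws:s3/bucket:Bucket::new-name",
		[]string{"urn:pulumi:dev::app::aws:s3/bucket:Bucket::old-name"})
	fmt.Println(state, ok) // state-123 true → treated as an update rather than delete+create
}
```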
2019-05-31 23:01:01 -07:00
Matt Ellis
9a77d72403 Set Outputs for providers in the state file. (#2793)
We model providers as resources in our state file, but we were
neglecting to set Outputs for these resources.  This was problematic
when we started to try to run DiffConfig, because when diffing a
resource we compare the new inputs and the old outputs, but the
resource never had any old outputs, so it was impossible for the
provider to see what the old state of the resource was.

To fix this, we now reflect the inputs used to create the provider
reference as outputs on the resource.
2019-05-31 15:14:42 -07:00
Pat Gavlin
6756c7ccec
Use new.{URN,Type,Provider} in applicable Steps. (#2787)
Just what it says on the tin. These changes are in support of the
aliasing work in #2774.
2019-05-30 17:48:00 -07:00
Praneet Loke
5ac446fffd
Add CI vars for Bitbucket Pipelines (#2783)
* Introduce a new package under ciutil for individual CI systems. Split out each CI system with its own env var detection.

* Add Bitbucket Pipelines env var detection.

* Update changelog with note about adding Bitbucket Pipelines detection.

* Rename the CI system structs.

* Move files from ciutil/systems to ciutil. Un-export some types that don't need visibility beyond the ciutil package.

* Un-export DetectSystem function and the System type.

* Add a test for CI systems which we only know by name and nothing else, i.e. those with just a baseCI implementation.
2019-05-30 17:35:41 -07:00
Matt Ellis
00bf458e39
Merge pull request #2766 from pulumi/ellismg/diff-check-config
Implement DiffConfig/CheckConfig for plugin based providers
2019-05-30 10:36:02 -07:00
Matt Ellis
e8487ad87f Workaround a bug in the kubernetes provider
The Kubernetes provider wanted to return Unimplemented for both
DiffConfig and CheckConfig. However, due to an interaction between the
package we used to construct the error we are returning and the
package we are using to actually construct the gRPC server for the
provider, we ended up in a place where the provider would actually end
up returning an error with code "Unknown", and the /text/ of the
message included information about it being due to the RPC not being
implemented.

So, when we try to call DiffConfig/CheckConfig on the provider, we
detect this case as well and treat messages of this shape as if the
provider had just returned "Unimplemented".
2019-05-29 11:53:10 -07:00
Praneet Loke
bf3325d9c3
Remove the GitHubLogin and GitHubRepo update metadata keys (#2732) 2019-05-29 11:22:59 -07:00
Joe Duffy
bf75fe0662
Suppress JSON outputs in preview correctly (#2771)
If --suppress-outputs is passed to `pulumi preview --json`, we
should not emit the stack outputs. This change fixes pulumi/pulumi#2765.

Also adds a test case for this plus some variants of updates.
2019-05-25 12:10:38 +02:00
Matt Ellis
0574d8cd6f Attempt to download plugins before doing a refresh
Like `preview`, `update` and `destroy`, we should ensure any plugins
that are listed in the snapshot are present.

Fixes #2669
2019-05-24 16:12:22 -07:00
Matt Ellis
261f012223 Correctly handle CheckConfig/DiffConfig and dynamic provider
In 3621c01f4b, we implemented
CheckConfig/DiffConfig incorrectly. We should have explicitly added
the handlers (to suppress the warnings we were getting) but returned an
error saying the RPC was not implemented.  Instead, we just returned
success but passed back bogus data.  This was "fine" at the time
because nothing called these methods.

Now that we are actually calling them, returning incorrect values
leads to errors in grpc. To deal with this we do two things:

1. Adjust the implementations in the dynamic provider to correctly
return not implemented. This allows us to pick up the default engine
behavior going forward.

2. Add some code in CheckConfig/DiffConfig that handle the gRPC error
that is returned when calling methods on the dynamic provider and fall
back to the legacy behavior. This means updating your CLI will not
cause issues for existing resources where the SDK has not been
updated.
2019-05-23 13:34:47 -07:00
Matt Ellis
8397ae447f Implement DiffConfig/CheckConfig for plugins 2019-05-23 13:34:34 -07:00
Matt Ellis
f897bf8b4b Flow allowUnknowns for Diff/Check Config
We pass this information for Diff and Check on specific resources, so
we can correctly block unknowns from flowing to plugins during applies.
2019-05-23 10:54:18 -07:00
Matt Ellis
e574f33fa0 Include URN as an argument in DiffConfig/CheckConfig
For provider plugins, the gRPC interfaces expect that a URN would be
included as part of the DiffConfig/CheckConfig request, which means we
need to flow this value into our Provider interface.

This change does that.
2019-05-23 10:43:22 -07:00
Chris Smith
6577784114
Reduce overhead of emitting engine events (#2735)
* Reduce size of most common apitype EngineEvents

* Update endpoint names for trace logging
2019-05-22 12:22:40 -07:00
Matt Ellis
61bff0c3a4 Do not parse version from resource providers
Until we can come up with a solution for #2753, just ignore the
version that comes in as part of a resource monitor RPC.
2019-05-21 19:20:18 -07:00
Matt Ellis
31bd463264 Gracefully handle the case where secrets_provider is uninitialized
A customer reported an issue where operations would fail with the
following error:

```
error: could not deserialize deployment: unknown secrets provider type
```

The problem here was the customer's deployment had a
`secrets_provider` section which looked like the following:

```
"secrets_providers": {
    "type": ""
}
```

And so our code to try to construct a secrets manager from this thing
would fail, as our registry does not contain any information about a
provider with an empty type.

We do two things in this change:

1. When serializing a deployment, if there is no secrets manager,
don't even write the `secrets_provider` block. This helps for cases
where we are roundtripping deployments that did not have a provider
configured (i.e. they were older stacks that did not use secrets)

2. When deserializing, if we see an empty secrets provider like the
above, interpret it to mean "this deployment has no secrets". We set
up a decrypter such that if it ends up having secrets, we panic
eagerly (since this is a logical bug in our system somewhere).
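
An illustrative sketch of both behaviors on the deserialization side, using hypothetical types: an empty provider type yields a decrypter that panics if it is ever asked to decrypt anything.

```go
package main

import "fmt"

// secretsProvider is a hypothetical, simplified view of the checkpoint block.
type secretsProvider struct {
	Type string `json:"type"`
}

// Decrypter is a minimal stand-in for the engine's decrypter interface.
type Decrypter interface {
	Decrypt(ciphertext string) (string, error)
}

// errorDecrypter is the "panic eagerly" decrypter: it must never be reached
// for a deployment that truly contains no secrets.
type errorDecrypter struct{}

func (errorDecrypter) Decrypt(string) (string, error) {
	panic("deployment unexpectedly contains secret values")
}

// decrypterFor interprets a missing or empty provider type as "this
// deployment has no secrets" instead of failing to deserialize.
func decrypterFor(sp *secretsProvider, registry map[string]func() Decrypter) (Decrypter, error) {
	if sp == nil || sp.Type == "" {
		return errorDecrypter{}, nil // older stack that never used secrets
	}
	if ctor, ok := registry[sp.Type]; ok {
		return ctor(), nil
	}
	return nil, fmt.Errorf("unknown secrets provider type %q", sp.Type)
}

func main() {
	d, err := decrypterFor(&secretsProvider{Type: ""}, nil)
	fmt.Printf("%T %v\n", d, err) // main.errorDecrypter <nil> — deserialization succeeds
}
```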
2019-05-21 17:11:54 -07:00
CyrusNajmabadi
2246a97c17
Always normalize paths to forward slashes to properly work with gocloud (#2747) 2019-05-20 14:46:00 -04:00
Matt Ellis
7ca9721d23 Apply colorization to the ---outputs--- line
We were not actually calling our colorization routines, which led to
printing this very confusing text:

```
<{%reset%}>    --outputs:--<{%reset%}>
```

When running updates with `--diff` or when drilling into details of a
proposed operation, like a refresh.
2019-05-20 11:43:02 -07:00
Matt Ellis
4f693af023 Do not pass arguments as secrets to CheckConfig/Configure
Providers from plugins require that configuration value be
strings. This means if we are passing a secret string to a
provider (for example, trying to configure a kubernetes provider based
on some secret kubeconfig) we need to be careful to remove the
"secretness" before actually making the calls into the provider.

Failure to do this resulted in errors saying that the provider
configuration values had to be strings, and of course, the values
logically were; they were just marked as secret strings.

Fixes #2741
2019-05-17 16:42:29 -07:00
Matt Ellis
2cd4409c0d Fix a panic during property diffing
We have to actually return the value we compute instead of just
dropping it on the floor and treating the underlying values as
primitive.

I ran into this during dogfooding, the added test case would
previously panic.
2019-05-15 16:20:25 -07:00
Matt Ellis
ccbc84ecc1 Add an additional test case
This was used as a motivating example during an in person discussion
with Luke.
2019-05-15 12:03:48 -07:00
Matt Ellis
4368830448 Rework secret annotation algorithm slightly
We adopt a new algorithm for annotating secrets, which works as
follows:

If the source and destinations are both property maps, annotate their
secrets deeply.

Otherwise, if there is a property in both the input and output arrays
with the same name and the value in the inputs has secrets /anywhere/
in it, mark the output itself a secret.

This means, for example, that an array in the inputs with a secret value
as one of the elements will cause the entire array value in the outputs
to be marked as a secret. This is done because arrays are often treated
as sets by providers, so we really shouldn't consider ordering. It also
means that if a value is added to the array as part of the operation, we
still mark the new array as a secret even though the values may not be
identical to the inputs.
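
A minimal sketch of the algorithm, using a hypothetical simplified `Value` type in place of the engine's property values.

```go
package main

import "fmt"

// Value is a hypothetical, simplified property value: a plain value, a nested
// map or array, and a flag marking it as secret.
type Value struct {
	Secret bool
	Map    map[string]Value
	Array  []Value
	Plain  interface{}
}

// containsSecrets reports whether a value has a secret anywhere inside it.
func containsSecrets(v Value) bool {
	if v.Secret {
		return true
	}
	for _, e := range v.Map {
		if containsSecrets(e) {
			return true
		}
	}
	for _, e := range v.Array {
		if containsSecrets(e) {
			return true
		}
	}
	return false
}

// annotate applies the rules above: when both sides of a key are maps we
// recurse; otherwise, if the input value has a secret anywhere in it, the
// whole output value is marked secret.
func annotate(in, out map[string]Value) {
	for k, ov := range out {
		iv, ok := in[k]
		if !ok {
			continue
		}
		if iv.Map != nil && ov.Map != nil {
			annotate(iv.Map, ov.Map)
			continue
		}
		if containsSecrets(iv) {
			ov.Secret = true
			out[k] = ov
		}
	}
}

func main() {
	in := map[string]Value{"tokens": {Array: []Value{{Secret: true, Plain: "s3cr3t"}}}}
	out := map[string]Value{"tokens": {Array: []Value{{Plain: "s3cr3t"}, {Plain: "added-by-provider"}}}}
	annotate(in, out)
	fmt.Println(out["tokens"].Secret) // true: the whole array value is now secret
}
```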
2019-05-15 09:33:02 -07:00
Matt Ellis
af2a2d0f42 Correctly flow secretness across structured values
For providers which do not natively support secrets (which is all of
them today), we annotate output values coming back from the provider
if there is a coresponding secret input in the inputs we passed in.

This logic was not tearing into rich objects, so if you passed a
secret as a member of an array or object into a resource provider, we
would lose the secretness on the way back.

Because of the interaction with Check (where we call Check and then
take the values returned by the provider as inputs for all calls to
Diff/Update), this would apply not only to the Output values of a
resource but also the Inputs (because the secret metadata would not
flow from the inputs of check to the outputs).

This change augments our logic which transfers secrets metadata from
one property map to another to handle these additional cases.
2019-05-15 09:32:25 -07:00
Matt Ellis
145fdd9a7c Fix spelling issues 2019-05-15 08:32:49 -07:00
Matt Ellis
c91ddf996b Do not prompt for passphrase multiple times
The change does two things:

- Reorders some calls in the CLI to prevent trying to create a secrets
  manager twice (which would end up prompting twice).

- Adds a cache inside the passphrase secrets manager such that when
  decrypting a deployment, we can re-use the one created earlier in
  the update. This is sort of a hack, but is needed because otherwise
  we would fail to decrypt the deployment, meaning that if you had a
  secret value in your deployment *and* you were using local
  passphrase encryption *and* you had not set PULUMI_CONFIG_PASSPHRASE
  you would get an error asking you to do so.

Fixes #2729
2019-05-14 23:35:27 -07:00
Matt Ellis
8a865acf11 Don't fail early when loading passphrase secrets managers from state
This is helpful in some round-trip cases where we may not be able to
build the encrypter or decrypter but will end up not needing
them. When we fail to load the manager, we return a manager that has
the correct state, but will error when it tries to perform any
operations.  However, if there are no secrets in the deployment, these
methods will never be called and we'll be able to correctly roundtrip
checkpoints even without having access to the password (since there
were no secret values to decrypt or encrypt).
2019-05-10 17:07:52 -07:00
Matt Ellis
ffc6053c88 Fix --json output
We were dropping new and old states on the floor instead of including
them as part of the previewed operation due to a logic error (we want
to append them when there are no errors from serialization, vs when
there are errors).
2019-05-10 17:07:52 -07:00
Matt Ellis
cad1949dba Fix an issue updating a newly created stack from the local backend
When creating a new stack using the local backend, the default
checkpoint has no deployment. That means there's a nil snapshot
created, which means our strategy of using the base snapshot's secrets
manager was not going to work. Trying to do so would result in a panic
because the baseSnapshot is nil in this case.

Using the secrets manager we are going to use to persist the snapshot
is a better idea anyhow, as that's what's actually going to be burned
into the deployment when we serialize the snapshot, so let's use that
instead.
2019-05-10 17:07:52 -07:00
Matt Ellis
25453d88bf Fix a bug in our logic for replacing secrets with [secret] 2019-05-10 17:07:52 -07:00
Matt Ellis
9926071e19 Set a passphrase in more tests 2019-05-10 17:07:52 -07:00
Matt Ellis
a5ef966caf Update --json output for preview in light of secrets
Replace any secret properties with the string `[secret]` for now. We
can consider something like allowing `--show-secrets` to show them.
2019-05-10 17:07:52 -07:00
Matt Ellis
f705dde7fb Remove acceptsSecrets from InvokeRequest
In our system, we model secrets as outputs with an additional bit of
metadata that says they are secret. For Read and Register resource
calls, our RPC interface indicates whether the client side of the
interface can handle secrets being returned (i.e. the language SDK
knows how to sniff for the special signature and resolve the output with the
special bit set).

For Invoke, we have no such model. Instead, we return a `Promise<T>`
where T's shape has just regular property fields.  There's no place
for us to tack the secretness onto, since there are no Outputs.

So, for now, don't even return secret values back across the invoke
channel. We can still take them as arguments (which is good) but we
can't even return secrets as part of invoke calls. This is not ideal,
but given the way we model these sources, there's no way around
this. Fortunately, the results of these invoke calls are not stored in
the checkpoint and since the type is not Output<T> it will be clear
that the underlying value is just present in plaintext. A user that
wants to pass the result of an invoke into a resource can turn an
existing property into a secret via `pulumi.secret`.
2019-05-10 17:07:52 -07:00
Matt Ellis
cb59c21c01 Rename SecretOutputs to AdditionalSecretOutputs
This makes the intention of this field clearer.
2019-05-10 17:07:52 -07:00
Matt Ellis
b7fbe74404 Remove errant import 2019-05-10 17:07:52 -07:00
Matt Ellis
70e16a2acd Allow using the passphrase secrets manager with the pulumi service
This change allows using the passphrase secrets manager when creating
a stack managed by the Pulumi service.  `pulumi stack init`, `pulumi
new` and `pulumi up` all learned a new optional argument
`--secrets-provider` which can be set to "passphrase" to force the
passphrase based secrets provider to be used.  When unset the default
secrets provider is used based on the backend (for local stacks this
is passphrase, for remote stacks, it is the key managed by the pulumi
service).

As part of this change, we also initialize the secrets manager when a
stack is created, instead of waiting for the first time a secret
config value is stored. We do this so that if an update is run using
`pulumi.secret` before any secret configuration values are used, we
already have the correct encryption method selected for a stack.
2019-05-10 17:07:52 -07:00
Matt Ellis
e5d3a20399 Use "passphrase" and "service" instead of "local" and "cloud" 2019-05-10 17:07:52 -07:00
Matt Ellis
88012c4d96 Enable "cloud" and "local" secrets managers across the system
We move the implementations of our secrets managers into
`pkg/secrets` (which is where the base64 one lives) and wire their use
up during deserialization.

It's a little unfortunate that for the passphrase based secrets
manager, we have to require `PULUMI_CONFIG_PASSPHRASE` when
constructing it from state, but we can make more progress with the
changes as they are now, and I think we can come up with some ways to
mitigate this problem a bit (at least make it only a problem for cases
where you are trying to take a stack reference to another stack that
is managed with local encryption).
2019-05-10 17:07:52 -07:00
Matt Ellis
207219dc9f Remove unused method
Logs are no longer provided by the service (this is a holdover from
the PPC days when service deployments were done in the cloud and it
handled collecting logs).

Removing this breaks another cycle that would be introduced with the
next change (in our test code)
2019-05-10 17:07:52 -07:00
Matt Ellis
6278c1c8d9 Do not depend on backend package from client package
The next change is going to do some code motion that would create some
circular imports if we did not do this. There was nothing that
required the members we were moving be in the backend package, so it
was easy enough to pull them out.
2019-05-10 17:07:52 -07:00
Matt Ellis
e7e934a59a Push initialization of SecretsManager out of the backend
When performing an update, require that a secrets manager is passed in
as part of the `backend.UpdateOperation` bag and use it.  The CLI now
passes this in (it still uses the default base64 secrets manager, so
this is just code motion into a high layer, since the CLI will be the
one to choose what secrets manager to use based on project settings).
2019-05-10 17:07:52 -07:00
Matt Ellis
307ee72b5f Use existing secrets manager when roundtripping
There are a few operations we do (stack rename, importing and edits)
where we will materialize a `deploy.Snapshot` from an existing
deployment, mutate it in some way, and then store it.

In these cases, we will just re-use the secrets manager that was used
to build the snapshot when we re-serialize it. This is less than ideal
in some cases, because many of these operations could run on an
"encrypted" copy of the Snapshot, where Inputs and Outputs have not
been decrypted.

Unfortunately, our system now is not set up in a great way to support
this and adding something like a `deploy.EncryptedSnapshot` would
require large scale code duplications.

So, for now, we'll take the hit of decrypting and re-encrypting, but
long term introducing a `deploy.EncryptedSnapshot` may be nice as it
would let us elide the encryption/decryption steps in some places and
would also make it clear what parts of our system have access to the
plaintext values of secrets.
2019-05-10 17:07:52 -07:00
Matt Ellis
db18ee3905 Retain the SecretsManager that was used to deserialize a deployment
We have many cases where we want to do the following:

deployment -> snapshot -> process snapshot -> deployment

We now retain information in the snapshot about the secrets manager
that was used to construct it, so in these round trip cases, we can
re-use the existing manager.
2019-05-10 17:07:52 -07:00
Matt Ellis
480a2f6c9e Augment secret outputs based on per request options 2019-05-10 17:07:52 -07:00
Matt Ellis
b606b3091d Allow passing a nil SecretsManager to SerializeDeployment
When nil, it means no information is retained in the deployment about
the manager (as there is none) and any attempt to persist secret
values fails.

This should only be used in cases where the snapshot is known to not
contain secret values.
2019-05-10 17:07:52 -07:00
Matt Ellis
67bb134c28 Don't return serialized outputs from stack.GetRootStackResource
Half of the call sites didn't care about these values, and with the
secrets work the ergonomics of calling this method when it has to
return serialized outputs isn't great. Move the serialization for this
into the CLI itself, as it was the only place that cared to do
this (so it could display things to end users).
2019-05-10 17:07:52 -07:00
Matt Ellis
d341b4e000 Don't track a stack's configuration file in the backend
The previous changes to move config loading out of the backend mean
that the backends no longer need to track this information, as they
never use it.
2019-05-10 17:07:52 -07:00
Matt Ellis
10792c417f Remove backend.GetStackCrypter
As part of the pluggable secrets work, the crypter's used for secrets
are no longer tied to a backend. To enforce this, we remove the
`backend.GetStackCrypter` function and then have the relevant logic to
construct one live inside the CLI itself.

Right now the CLI still uses the backend type to decide what Crypter
to build, but we'll change that shortly.
2019-05-10 17:07:52 -07:00
Matt Ellis
5cde8e416a Rename base64sm to b64 2019-05-10 17:07:52 -07:00
Matt Ellis
97902ee50b Refactor config loading out of the backend
We require configuration to perform updates (as well as previews,
destroys and refreshes). Because of how everything evolved, loading
this configuration (and finding the corresponding decrypter) was
implemented in both the file and http backends, which wasn't great.

Refactor things such that the CLI itself builds out this information
and passes it along to the backend to perform operations. This means
less code duplicated between backends, fewer places where the backend
assumes things about the existence of `Pulumi.yaml` files, and in
general a more pleasant interface for other uses.
2019-05-10 17:07:52 -07:00
Matt Ellis
d076bad1a5 Remove Config() from backend.Stack
For cloud backed stacks, this was already returning nil and due to the
fact that we no longer include config in the checkpoint for local
stacks, it was nil there as well.

Removing this helps clean stuff up and should make some future
refactorings around custom secret managers easier to land.

We can always add it back later if we miss it (and make it actually do
the right thing!)
2019-05-10 17:07:52 -07:00
Matt Ellis
cc74ef8471 Encrypt secret values in deployments
When constructing a Deployment (which is a plaintext representation of
a Snapshot), ensure that we encrypt secret values. To do so, we
introduce a new type `secrets.Manager` which is able to encrypt and
decrypt values. In addition, it is able to reflect information about
itself that can be stored in the deployment such that we can
deserialize the deployment into a snapshot (decrypting the values in
the process) without external knowledge about how it was encrypted.

The ability to do this is important for allowing stack references to
work, since two stacks may not use the same manager (or they will use
the same type of manager, but have different state).

The state value is stored in plaintext in the deployment, so it **must
not** contain sensitive data.

A sample manager, which just base64 encodes and decodes strings, is
provided, as it is useful for testing. We will allow it to be varied
soon.
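
An illustrative sketch of what such a self-describing manager might look like, including the base64 testing manager; the interface shape here is simplified and is not the real `secrets.Manager`.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// Manager is an illustrative self-describing secrets manager: its type and
// non-sensitive state can be stored in a deployment so an equivalent manager
// can be reconstructed when the deployment is read back.
type Manager interface {
	Type() string       // e.g. "b64", "passphrase", "service"
	State() interface{} // must never contain sensitive data
	Encrypt(plaintext string) (string, error)
	Decrypt(ciphertext string) (string, error)
}

// b64Manager is the testing-only manager mentioned above: it just base64
// encodes and decodes strings and carries no state.
type b64Manager struct{}

func (b64Manager) Type() string       { return "b64" }
func (b64Manager) State() interface{} { return nil }
func (b64Manager) Encrypt(p string) (string, error) {
	return base64.StdEncoding.EncodeToString([]byte(p)), nil
}
func (b64Manager) Decrypt(c string) (string, error) {
	b, err := base64.StdEncoding.DecodeString(c)
	return string(b), err
}

func main() {
	var m Manager = b64Manager{}
	ciphertext, _ := m.Encrypt("hunter2")
	plaintext, _ := m.Decrypt(ciphertext)
	fmt.Println(m.Type(), ciphertext, plaintext)
}
```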
2019-05-10 17:07:52 -07:00
Matt Ellis
294df77703 Retain secrets for unenlightened providers
When a provider does not natively understand secrets, we need to pass
inputs as raw values, so as not to confuse it.

This leads to a not great experience by default, where we pass raw
values to `Check` and then use the results as the inputs to remaining
operations. This means that by default, we don't end up retaining
information about secrets in the checkpoint, since the call to `Check`
erases all of our information about secrets.

To provide a nicer experience where we don't lose information about
secrets even in cases where providers don't natively understand them,
we take property maps produced by the provider and mark any values in
them that are not already listed as secret as secret if the
corresponding input was a secret.

This ensures that any secret property values in the inputs are
reflected back into the outputs, even for providers that don't
understand secrets natively.
2019-05-10 17:07:52 -07:00
Matt Ellis
529645194e Track secrets inside the engine
A new `Secret` property value is introduced, and plumbed across the
engine.

- When Unmarshalling properties /from/ RPC calls, we instruct the
  marshaller to retain secrets, since we now understand them in the
  rest of the engine.

- When Marshalling properties /to/ RPC calls, we use our tracked data
  to understand if the other side of the connection can accept
  secrets. If they can, we marshal them in a similar manner to assets,
  where we have a special object with a signature specific to secrets
  and an underlying value (which is the /plaintext/ value). In cases
  where the other end of the connection does not understand secrets,
  we just drop the metadata and marshal the underlying value as we
  normally would.

- Any secrets that are passed across the engine events boundary are
  presently passed as just `[secret]`.

- When persisting secret values as part of a deployment, we use a rich
  object so that we can track the value is a secret, but right now the
  underlying value is not actually encrypted.
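
A sketch of the marshalling decision described in the second bullet; the signature key and value below are placeholders, not the engine's real constants.

```go
package main

import "fmt"

// These constants are placeholders: the engine uses well-known signature
// values (defined in the SDK) to tag special objects such as assets and
// secrets inside marshalled property bags.
const (
	sigKey    = "__sig"         // hypothetical name of the signature field
	secretSig = "secret-sig-v1" // hypothetical signature value for secrets
)

// marshalSecret sketches the decision above: a peer that accepts secrets gets
// a wrapper object carrying the plaintext value; any other peer just gets the
// underlying value with the secret metadata dropped.
func marshalSecret(plaintext interface{}, peerAcceptsSecrets bool) interface{} {
	if !peerAcceptsSecrets {
		return plaintext
	}
	return map[string]interface{}{
		sigKey:  secretSig,
		"value": plaintext,
	}
}

func main() {
	fmt.Println(marshalSecret("hunter2", true))  // map[__sig:secret-sig-v1 value:hunter2]
	fmt.Println(marshalSecret("hunter2", false)) // hunter2
}
```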
2019-05-10 17:07:52 -07:00
Matt Ellis
9623293f64 Implement new RPC endpoints 2019-05-10 17:07:52 -07:00