pulumi/sdk/proto/resource.proto

// Copyright 2016-2018, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

syntax = "proto3";

import "google/protobuf/empty.proto";
import "google/protobuf/struct.proto";
import "provider.proto";

package pulumirpc;

// ResourceMonitor is the interface a source uses to talk back to the planning monitor orchestrating the execution.
service ResourceMonitor {
    rpc SupportsFeature(SupportsFeatureRequest) returns (SupportsFeatureResponse) {}
    rpc Invoke(InvokeRequest) returns (InvokeResponse) {}
    rpc ReadResource(ReadResourceRequest) returns (ReadResourceResponse) {}
    rpc RegisterResource(RegisterResourceRequest) returns (RegisterResourceResponse) {}
    rpc RegisterResourceOutputs(RegisterResourceOutputsRequest) returns (google.protobuf.Empty) {}
}
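
// A typical interaction, sketched for orientation (illustrative, not normative): the engine hands the language host
// the address of this service when it launches the user's program; as the program runs, the host calls
// RegisterResource for each resource the program allocates, RegisterResourceOutputs to attach extra outputs once a
// component finishes constructing, ReadResource to read the state of resources that already exist, and Invoke to call
// provider functions. SupportsFeature lets the host probe for engine capabilities (such as strongly typed secrets)
// before relying on them.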

// SupportsFeatureRequest allows a client to test if the resource monitor supports a certain feature, which it may use
// to control the format or types of messages it sends.
message SupportsFeatureRequest {
    string id = 1; // the ID of the feature to test support for.
}

message SupportsFeatureResponse {
    bool hasSupport = 1; // true when the resource monitor supports this feature.
}

// There is a clear distinction here between the "properties" bag sent across the wire as part of these RPCs and
// properties that exist on Pulumi resources as projected into the target language. It is important to call out that
// the properties here are in the format that a provider will expect: that is to say, they are usually in camel case.
// If a language wants to project properties in a format *other* than camel case, it is the job of the language to
// ensure that the properties are translated into camel case before invoking an RPC.
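
// For example, a language SDK that projects properties in snake case (a hypothetical `instance_type` input, say) must
// translate the key to camel case before sending it, so the `google.protobuf.Struct` entry that crosses the wire would
// look like:
//
//     fields {
//       key: "instanceType"
//       value { string_value: "t2.micro" }
//     }
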
// ReadResourceRequest contains enough information to uniquely qualify and read a resource's state.
message ReadResourceRequest {
    string id = 1; // the ID of the resource to read.
    string type = 2; // the type of the resource object.
    string name = 3; // the name, for URN purposes, of the object.
    string parent = 4; // an optional parent URN that this child resource belongs to.
    google.protobuf.Struct properties = 5; // optional state sufficient to uniquely identify the resource.
    repeated string dependencies = 6; // a list of URNs that this read depends on, as observed by the language host.
    string provider = 7; // an optional reference to the provider to use for this read.
    string version = 8; // the version of the provider to use when servicing this request.
    bool acceptSecrets = 9; // when true, operations should return secrets as strongly typed.
    repeated string additionalSecretOutputs = 10; // a list of output properties that should also be treated as secret, in addition to ones we detect.
    repeated string aliases = 11; // a list of additional URNs that should be considered the same.
}
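
// An illustrative ReadResourceRequest in text form (all values hypothetical):
//
//     id: "vol-0a1b2c3d"
//     type: "aws:ebs/volume:Volume"
//     name: "data"
//     version: "1.2.3"
//     acceptSecrets: true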

// ReadResourceResponse contains the result of reading a resource's state.
message ReadResourceResponse {
    string urn = 1; // the URN for this resource.
    google.protobuf.Struct properties = 2; // the state of the resource read from the live environment.
}
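
// The corresponding ReadResourceResponse might then carry the computed URN and the live state (again hypothetical):
//
//     urn: "urn:pulumi:dev::acmeproj::aws:ebs/volume:Volume::data"
//     properties {
//       fields {
//         key: "size"
//         value { number_value: 100 }
//       }
//     }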

// RegisterResourceRequest contains information about a resource object that was newly allocated.
message RegisterResourceRequest {
    // PropertyDependencies describes the resources that a particular property depends on.
    message PropertyDependencies {
        repeated string urns = 1; // A list of URNs this property depends on.
    }

    string type = 1; // the type of the object allocated.
    string name = 2; // the name, for URN purposes, of the object.
    string parent = 3; // an optional parent URN that this child resource belongs to.
    bool custom = 4; // true if the resource is a custom resource, managed by a plugin's CRUD operations.
    google.protobuf.Struct object = 5; // an object produced by the interpreter/source.
    bool protect = 6; // true if the resource should be marked protected.
    repeated string dependencies = 7; // a list of URNs that this resource depends on, as observed by the language host.
    string provider = 8; // an optional reference to the provider to manage this resource's CRUD operations.
    map<string, PropertyDependencies> propertyDependencies = 9; // a map from property keys to the dependencies of the property.
    bool deleteBeforeReplace = 10; // true if this resource should be deleted before replacement.
    string version = 11; // the version of the provider to use when servicing this request.
    repeated string ignoreChanges = 12; // a list of property selectors to ignore during updates.
    bool acceptSecrets = 13; // when true, operations should return secrets as strongly typed.
    repeated string additionalSecretOutputs = 14; // a list of output properties that should also be treated as secret, in addition to ones we detect.
    repeated string aliases = 15; // a list of additional URNs that should be considered the same.
}
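
// An illustrative RegisterResourceRequest in text form (all values hypothetical). Note how propertyDependencies
// records, per input property, the URNs that the property's value was derived from, which is finer-grained than the
// resource-level dependencies list:
//
//     type: "aws:s3/bucketObject:BucketObject"
//     name: "index"
//     custom: true
//     object {
//       fields {
//         key: "bucket"
//         value { string_value: "my-bucket-1234abcd" }
//       }
//     }
//     dependencies: "urn:pulumi:dev::acmeproj::aws:s3/bucket:Bucket::my-bucket"
//     propertyDependencies {
//       key: "bucket"
//       value { urns: "urn:pulumi:dev::acmeproj::aws:s3/bucket:Bucket::my-bucket" }
//     }
//     acceptSecrets: true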

// RegisterResourceResponse is returned by the engine after a resource has finished being initialized. It includes the
// auto-assigned URN, the provider-assigned ID, and any other properties initialized by the engine.
message RegisterResourceResponse {
    string urn = 1; // the URN assigned by the engine.
    string id = 2; // the unique ID assigned by the provider.
    google.protobuf.Struct object = 3; // the resulting object properties, including provider defaults.
    bool stable = 4; // if true, the object's state is stable and may be trusted not to change.
    repeated string stables = 5; // an optional list of guaranteed-stable properties.
}
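
// An illustrative RegisterResourceResponse (hypothetical values); the provider-assigned ID may be empty during a
// preview, before the resource is physically created:
//
//     urn: "urn:pulumi:dev::acmeproj::aws:s3/bucketObject:BucketObject::index"
//     id: "index"
//     object {
//       fields {
//         key: "contentType"
//         value { string_value: "text/html" }
//       }
//     }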

// RegisterResourceOutputsRequest adds extra resource outputs created by the program after registration has occurred.
message RegisterResourceOutputsRequest {
    string urn = 1; // the URN for the resource to attach output properties to.
    google.protobuf.Struct outputs = 2; // additional output properties to add to the existing resource.
}
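
// An illustrative RegisterResourceOutputsRequest (hypothetical values), as a component resource might send once its
// children have been created and its outputs are known:
//
//     urn: "urn:pulumi:dev::acmeproj::my:component:Website::site"
//     outputs {
//       fields {
//         key: "url"
//         value { string_value: "http://my-bucket-1234abcd.s3-website-us-west-2.amazonaws.com" }
//       }
//     }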