// pulumi/sdk/nodejs/config.ts
// Copyright 2016-2017, Pulumi Corporation. All rights reserved.
import { RunError } from "./errors";
import * as runtime from "./runtime";

/**
 * Config is a bag of related configuration state. Each bag contains any number of configuration variables, indexed by
 * simple keys, and each has a name that uniquely identifies it; two bags with different names do not share values for
 * variables that otherwise share the same key. For example, a bag whose name is `pulumi:foo`, with keys `a`, `b`,
 * and `c`, is entirely separate from a bag whose name is `pulumi:bar` with the same simple key names. Each key has a
 * fully qualified name, such as `pulumi:foo:a`, ..., and `pulumi:bar:a`, respectively.
 */
export class Config {
    /**
     * name is the configuration bag's logical name and uniquely identifies it.
     */
    public readonly name: string;

    constructor(name: string) {
        this.name = name;
    }

    /**
     * get loads an optional configuration value by its key, or returns undefined if it doesn't exist.
     *
     * @param key The key to look up.
     */
    public get(key: string): string | undefined {
        return runtime.getConfig(this.fullKey(key));
    }
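
    // For example, a hypothetical sketch (the bag and key names here are illustrative only):
    //
    //     const cfg = new Config("pulumi:foo");
    //     cfg.get("a");   // resolves the fully qualified key "pulumi:foo:a"; undefined if unset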

    /**
     * getBoolean loads an optional configuration value, as a boolean, by its key, or returns undefined if it doesn't
     * exist. If the configuration value isn't a legal boolean, this function will throw an error.
     *
     * @param key The key to look up.
     */
    public getBoolean(key: string): boolean | undefined {
        const v: string | undefined = this.get(key);
        if (v === undefined) {
            return undefined;
        } else if (v === "true") {
            return true;
        } else if (v === "false") {
            return false;
        }
        throw new ConfigTypeError(this.fullKey(key), v, "boolean");
    }

    /**
     * getNumber loads an optional configuration value, as a number, by its key, or returns undefined if it doesn't
     * exist. If the configuration value isn't a legal number, this function will throw an error.
     *
     * @param key The key to look up.
     */
    public getNumber(key: string): number | undefined {
        const v: string | undefined = this.get(key);
        if (v === undefined) {
            return undefined;
        }
        const f: number = parseFloat(v);
        if (isNaN(f)) {
            throw new ConfigTypeError(this.fullKey(key), v, "number");
        }
        return f;
    }

    /**
     * getObject loads an optional configuration value, as an object, by its key, or returns undefined if it doesn't
     * exist. This routine simply JSON-parses the value and doesn't validate the shape of its contents.
     *
     * @param key The key to look up.
     */
    public getObject<T>(key: string): T | undefined {
        const v: string | undefined = this.get(key);
        if (v === undefined) {
            return undefined;
        }
        try {
            return <T>JSON.parse(v);
        }
        catch (err) {
            throw new ConfigTypeError(this.fullKey(key), v, "JSON object");
        }
    }
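
    // A hypothetical sketch of reading a structured value (the interface and key name below are
    // illustrative assumptions, not part of this module):
    //
    //     interface ServerSettings { host: string; port: number; }
    //     const settings = new Config("pulumi:foo").getObject<ServerSettings>("serverSettings");
    //     // undefined if "pulumi:foo:serverSettings" is unset; malformed JSON throws ConfigTypeError.
    //     // Note the parsed value is merely cast to ServerSettings, not validated against it.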

    /**
     * require loads a configuration value by its given key. If it doesn't exist, an error is thrown.
     *
     * @param key The key to look up.
     */
    public require(key: string): string {
        const v: string | undefined = this.get(key);
        if (v === undefined) {
            throw new ConfigMissingError(this.fullKey(key));
        }
        return v;
    }

    /**
     * requireBoolean loads a configuration value, as a boolean, by its given key. If it doesn't exist, or the
     * configuration value is not a legal boolean, an error is thrown.
     *
     * @param key The key to look up.
     */
    public requireBoolean(key: string): boolean {
        const v: boolean | undefined = this.getBoolean(key);
        if (v === undefined) {
            throw new ConfigMissingError(this.fullKey(key));
        }
        return v;
    }

    /**
     * requireNumber loads a configuration value, as a number, by its given key. If it doesn't exist, or the
     * configuration value is not a legal number, an error is thrown.
     *
     * @param key The key to look up.
     */
    public requireNumber(key: string): number {
        const v: number | undefined = this.getNumber(key);
        if (v === undefined) {
            throw new ConfigMissingError(this.fullKey(key));
        }
        return v;
    }

    /**
     * requireObject loads a configuration value, as an object, by its given key. If it doesn't exist, or the
     * configuration value is not legal JSON, an error is thrown.
     *
     * @param key The key to look up.
     */
    public requireObject<T>(key: string): T {
        const v: T | undefined = this.getObject<T>(key);
        if (v === undefined) {
            throw new ConfigMissingError(this.fullKey(key));
        }
        return v;
    }

    /**
     * fullKey turns a simple configuration key into a fully resolved one, by prepending the bag's name.
     *
     * @param key The simple key to qualify.
     */
    private fullKey(key: string): string {
        return `${this.name}:${key}`;
    }
}
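
// A minimal end-to-end sketch (the bag name, keys, and fallback value below are illustrative
// assumptions; actual values would be supplied through Pulumi's configuration system, e.g. `pulumi config set`):
//
//     const cfg = new Config("my-project:web");
//     const region = cfg.require("region");        // throws a ConfigMissingError if unset
//     const replicas = cfg.getNumber("replicas");  // undefined if unset; ConfigTypeError if not numeric
//     const debug = cfg.getBoolean("debug") || false;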

/**
 * ConfigTypeError is used when a configuration value is of the wrong type.
 */
class ConfigTypeError extends RunError {
    constructor(key: string, v: any, expectedType: string) {
        super(`Configuration '${key}' value '${v}' is not a valid ${expectedType}`);
    }
}

/**
 * ConfigMissingError is used when a configuration value is completely missing.
 */
class ConfigMissingError extends RunError {
    constructor(public key: string) {
        super(
            `Missing required configuration variable '${key}'\n` +
            `\tplease set a value using the command \`pulumi config set ${key} <value>\``,
        );
    }
}