Commit graph

374 commits

pat@pulumi.com 597db186ec Renames: plan -> preview, deploy -> push.
Part of #353.

These changes also remove all command aliases from the `pulumi` command.
2017-09-22 15:28:03 -07:00
Joe Duffy f6e694c72b Rename pulumi-fabric to pulumi
This includes a few changes:

* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.

* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.

* The CLI is renamed from lumi to pulumi.
2017-09-21 19:18:21 -07:00
Luke Hoban c5558e2778 Convert sdk documentation to TypeDoc/JsDoc 2017-09-21 18:15:29 -07:00
Matt Ellis fcc81bac24 Fix nativeruntime module build on Windows
There were two problems:

- node-gyp configure was failing because of different shell syntax
between Windows and *nix.
- MSVC 2015 is not smart enough to understand that our use of strlen actually
results in a constant value, which prevents us from using it to create an
array; move to a macro-based solution.
2017-09-21 11:49:03 -07:00
joeduffy 1c2c972d37 Add back Computed<T> as a short-hand
This adds back Computed<T> as a short-hand for Promise<T | undefined>.
Subtly, all resource properties need to permit undefined flowing through
during planning.  Rather than forcing the long-hand version, which is easy
to forget, we'll keep the convention of preferring Computed<T>.  It's
just a typedef and the runtime type is just a Promise.
2017-09-20 09:59:32 -07:00
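A minimal sketch of that shorthand and how a consumer tolerates the undefined that flows through during planning (names illustrative, not the exact SDK surface):

    // Computed<T> is just a typedef; at runtime it is an ordinary Promise.
    export type Computed<T> = Promise<T | undefined>;

    // Example: a property that may not be known until deployment completes.
    async function describeBucket(bucketName: Computed<string>): Promise<string> {
        const name = await bucketName;
        return name === undefined ? "<unknown during planning>" : `bucket ${name}`;
    }
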
joeduffy f8ee6c570e Eliminate Computed/Property in favor of Promises
As part of pulumi/pulumi-fabric#331, we've been exploring just using
undefined to indicate that a property value is absent during planning.
We also considered blocking the message loop to simplify the overall
programming model, so that all asynchrony is hidden.

It turns out there be dragons 🐲 anytime you try to block the
message loop.  So, we aren't quite sure about that bit.

But the part we do have conviction about is that this Computed/Property
model is far too complex.  Furthermore, it's very close to promises, and
yet frustratingly so far away.  Indeed, the original thinking in
pulumi/pulumi-fabric#271 was simply to use promises, but we wanted to
encourage dataflow styles, rather than control flow.  But we muddied up
our thinking by worrying about awaiting a promise that would never resolve.

It turns out we can achieve a middle ground: resolve planning promises to
undefined, so that they don't lead to hangs, but still use promises so
that asynchrony is explicit in the system.  This also avoids blocking the
message loop.  Who knows, this may actually be a fine final destination.
2017-09-20 09:59:32 -07:00
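A sketch of the middle ground described above: during a dry run the promise is settled to undefined rather than left pending, so awaiting it cannot hang (the helper and the dryRun flag are assumptions made for illustration):

    // Hypothetical helper: a property promise plus its resolver.
    function createProperty<T>() {
        let resolve!: (v: T | undefined) => void;
        const promise = new Promise<T | undefined>(r => { resolve = r; });
        return { promise, resolve };
    }

    const dryRun = true;            // planning (preview) versus a real deployment
    const id = createProperty<string>();

    if (dryRun) {
        id.resolve(undefined);      // settle now so dependents never hang
    } else {
        id.resolve("i-0abc123");    // the engine supplies the real value
    }

    id.promise.then(v => console.log("id is", v ?? "<computed>"));
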
joeduffy 22387d24cd Switch to a --parallel=P flag
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism.  We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized.  We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it.  And in any case, the --parallel=P capability will be useful.
2017-09-17 08:10:46 -07:00
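The flag simply bounds how many resource operations may be in flight at once; a generic sketch of running steps with a width of P (this is not the engine's actual Go scheduler):

    // Run async steps with at most `parallel` in flight; parallel=1 reproduces
    // the serialized default described above.
    async function runSteps(steps: (() => Promise<void>)[], parallel: number): Promise<void> {
        let next = 0;
        const workers = Array.from({ length: Math.max(1, parallel) }, async () => {
            while (next < steps.length) {
                await steps[next++]();
            }
        });
        await Promise.all(workers);
    }
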
joeduffy 087deb7643 Add optional dependsOn to Resource constructors
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources.  We have an extremely strong
desire to resort to this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case.  Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.

This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106.  I also think there's a good chance we will flip
the default to serial execution, so that developers who don't benefit from
the parallelism don't need to worry about dependsOn
in awkward ways.  This tends to be the way most tools (like Make) operate.

This fixes pulumi/pulumi-fabric#335.
2017-09-15 16:38:52 -07:00
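A sketch of how the optional dependsOn parameter might be used to force an ordering that the property graph cannot see (the Resource shape and resource names here are hypothetical):

    // Minimal Resource base class accepting an optional dependsOn list.
    class Resource {
        constructor(
            public readonly name: string,
            public readonly props: Record<string, unknown>,
            dependsOn?: Resource[],
        ) {
            // The real runtime records these edges in the dependency DAG; here we
            // just note the forced ordering.
            for (const dep of dependsOn ?? []) {
                console.log(`${name} must be created after ${dep.name}`);
            }
        }
    }

    // An API Gateway deployment that must wait for an integration, even though
    // no property of one references the other.
    const integration = new Resource("my-integration", { httpMethod: "GET" });
    const deployment = new Resource("my-deployment", { stageName: "prod" }, [integration]);
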
joeduffy 94207acf65 Upgrade to TypeScript ^2.5.2 2017-09-14 15:02:41 -07:00
joeduffy 4c781da93b Add instructions for make configure
And also move the Node.js SDK-specific parts into the sdk/nodejs/ directory.
2017-09-11 15:17:11 -07:00
joeduffy ac786ed2c9 Clean up the READMEs 2017-09-11 13:18:09 -07:00
Luke Hoban 4704ac75fc Use curl instead of wget
Since wget is not installed by default on MacOS.
2017-09-11 09:33:54 -07:00
joeduffy cf7ba79f81 Skip __awaiter this captures 2017-09-10 09:22:04 -07:00
joeduffy 443e1b9053 Use hasOwnProperty in case e.json is undefined 2017-09-10 08:24:02 -07:00
joeduffy 8ce07617c9 Implement recursive closure captures
This change implements recursive closure captures.  This permits
cases like the following

    {
        function f() { g(); }
        function g() { f(); }
    }

and the slightly more useful

    class C {
        constructor() {
            this.x = 42;
            this.f = () => this.x;
        }
    }

To do this requires caching the environment objects and permitting
cycles in the resulting environment graph.  The closure emitter code
already knows how to handle this.

In addition, we must mark captures of `this` as free variables.

This resolves pulumi/pulumi-fabric#333.
2017-09-10 07:40:53 -07:00
joeduffy 07f2c92c84 Add a custom unhandled promise error handler 2017-09-09 18:10:13 -07:00
joeduffy 1a64fc4bf3 Keepalive the RPC client until logs finish
This ensures RPC channels stay alive until logs finish.  It also
makes provisions for logs that come in *after* shutdown has begun,
but before it has finished, by observing that the keepalive promise
has changed between the time of initiating the callback and running it.
2017-09-09 14:09:21 -07:00
joeduffy 871943abfc Dial back the debug output slightly 2017-09-09 13:49:50 -07:00
joeduffy f9995159c6 Fix a handful of things, mostly logging
* Initialize the diagnostics logger with opts.Debug when doing
  a Deploy, like we do for Plan.

* Don't spew leaked promises if there were Log.errors.

* Serialize logging RPC calls so that they can't appear out of order.

* Print stack traces in more places and, in particular, remember
  the original context for any errors that may occur asynchronously,
  like resource registration and calls to mapValue.

* Include origin stack traces generally in more error messages.

* Add some more mapValue test cases.

* Only undefined-propagate mapValue values during dry-runs.
2017-09-09 13:43:51 -07:00
joeduffy f75b465052 Add some contextual error information 2017-09-09 11:19:35 -07:00
joeduffy 0147d487bd Serialize all resource operations
This change serializes all resource operations.  Please see
pulumi/pulumi#335 for more details.  In a nutshell, there are
resources that have implicit hidden dependencies and now that
the runtime is fully asynchronous, we are tripping over problems
left and right (even worse, they are non-deterministic).  All
of the problems have been in the AWS API Gateway resources;
until we come up with a holistic solution here, serializing all
calls should make things more stable in the interim.
2017-09-09 10:32:25 -07:00
joeduffy 8aba3aae12 Upgrade gRPC to 1.6.0; use full addresses
This change upgrades gRPC to 1.6.0 to pick up a few bug fixes.

We also use the full address for gRPC endpoints, including the
interface name, as otherwise we pick the wrong interface on Linux.
2017-09-09 07:37:10 -07:00
joeduffy 67e5750742 Fix a bunch of Linux issues
There's a fair bit of clean up in here, but the meat is:

* Allocate the language runtime gRPC client connection on the
  goroutine that will use it; this eliminates race conditions.

* The biggie: there *appears* to be a bug in gRPC's implementation
  on Linux, where it doesn't implement WaitForReady properly.  The
  behavior I'm observing is that RPC calls will not retry as they
  are supposed to, but will instead spuriously fail during the RPC
  startup.  To work around this, I've added manual retry logic in
  the shared plugin creation function so that we won't even try
  to use the client connection until it is in a well-known state.
  pulumi/pulumi-fabric#337 tracks getting to the bottom of this and,
  ideally, removing the workaround.

The other minor things are:

* Separate run.js into its own module, so it doesn't include
  index.js and do a bunch of random stuff it shouldn't be doing.

* Allow run.js to be invoked without a --monitor.  This makes
  testing just the run part of invocation easier (including
  config, which turned out to be super useful as I was debugging).

* Tidy up some messages.
2017-09-08 15:11:09 -07:00
joeduffy 6aae028768 Move language host into bin/ 2017-09-08 06:13:09 -07:00
joeduffy 8180914f83 Don't keep the message loop alive during logging 2017-09-07 21:14:29 -07:00
joeduffy 4e96400c9e Only print leaks on successful exits 2017-09-07 15:19:08 -07:00
joeduffy a5a6c79925 Keep RPC connections alive for as long as we need them
The change to tear down RPC connections after the program exits --
to fix problems on Linux presumably due to the way libuv is implemented --
unfortunately introduces nondeterminism and overzealous termination that
can happen at inopportune times.  Instead, we need to wait for the current
RPC queue to drain.  To fix this, we'll maintain a list of currently active
RPC calls and, only once they have completed, will we close the clients.
2017-09-07 14:50:17 -07:00
joeduffy 88a87569f5 Link the bin/ directory
This moves us closer to what we'll have with real NPM packages.
2017-09-07 12:43:12 -07:00
joeduffy b23338d4d1 Disconnect from the host/engine properly 2017-09-07 12:33:43 -07:00
joeduffy dcefa4a9d4 Close gRPC client connections
This change closes the gRPC client connections, as they keep the
Node.js message loop alive on Linux (but, strangely, not Mac;
regardless, a good thing to do anyway...)
2017-09-07 08:32:36 -07:00
joeduffy 6147afb7d1 Fix cp command on Linux 2017-09-07 07:36:10 -07:00
joeduffy 470a519057 Add Promises leak and hang detection
We have an issue in the runtime right now where we serialize closures
asynchronously, meaning we make it possible to form cycles between
resource graphs (something that ought to be impossible in our model,
where resources are "immutable" after creation and cannot form cycles).

Let me tell you a tale of debugging this ...

Well, no, let's not do that.  But thankfully I've left behind some
little utilities that might make debugging such a thing easier down
the road.  Namely:

* By default, most of our core runtime promises leverage a leak handler
  that will log an error message should the process exit with certain
  critical unresolved promises.  This error message will include some
  handy context (like whether it was an input promise) as well as a
  stack trace for its point of creation.

* Optionally, with a flag in runtime/debuggable.ts, you may wire up
  a hang detector, for situations where we may want to detect this
  situation sooner than process exit, using the regular message loop.
  This uses a defined timeout, prints the same diagnostics as the
  leak detector when a hang is detected, and is disabled by default.
2017-09-06 18:35:20 -07:00
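A simplified sketch of the leak-handler idea: remember a creation stack for each tracked promise and report any still unresolved when the process exits (the helper name and shape are assumptions, not the actual runtime/debuggable.ts API):

    // Track critical promises along with where they were created.
    const pending = new Map<Promise<unknown>, string>();

    function debuggablePromise<T>(p: Promise<T>, label: string): Promise<T> {
        const stack = new Error(`leaked promise: ${label}`).stack ?? label;
        pending.set(p, stack);
        p.finally(() => pending.delete(p)).catch(() => { /* surfaced elsewhere */ });
        return p;
    }

    process.on("exit", () => {
        for (const stack of pending.values()) {
            console.error("promise leaked (never resolved):\n" + stack);
        }
    });
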
joeduffy 93743733fb Explicitly serialize output properties in closures 2017-09-06 14:51:00 -07:00
joeduffy aefe297aa1 Harden dependent resolutions
This fixes a few problems with dependent resolutions and hardens
even more promises-related error paths, so we swallow precisely zero
errors (or at least we hope so).  This also digs through multi-level
chains of promises and computed properties as needed for nested mapValues.
2017-09-06 14:29:17 -07:00
joeduffy 397fea5720 Permit undefined values to flow through 2017-09-06 09:39:16 -07:00
joeduffy d8d94d1df0 Harden error paths and improve messages 2017-09-06 09:36:28 -07:00
joeduffy 7e5b6a564c Let assets/archives contain computeds 2017-09-06 08:59:23 -07:00
joeduffy ca149316fc Block resource creations within mapValue 2017-09-06 08:49:20 -07:00
joeduffy 240b54b5be Add typings and tests for mapValues that return computeds 2017-09-06 08:28:11 -07:00
joeduffy 0f08ef3cda Improve mapValue: log errors, permit Computed<U> returns 2017-09-06 08:10:30 -07:00
joeduffy 6630de503c Support capturing Computed<T>s and Promise<T>s
This change adds support for awaiting any Computed<T> and Promise<T>s
that were captured inside of a function's closure.  This preserves our
ability to capture, for example, resource state that ends up getting
serialized as the final resource state, rather than a snapshot of the
(mostly unresolved) resource state at the time of serialization.
2017-09-06 07:36:19 -07:00
joeduffy cc9a607f01 Move environment entry serialization into JavaScript
This change moves the environment entry serialization logic into
JavaScript, where it's a bit easier to author and maintain.  We
also switch to using Object.keys, so that we only walk the enumerable
properties of objects (to avoid internal member functions and to
generally leverage our current style of writing code).  This is
just a temporary stopgap until we figure out more rigorous semantics
for what it means to serialize entire objects ...
2017-09-05 16:57:23 -07:00
joeduffy fc236ec0b2 Override toString from Property
This is mostly just for debugging purposes, but hopefully makes it
a little clearer that you've done something wrong, vs "[object Object]".
2017-09-05 15:51:05 -07:00
joeduffy 726e48e094 Add an extra test for nested functions 2017-09-05 15:50:47 -07:00
joeduffy 3164572b6e Fix some free variable capture logic
* Use `global.hasOwnProperty(ident)`, rather than `global[ident] !== undefined`,
  to avoid classifying references to globals as free variables.  Surprise(!!),
  the prior logic wouldn't work for `undefined` itself... 😒

* Expand this check to include the built-in Node.js module variables, namely
  `__dirname`, `__filename`, `exports`, `module`, and `require`, so that
  references to them don't get classified as serializable free variables either.

* Place catch variables in scope, so that `catch (err) { ... }` won't yield
  free variables for references to `err` within `...`.

* Place recursive function definitions into the top-level `var`-like scope of
  variables so that we don't consider references to them free.

* Harden all error pathways in the native C++ add-on so that we terminate
  anytime an exception is in-flight, rather than limping along and making
  things worse...
2017-09-05 15:21:14 -07:00
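A sketch of the first two filtering rules above, checking a candidate identifier against the global object and the CommonJS module-scope variables (simplified; the real logic lives alongside the closure serializer):

    // Variables present in every CommonJS module scope; references to them must
    // not be classified as serializable free variables.
    const nodeModuleGlobals = new Set(["__dirname", "__filename", "exports", "module", "require"]);

    function isCandidateFreeVariable(ident: string): boolean {
        // hasOwnProperty rather than `global[ident] !== undefined`, so that a
        // reference to `undefined` itself is still recognized as a global.
        if (Object.prototype.hasOwnProperty.call(global, ident)) {
            return false;
        }
        return !nodeModuleGlobals.has(ident);
    }
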
joeduffy 0a78ef0743 Properly report closes due to signals 2017-09-05 12:01:55 -07:00
joeduffy e3a6695399 Depend only on vendored protos 2017-09-05 11:52:33 -07:00
joeduffy 8d3708f34d Use portable cps 2017-09-05 11:39:32 -07:00
joeduffy 4b2a40056e Remove proto/ from sdk/nodejs/ 2017-09-05 11:39:10 -07:00
joeduffy 8826c08116 Fix linting glob 2017-09-05 11:24:38 -07:00
joeduffy d3bd43fea9 Rename PropertyValue<T> to MaybeComputed<T>
As I started rolling this out, I realized that end user code actually
has to use this type sometimes, and that the current names are inconsistent
after eschewing Property<T> in favor of Computed<T>.  The new names read better.
2017-09-05 11:14:28 -07:00
joeduffy a1ab56fc28 Prettify properties
This change makes a few simplifications to how properties are exposed in
the system, mostly in the name of usability, but also to feel a bit more
like "idiomatic JavaScript".  Namely:

* Rename `then` to `mapValue`.  This hopefully helps to suggest that this
  is meant for a dataflow style of programming.

* Move Property<T> into the runtime module, and remove PropertyState<T>,
  collapsing back down to a single type.  This also eliminates some of the
  messy internal runtime casting, accessing of internal members, etc.

* Export a Computed<T> interface from the root of the module.  This is
  the entirety of the public-facing surface area for properties, and
  exposes that single `mapValue` member function.  The internal runtime
  logic understands how to handle Property<T>s specifically in addition
  to Computed<T>s more generally (in case someone writes their own).
2017-09-05 10:55:09 -07:00
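The public surface described above is small; a sketch of the shape (the real runtime's Property<T> also handles values that are not yet known, and mapValue may additionally return a Computed<U>):

    // The entire public-facing surface area for properties: one member function.
    interface Computed<T> {
        mapValue<U>(callback: (v: T) => U): Computed<U>;
    }

    // A trivial implementation over an already-known value, just to show the shape.
    class Known<T> implements Computed<T> {
        constructor(private readonly value: T) {}
        mapValue<U>(callback: (v: T) => U): Computed<U> {
            return new Known(callback(this.value));
        }
    }

    const doubled = new Known(21).mapValue(v => v * 2);   // a Computed<number> holding 42
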
joeduffy 2e824c0ba5 Reject all but Node.js 6.10.x 2017-09-05 10:08:20 -07:00
joeduffy f2d53459eb Add the notion of stable states
If a resource's planning operation is to do nothing, we can safely
assume that all of its properties are stable.  This can be used during
planning to avoid cascading updates that we know will never happen.
2017-09-05 10:01:00 -07:00
joeduffy 2a22a71116 Tidy up resource properties
This changes a few aspects of resource properties:

* Move all runtime-related goo into the runtime module, in an
  internal PromiseState class.  This encapsulates the internal
  state transitions and protects against misuse.  It also allows
  us to clean up the public API for the Property<T> type so that
  it's entirely suitable for external usage.

* Track input and output property values distinctly.  It turns
  out we want to key off events differently.  For example, to marshal
  property values to a resource provider, we only care about the
  inputs.  For final property values that are used in, say, thens
  or as inputs to other properties, we want the output property value.

* Be more precise about when an output is truly final, and known, or
  unknown due to planning/dry-runs.  Note that this does mean that
  we'll encounter unknown values more frequently because, aside from
  IDs and URNs, we can't say for sure that arbitrary properties will never
  change post-creation.  We have ideas on how to denote this; see
  pulumi/pulumi-fabric#330 for more details.
2017-09-05 09:31:03 -07:00
joeduffy d7c90f12a8 Use yarn to run subcommands 2017-09-04 11:35:21 -07:00
joeduffy b80b6afcf1 Lint the test files 2017-09-04 11:35:21 -07:00
joeduffy 7c7610848f Rename asset classes
This change renames String, File, and Remote to StringAsset, FileAsset,
and RemoteAsset, largely to avoid conflicting with the built-in JavaScript
String type, but also because it mirrors our Archive naming strategy.
2017-09-04 11:35:21 -07:00
joeduffy f3cf73d790 Change plugin prefixes to "pulumi-" 2017-09-04 11:35:21 -07:00
joeduffy ee7fc0a8c5 Fix a few things in the SDK
This fixes a few things in the SDK preventing deployments (versus plans):

* Don't fully resolve when a link resolves.  This will be handled during
  the final completion of the resource state.

* Skip "id" and "urn" for property resolution, since they are handled
  explicitly based on the RPC messages.  The "id" is often in the response
  payload because Terraform stores it as a property.  We don't need it.

* Lazily allocate Property<T> objects if necessary when the response
  from the resulting resource operation comes back.

* Improve a few error messages.
2017-09-04 11:35:21 -07:00
joeduffy f718ab6501 Add a runtime.Log class
This change adds the ability to perform runtime logging, including
debug logging, that wires up to the Pulumi Fabric engine in the usual
ways.  Most stdout/stderr will automatically go to the right place,
but this lets us add some debug tracing in the implementation of the
runtime itself (and should come in handy in other places, like perhaps
the Pulumi Framework and even low-level end-user code).
2017-09-04 11:35:21 -07:00
joeduffy 311550b5e9 Don't copy .node-gyp innards
We don't actually need to copy the headers, because the include
path order for the GYP-generated project files will include them
in the correct order.  This simplifies the script and ordering.
2017-09-04 11:35:21 -07:00
joeduffy 3ff10edcc4 Add a make configure target
This change adds a `make configure` target, which handles preparing
the environment for building the project.  This includes existing
steps, like dep ensure and yarn installing the Node.js SDK NPM
dependencies, and also includes downloading the right Node.js/V8
includes, putting them in the right place, and then generating the
appropriate node-gyp project files that reference those includes.
2017-09-04 11:35:21 -07:00
joeduffy d7688da5e3 Fix a few minor pathing things 2017-09-04 11:35:21 -07:00
joeduffy 3427647f93 Implement free variable calculations
This change implements free variable calculations and wires it up
to closure serialization.  This is recursive, in the sense that
the serializer may need to call back to fetch free variables for
nested functions encountered during serialization.

The free variable calculation works by parsing the serialized
function text and walking the AST, applying the usual scoping rules
to determine what is free.  In particular, it respects nested
function boundaries, and rules around var, let, and const scoping.

We are using Acorn to perform the parsing.  I'd originally gone
down the path of using V8, so that we have one consistent parser
in the game, however unfortunately neither V8's parser nor its AST
is a stable API meant for 3rd parties.  Unlike the existing internal
V8 dependencies, this one got very deep very quickly, and I became
nervous about maintaining all those dependencies.  Furthermore,
by doing it this way, we can write the free variable logic in
JavaScript, which means one fewer C++ component to maintain.

This also includes a fairly significant amount of testing, all
of which passes! 🎉
2017-09-04 11:35:21 -07:00
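A grossly simplified sketch of that approach using Acorn: parse the serialized function text, collect identifiers that are used, subtract the ones that are declared, and treat the remainder as free. The real calculation also honors let/const block scoping, nested function boundaries, catch variables, and the global/module filters mentioned elsewhere in this log.

    import * as acorn from "acorn";
    import * as walk from "acorn-walk";

    function freeVariables(funcText: string): Set<string> {
        const ast = acorn.parse(`(${funcText})`, { ecmaVersion: 2017 });
        const declared = new Set<string>();
        const used = new Set<string>();

        walk.ancestor(ast as any, {
            Identifier(node: any, _state: any, ancestors: any[]) {
                const parent = ancestors[ancestors.length - 2];
                // Skip `obj.prop` member names and object literal keys.
                if (parent.type === "MemberExpression" && !parent.computed && parent.property === node) return;
                if (parent.type === "Property" && !parent.computed && parent.key === node) return;
                used.add(node.name);
            },
            VariableDeclarator(node: any) {
                if (node.id.type === "Identifier") declared.add(node.id.name);
            },
            FunctionDeclaration(node: any) {
                if (node.id) declared.add(node.id.name);
                for (const p of node.params) if (p.type === "Identifier") declared.add(p.name);
            },
            FunctionExpression(node: any) {
                for (const p of node.params) if (p.type === "Identifier") declared.add(p.name);
            },
            ArrowFunctionExpression(node: any) {
                for (const p of node.params) if (p.type === "Identifier") declared.add(p.name);
            },
        });

        return new Set([...used].filter(name => !declared.has(name)));
    }

    // e.g. freeVariables("function () { return x + y; }") yields { "x", "y" }.
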
joeduffy 97c5f0a568 Take an initial stab at closure serialization
This change contains an initial implementation of closure serialization
built atop V8, rather than our own custom runtime.  This requires that
we use a Node.js dynamic C++ module, so that we can access the V8
APIs directly.  No build magic is required beyond node-gyp.

For the most part, this was straightforward, except for one part: we
have to use internal V8 APIs.  This is required for two reasons:

1) We need access to the function's lexical closure environment, so
   that we may look up closure variables.  Although there is a
   tantalizingly-close v8::Object::CreationContext, its implementation
   intentionally pokes through closure contexts in order to recover
   the Function constructor context instead.  That's not what we
   want.  We want the raw, unadulterated Function::context.

2) We need to control the lexical lookups of free variables so that
   they can look past chained contexts, lexical contexts, withs, and
   eval-style context extensions.  Simply running a v8::Script, or
   simulating an eval, doesn't do the trick.  Hence, we need to access
   the unexported v8::internal::Context::Lookup function.

There is a third reason which is not yet implemented: free variable
calculation.  I could use Esprima, or do my own scanner for free
variables, but I'd prefer to simply use the V8 parser so that we're
using the same JavaScript parser across all components.  That too
is not part of the v8.h API, so we'll need to crack it open more.

To be clear, these are still exported public APIs, in proper headers
that are distributed with both Node and V8.  They simply aren't part
of the "stable" v8.h surface area.  As a result, I do expect that
maintaining this will be tricky, and I'd like to keep exploring how
to do this without needing the internal dependency.  For instance,
although this works with node-gyp just fine, we will probably be
brittle across versions of Node/V8, when the internal APIs might be
changing.  This will introduce unfortunate versioning headaches (all,
hopefully and thankfully, caught at compile-time).
2017-09-04 11:35:21 -07:00
joeduffy d8635fd4f3 Move modules to package root
The organization of packages underneath lib/ breaks the easy consumption
of submodules, a la

    import {FileAsset} from "@pulumi/pulumi-fabric/asset";

We will go back to having everything hanging off the module root directory.
2017-09-04 11:35:21 -07:00
joeduffy c84c43d6c5 Warn if the monitor is missing
This change stops throwing an error if the resource monitor hasn't been
configured, and instead emits a warning.  This will only go out a single
time, and can be suppressed by setting a config flag, but enables running
Pulumi programs directly via `node`, which can be useful for testing.
Of course, when this is done, allocating resource objects has no effect.
2017-09-04 11:35:21 -07:00
joeduffy 56c0392ba9 Add special serialization for some properties
This change rearranges serialization of properties in a few ways:

* Mirror the asset/archive serialization that we use in the fabric
  itself, so that we can recover the nature of these objects on
  both sides of the RPC boundary.

* Wait for promises to settle before marshaling resource properties.
  This allows for I/O in creating a resource's state.  Note that
  we of course still do not block awaiting resolution of resource
  output properties during dry runs (planning), because they will
  never resolve.  This is distinctly different from promises.

* Add tests for the above.
2017-09-04 11:35:21 -07:00
joeduffy cac7d905a8 Don't permit undefined for all PropertyValue<T>s
The definition of PropertyValue<T> should not imply undefined as a legal
value, since this depends entirely on whether it is a required or optional
property.  The inner guts of the runtime logic that populates properties,
of course, needs to permit undefined, but this shouldn't leak into the
user model.  This change thus eliminates undefined from PropertyValue<T>'s
definition, and pushes it into the few places where undefined is actually legal.
2017-09-04 11:35:21 -07:00
joeduffy b827f1e95c Add config helpers
This change adds getX and requireX helper functions for configuration,
making it easy for packages to convert from Lumi's current weakly typed
config system, where everything is a string, into the internal JavaScript
representation, which is often a boolean, number, or complex array/object.
2017-09-04 11:35:21 -07:00
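A sketch of what such helpers might look like over a string-valued config bag (the helper names and config source here are assumptions):

    // Lumi config values arrive as strings; helpers convert them to the types
    // programs actually want.
    const config: Record<string, string> = { "mypkg:replicas": "3", "mypkg:debug": "true" };

    function getNumber(key: string): number | undefined {
        const v = config[key];
        return v === undefined ? undefined : Number(v);
    }

    function requireBoolean(key: string): boolean {
        const v = config[key];
        if (v === undefined) {
            throw new Error(`missing required configuration variable '${key}'`);
        }
        return v === "true";
    }

    console.log(getNumber("mypkg:replicas"));   // 3
    console.log(requireBoolean("mypkg:debug")); // true
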
joeduffy 9f160a7f91 Configure providers at well-defined points
As explained in pulumi/pulumi-fabric#293, we were a little ad-hoc in
how configuration was "applied" to resource providers.

In fact, config wasn't ever communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and that's that, and where generally
speaking resource providers never communicate directly with the language
runtime, we need to take a different approach.

As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables filtered to
just that provider's package.  This fixes pulumi/pulumi#293.
2017-09-04 11:35:21 -07:00
joeduffy 70d0fac1c0 Simplify resource provider RPC interface
This change simplifies the provider RPC interface slightly:

1) Eliminate Get.  We really don't need it anymore.  There are
   several possibly-interesting scenarios down the road that may
   demand it, but when we get there, we can consider how best to
   bring this back.  Furthermore, the old-style Get remains mostly
   incompatible with Terraform anyway.

2) Pass URNs, not type tokens, across the RPC boundary.  This gives
   the provider access to more interesting information: the type,
   still, but also the name (which is no longer an object property).
2017-09-04 11:35:21 -07:00
joeduffy 7c848bfff4 Add config to the basic/minimal test 2017-09-04 11:35:21 -07:00
joeduffy 1df1b6d572 Get integration tests passing
This makes a few tweaks to get the integration tests passing:

* Add `runtime: nodejs` to the minimal example's `Lumi.yaml` file.

* Remove usage of `@lumi/lumirt { printf }` and just use `console.log`.

* Remove calls to `lumijs` in the integration test framework and
  the minimal example's package.json.  Instead, we just run
  `yarn run build`, which itself internally just invokes `tsc`.

* Add package validation logic and eliminate the pkg/compiler/metadata
  library, in favor of the simpler code in pkg/engine.

* Simplify the Node.js langhost plugin CLI, and simply take an
  argument rather than a mix of required and optional --flags.

* Use a default path of "." if the program path isn't provided.  This
  is a legal scenario if you've passed a pwd and just want to load
  the package's default module ("./index.js" or whatever main says).

* Add an executable script, lumi-langhost-nodejs, that fires up the
  `bin/cmd/langhost/index.js` file to serve the Node.js language plugin.
2017-09-04 11:35:21 -07:00
joeduffy 9599ea2e55 Get planning engine unit tests running again
We now build and run cleanly locally (for unit tests).  The
integration tests are still on the floor at the moment.
2017-09-04 11:35:21 -07:00
joeduffy f189c40f35 Wire up Lumi to the new runtime strategy
🔥 🔥 🔥  🔥 🔥 🔥

Getting closer on #311.
2017-09-04 11:35:21 -07:00
joeduffy dc3bf4bffb Regenerate Protobufs 2017-09-04 11:35:20 -07:00
joeduffy 9ffbb8d755 Eliminate lumi, lumijs, and lumirt packages
This change gets rid of the old-style @pulumi/lumi, @pulumi/lumijs,
and @pulumi/lumirt packages.  Instead, we have the new Node.js SDK.
2017-09-04 11:35:20 -07:00
joeduffy c6c74976ec Encapsulate Property creation
This changes Resource's constructor slightly, to take a map of
PropertyValues, rather than Properties.  This simplifies the interface,
lets us hide the creation of Properties (meaning we can also hide the
resolution capabilities entirely), and also avoids mistakes like
accidentally passing values and/or other resource properties directly.
2017-09-04 11:35:20 -07:00
joeduffy 2957314c18 Fix test case typos 2017-09-04 11:35:20 -07:00
joeduffy 2657035e5e Add the notion of "dry runs" (plans)
This change introduces the notion of a "dry run" into the property
serialization logic, since this controls whether we wait for dependent
linked property values to arrive or not.  It also changes the test
harness to run all tests both ways: once in planning mode (when properties
will show up as "unknown") and a second time in deployment mode (when
properties will have settled to their final values).
2017-09-04 11:35:20 -07:00
joeduffy 183fb328e4 Add a test case for linked properties 2017-09-04 11:35:20 -07:00
joeduffy 42ec6bcaf4 Add a 10x complex property test 2017-09-04 11:35:20 -07:00
joeduffy 695b1ba141 Test input/output properties
This adds a test case for the simple input/output property cases.

In particular, it neither covers "linked" properties resulting from
dataflow nor promise properties resulting from I/O operations.  But
it does test many basic JSON input and output cases.

Also fixes a few things:

* Property's `resolver` property must be set to undefined to prevent
  multiple resolutions.  (This is still in flux and I'm sure will
  change shape before being settled.)

* Use `this.link`, not `this.linked`, to tell if a property is linked.

* Push all property initialization down into the
  `runtime.registerResource` routine.  In practice, the old pattern
  didn't really work, since `this` is inaccessible prior to `super(..)`.

* Eliminate our custom marshaling and unmarshaling routines in favor
  of the nifty built-in gRPC ones.
2017-09-04 11:35:20 -07:00
joeduffy 4581469d80 Test more resource creation
This adds some additional test coverage for creation of resources.
2017-09-04 11:35:20 -07:00
joeduffy 200fecbbaa Implement initial Lumi-as-a-library
This is the initial step towards redefining Lumi as a library that runs
atop vanilla Node.js/V8, rather than as its own runtime.

This change is woefully incomplete, but it includes some of the more
stable pieces of my current work-in-progress.

The new structure is that within the sdk/ directory we will have a client
library per language.  This client library contains the object model for
Lumi (resources, properties, assets, config, etc), in addition to the
"language runtime host" components required to interoperate with the
Lumi resource monitor.  This resource monitor is effectively what we call
"Lumi" today, in that it's the thing orchestrating plans and deployments.

Inside the sdk/ directory, you will find nodejs/, the Node.js client
library, alongside proto/, the definitions for RPC interop between the
different pieces of the system.  This includes existing RPC definitions
for resource providers, etc., in addition to the new ones for hosting
different language runtimes from within Lumi.

These new interfaces are surprisingly simple.  There is effectively a
bidirectional RPC channel between the Lumi resource monitor, represented
by the lumirpc.ResourceMonitor interface, and each language runtime,
represented by the lumirpc.LanguageRuntime interface.

The overall orchestration goes as follows:

1) Lumi decides it needs to run a program written in language X, so
   it dynamically loads the language runtime plugin for language X.

2) Lumi passes that runtime a loopback address to its ResourceMonitor
   service, while language X will publish a connection back to its
   LanguageRuntime service, which Lumi will talk to.

3) Lumi then invokes LanguageRuntime.Run, passing information like
   the desired working directory, program name, arguments, and optional
   configuration variables to make available to the program.

4) The language X runtime receives this, unpacks it and sets up the
   necessary context, and then invokes the program.  The program then
   calls into Lumi object model abstractions that internally communicate
   back to Lumi using the ResourceMonitor interface.

5) The key here is ResourceMonitor.NewResource, which Lumi uses to
   serialize state about newly allocated resources.  Lumi receives these
   and registers them as part of the plan, doing the usual diffing, etc.,
   to decide how to proceed.  This interface is perhaps one of the
   most subtle parts of the new design, as it necessitates the use of
   promises internally to allow parallel evaluation of the resource plan,
   letting dataflow determine the available concurrency.

6) The program exits, and Lumi continues on its merry way.  If the program
   fails, the RunResponse will include information about the failure.

Due to (5), all properties on resources are now instances of a new
Property<T> type.  A Property<T> is just a thin wrapper over a T, but it
encodes the special properties of Lumi resource properties.  Namely, it
is possible to create one out of a T, other Property<T>, Promise<T>, or
to freshly allocate one.  In all cases, the Property<T> does not "settle"
until its final state is known.  This cannot occur before the deployment
actually completes, and so in general it's not safe to depend on concrete
resolutions of values (unlike ordinary Promise<T>s which are usually
expected to resolve).  As a result, all derived computations are meant to
use the `then` function (as in `someValue.then(v => v+x)`).

Although this change includes tests that may be run in isolation to test
the various RPC interactions, we are nowhere near finished.  The remaining
work primarily boils down to three things:

    1) Wiring all of this up to the Lumi code.

    2) Fixing the handful of known loose ends required to make this work,
       primarily around the serialization of properties (waiting on
       unresolved ones, serializing assets properly, etc).

    3) Implementing lambda closure serialization as a native extension.

This ongoing work is part of pulumi/pulumi-fabric#311.
2017-09-04 11:35:20 -07:00
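A sketch of the Property<T> idea from (5) and the `then` dataflow style; the real type can also be created from a T, another Property<T>, or a Promise<T>, and distinguishes planning-time unknowns, all of which is omitted here:

    // A Property<T> wraps a value that may not settle until deployment completes.
    class Property<T> {
        private promise: Promise<T>;
        private resolveFn!: (v: T) => void;

        constructor() {
            // Freshly allocated: the value settles later, once the resource
            // monitor reports the final state back to the program.
            this.promise = new Promise<T>(resolve => { this.resolveFn = resolve; });
        }

        // Dataflow-style derivation: someValue.then(v => v + x) yields a new Property.
        then<U>(fn: (v: T) => U): Property<U> {
            const result = new Property<U>();
            this.promise.then(v => result.resolveFn(fn(v)));
            return result;
        }

        // Called by the runtime when the final value is known (never during planning).
        settle(v: T): void {
            this.resolveFn(v);
        }
    }

    const bucketName = new Property<string>();              // unknown until deployment
    const arn = bucketName.then(n => `arn:aws:s3:::${n}`);  // derived Property<string>
    bucketName.settle("my-bucket");                         // the engine settles it after creation
    arn.then(a => console.log(a));                          // prints the derived ARN
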
joeduffy 5fb014e53c Explicitly track default properties
This changes the RPC interfaces between Lumi and provider ever so
slightly, so that we can track default properties explicitly.  This
is required to perform accurate diffing between inputs provided by
the developer, inputs provided by the system, and outputs.  This is
particularly important for default values that may be indeterminate,
such as those we use in the bridge to auto-generate unique IDs.
Otherwise, we fail to reapply defaults correctly, and trick the
provider into thinking that properties changed when they did not.

This is a small step towards pulumi/lumi#306, in which we will defer
even more responsibility for diffing semantics to the providers.
2017-07-31 18:26:15 -07:00
joeduffy 00442b73b4 Alter the way unknown properties are serialized
This change serializes unknown properties anywhere in the entire
property structure, including deeply embedded inside object maps, etc.

This is now done in such a way that we can recover both the computed
nature of the serialized property, along with its expected eventual
type, on the other side of the RPC boundary.

This will let us have perfect fidelity with the new bridge's view on
computed properties, rather than special casing them on "one side".
2017-07-21 14:00:30 -07:00
joeduffy 4e02105355 Pass old state to the provider's API 2017-07-21 14:00:30 -07:00
joeduffy ae92e68902 Return state as part of Create and Update
As part of the bridge bringup, I've discovered that the property state
returned from Creates does *not* always equal the state that is then
read from calls to Get.  (I suspect this is a bug and that they should
be equivalent, but I doubt it's fruitful to try and track down all
occurrences of this; I bet it's widespread).  To cope with this, we will
return state from Create and Update, instead of issuing a call to Get.
This was a design we considered to start with and frankly didn't have
a super strong reason to do it the current way, other than that it seemed
elegant to place all of the Get logic in one place.

Note that providers may choose to return nil, in which case we will read
state from the provider in the usual Get style.
2017-07-21 14:00:29 -07:00
joeduffy 06ad983541 Add a ReadLocations engine-side RPC function
This adds a ReadLocations RPC function to the engine interface, alongside
the singular ReadLocation.  The plural function takes a single token that
represents a module or class and we will then return all of the module
or class (static) properties that are currently known.
2017-07-01 13:26:49 -07:00
joeduffy 2daea4c3d8 Clarify aspects of using the DCO 2017-06-26 14:46:34 -07:00
joeduffy 5362536396 Remove some obsolete names 2017-06-24 11:55:16 -07:00
joeduffy 3c1041af49 Update license headers 2017-06-23 14:53:41 -07:00
joeduffy d7093188f0 Introduce an interface to read config
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.

The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible.  We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
2017-06-20 19:45:07 -07:00
joeduffy d044720045 Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.

The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.

The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine.  It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.

There are two other sources, however.  First, a nullSource, which simply
refuses to create new objects.  This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment.  Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.

Boatloads of code is now changed and updated in the various CLI commands.

This further chugs along towards pulumi/lumi#90.  The end is in sight.
2017-06-13 07:10:13 -07:00
joeduffy 0a72d5360a Modify provider creates; use get for outs
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so.  As a result, this change takes an initial
whack at implementing Get on all existing AWS resources.  The Get API needs
to return a fully populated structure containing all inputs and outputs.

Believe it or not, this is actually part of pulumi/lumi#90.

This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.

This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties.  For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
2017-06-01 08:36:43 -07:00
joeduffy d79c41f620 Initial support for output properties (1 of 3)
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.

In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways.  Namely,
operations against Latent<T>s generally produce new Latent<U>s.

During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values.  An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange.  As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values.  My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.

For now, using an unresolved Latent<T> in a conditional will lead
to an error.  See pulumi/lumi#67.  Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support.  That is a missing 1/3.

Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values.  Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
2017-06-01 08:32:12 -07:00
joeduffy 4108c51549 Reclassify Lumi under the Apache 2.0 license
This is part of pulumi/lumi#147.
2017-05-18 14:51:52 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy 47ef3f673b Rename PreviewUpdate (again)
Unfortunately, this wasn't a great name.  The old one stunk, but the
new one was misleading at best.  The thing is, this isn't about performing
an update -- it's about NOT doing an update, depending on its return value.
Further, it's not just previewing the changes, it is actively making a
decision on what to do in response to them.  InspectUpdate seems to convey
this and I've unified the InspectUpdate and Update routines to take a
ChangeRequest, instead of UpdateRequest, to help imply the desired behavior.
2017-04-27 11:18:49 -07:00
joeduffy d6abea728c Add outputs to the Create provider's return
In order to support output properties (pulumi/coconut#90), we need to
modify the Create gRPC interface for resource providers slightly.  In
addition to returning the ID, we need to also return any properties
computed by the AWS provider itself.  For instance, this includes ARNs
and IDs of various kinds.  This change simply propagates the resources
but we don't actually support reading the outputs just yet.
2017-04-21 14:15:06 -07:00
joeduffy 0b6e262b46 Rename resource provider methods
This change renames two provider methods:

    * Read becomes Get.

    * UpdateImpact becomes PreviewUpdate.

These just read a whole lot nicer than the old names.
2017-04-20 14:09:00 -07:00
joeduffy bf8bf976de Add a cocogo SDK package
This change adds a rudimentary cocogo SDK package.  The only thing in
here is a cocogo.Resource type which will serve as the base marker for
all resource classes in IDL packages (see pulumi/coconut#133).
2017-04-15 07:47:10 -07:00
joeduffy 95f59273c8 Update copyright notices from 2016 to 2017 2017-03-14 19:26:14 -07:00
joeduffy 705880cb7f Add the ability to specify analyzers
This change adds the ability to specify analyzers in two ways:

1) By listing them in the project file, for example:

        analyzers:
            - acmecorp/security
            - acmecorp/gitflow

2) By explicitly listing them on the CLI, as a "one off":

        $ coco deploy <env> \
            --analyzer=acmecorp/security \
            --analyzer=acmecorp/gitflow

This closes out pulumi/coconut#119.
2017-03-11 10:07:34 -08:00
joeduffy 45064d6299 Add basic analyzer support
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119.  In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource.  The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure properties of them are correct, such as checking style,
security, etc. rules).  These run simultaneously with overall checking.

Analyzers are loaded as plugins just like providers are.  The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.

This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
2017-03-10 23:49:17 -08:00
joeduffy 6194a59798 Add a pre-pass to validate resources before creating/updating
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties.  This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caight early
before it's too late).  My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.

This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet.  I'll add a work item now for that...
2017-03-02 18:15:38 -08:00
joeduffy 523c669a03 Track which updates triggered a replacement
This change tracks which updates triggered a replacement.  This enables
better output and diagnostics.  For example, we now colorize those
properties differently in the output.  This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.
2017-03-02 15:24:39 -08:00
joeduffy bd613a33e6 Make replacement first class
This change, part of pulumi/coconut#105, rearranges support for
resource replacement.  The old model didn't properly account for
the cascading updates and possible replacement of dependencies.

Namely, we need to model a replacement as a creation followed by
a deletion, inserted into the overall DAG correctly so that any
resources that must be updated are updated after the creation but
prior to the deletion.  This is done by inserting *three* nodes
into the graph per replacement: a physical creation step, a
physical deletion step, and a logical replacement step.  The logical
step simply makes it nicer in the output (the plan output shows
a single "replacement" rather than the fine-grained outputs, unless
they are requested with --show-replace-steps).  It also makes it
easier to fold all of the edges into a single linchpin node.

As part of this, the update step no longer gets to choose whether
to recreate the resource.  Instead, the engine takes care of
orchestrating the replacement through actual create and delete calls.
2017-03-02 09:52:08 -08:00
joeduffy fe0bb4a265 Support replacement IDs
This change introduces a new RPC function to the provider interface;
in pseudo-code:

    UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap)
        (bool, PropertyMap, error)

Essentially, during the planning phase, we will consult each provider
about the nature of a proposed update.  This update includes a set of
old properties and the new ones and, if the resource provider will need
to replace the property as a result of the update, it will return true;
in general, the PropertyMap will eventually contain a list of all
properties that will be modified as a result of the operation (see below).

The planning phase reacts to this by propagating the change to dependent
resources, so that they know that the ID will change (and so that they
can recalculate their own state accordingly, possibly leading to a ripple
effect).  This ensures the overall DAG / schedule is ordered correctly.

This change is most of pulumi/coconut#105.  The only missing piece
is to generalize replacing the "ID" property with replacing arbitrary
properties; there are hooks in here for this, but until pulumi/coconut#90
is addressed, it doesn't make sense to make much progress on this.
2017-03-01 09:08:53 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy c120f62964 Redo object monikers
This change overhauls the way we do object monikers.  The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions.  The new approach mixes some amount of "automatic scoping"
plus some "explicit naming."  Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.

Each moniker has four parts:

    <Namespace>::<AllocModule>::<Type>::<Name>

wherein each element is the following:

    <Namespace>     The namespace being deployed into
    <AllocModule>   The module in which the object was allocated
    <Type>          The type of the resource
    <Name>          The assigned name of the resource

The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.

The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created.  In the future, we may wish to
extend this to also track the module under evaluation.  (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)

The <Name> warrants more discussion.  The resource provider is consulted
via a new gRPC method, Name, that fetches the name.  How the provider
does this is entirely up to it.  For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`; and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).

This should overall produce better results with respect to moniker
collisions, ability to match resources, and the usability of the system.
2017-02-24 14:50:02 -08:00
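A sketch of assembling a moniker from those four parts (the delimiter follows the format above; the example values are purely illustrative):

    // <Namespace>::<AllocModule>::<Type>::<Name>
    function makeMoniker(namespace: string, allocModule: string, type: string, name: string): string {
        return [namespace, allocModule, type, name].join("::");
    }

    // e.g. "prod::index.ts::aws:ec2/securityGroup::my-sg"
    console.log(makeMoniker("prod", "index.ts", "aws:ec2/securityGroup", "my-sg"));
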
joeduffy 09c01dd942 Implement resource provider plugins
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.

In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them.  It will be
a policy decision in server scenarios how much state to share and
between whom.  This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.

Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).

If we find a plugin, we will load it as a child process.

The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n").  This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).

Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client.  The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls.  For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
2017-02-19 11:08:06 -08:00
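A sketch of the host side of that handshake in Node terms (the real host is written in Go): spawn the plugin, read the port it prints as its first STDOUT line, then bind a gRPC client to that port.

    import { spawn } from "child_process";

    // Launch a resource provider plugin and resolve with the port it announces
    // on the first line of its STDOUT; everything after that line is free-form
    // logging output that the host may simply forward.
    function launchPlugin(binary: string): Promise<number> {
        return new Promise((resolve, reject) => {
            const child = spawn(binary);
            let buffered = "";
            const onData = (chunk: Buffer) => {
                buffered += chunk.toString();
                const newline = buffered.indexOf("\n");
                if (newline !== -1) {
                    child.stdout.off("data", onData);
                    resolve(parseInt(buffered.slice(0, newline), 10));
                }
            };
            child.stdout.on("data", onData);
            child.on("error", reject);
        });
    }

    // launchPlugin("mu-ressrv-aws").then(port => {
    //     // bind a gRPC resource provider client to 127.0.0.1:<port>
    // });
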
joeduffy d9ee2429da Begin resource modeling and planning
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.

It contains the following key abstractions:

* resource.Provider is a wrapper around the CRUD operations exposed by
  underlying resource plugins.  It will eventually defer to resource.Plugin,
  which itself defers -- over an RPC interface -- to the actual plugin, one
  per package exposing resources.  The provider will also understand how to
  load, cache, and overall manage the lifetime of each plugin.

* resource.Resource is the actual resource object.  This is created from
  the overall evaluation object graph, but is simplified.  It contains only
  serializable properties, for example.  Inter-resource references are
  translated into serializable monikers as part of creating the resource.

* resource.Moniker is a serializable string that uniquely identifies
  a resource in the Mu system.  This is in contrast to resource IDs, which
  are generated by resource providers and generally opaque to the Mu
  system.  See marapongo/mu#69 for more information about monikers and some
  of their challenges (namely, designing a stable algorithm).

* resource.Snapshot is a "snapshot" taken from a graph of resources.  This
  is a transitive closure of state representing one possible configuration
  of a given environment.  This is what plans are created from.  Eventually,
  two snapshots will be diffable, in order to perform incremental updates.
  One way of thinking about this is that a snapshot of the old world's state
  is advanced, one step at a time, until it reaches a desired snapshot of
  the new world's state.

* resource.Plan is a plan for carrying out desired CRUD operations on a target
  environment.  Each plan consists of zero-to-many Steps, each of which has
  a CRUD operation type, a resource target, and a next step.  This is an
  enumerator because it is possible the plan will evolve -- and introduce new
  steps -- as it is carried out (hence, the Next() method).  At the moment, this
  is linearized; eventually, we want to make this more "graph-like" so that we
  can exploit available parallelism within the dependencies.

There are tons of TODOs remaining.  However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.

This is part of marapongo/mu#38 and marapongo/mu#41.
2017-02-17 12:31:48 -08:00
joeduffy 53a568da87 Modify the ResourceProvider.Update API
This changes two aspects of the ResourceProvider.Update RPC API:

1. Update needs to return an ID, in case the resource had to be
   recreated in response to the request.

2. Include both the old and the new values for properties that are
   being updated.
2017-02-11 13:14:29 -08:00
joeduffy 11b7880547 Further reshuffle Protobufs; generate JavaScript code
After a bit more thinking, we will create new SDK packages for each
of the languages we wish to support writing resource providers in.
This is where the RPC goo will live, so I have created a new sdk/
directory, moved the Protobuf/gRPC definitions underneath sdk/proto/,
and put the generated code into sdk/go/ and sdk/js/.
2017-02-10 09:28:46 -08:00
joeduffy bf33605195 Rearrange the library code
This rearranges the library code:

* sdk/... goes away.

* What used to be sdk/javascript/ is now lib/mu/, an actual MuPackage
  that provides the base abstractions for all other MuPackages to use.

* lib/aws is the @mu/aws MuPackage that exposes all AWS resources.

* lib/mux is the @mu/x MuPackage that provides cross-cloud abstractions.
  A lot of what used to be in lib/mu goes here.  In particular, autoscaler,
  func, ..., all the "general purpose" abstractions, really.
2017-01-20 10:30:43 -08:00
joeduffy 729af81e44 Move all cloud switching to mu/x MuPackage
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level.  In fact, the whole code-generation
phase of the compiler was based on it.

In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.

Therefore, most of this stuff can go.  I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision.  For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.

I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc).  But for the time being, it's a distraction
getting the core model running.  I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers.  (Although we won't be using
CloudFormation, some of the name generation code might be useful.)  So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.
2017-01-20 09:46:59 -08:00
joeduffy e731229901 Make the Mu library a Node package; get it compiling
This change morphs the Mu library into a Node package and gets it
compiling against the latest AWS library and SDK changes.
2016-12-12 17:56:13 -08:00
joeduffy 6160d1a4c8 Export all submodules 2016-12-12 16:28:39 -08:00
joeduffy 25c8211703 Add a JSON-like suite of types 2016-12-12 16:26:56 -08:00
joeduffy 35694229ca Add a minimal JavaScript (TypeScript) SDK
This change introduces a basic JavaScript SDK (actually in TypeScript,
but consumable either way).  This is just scaffolding but provides the
minimal set of abstractions necessary to start writing real stacks.
2016-12-12 16:07:39 -08:00