Commit graph

8 commits

Author SHA1 Message Date
joeduffy
67e5750742 Fix a bunch of Linux issues
There's a fair bit of cleanup in here, but the meat is:

* Allocate the language runtime gRPC client connection on the
  goroutine that will use it; this eliminates race conditions.

* The biggie: there *appears* to be a bug in gRPC's implementation
  on Linux, where it doesn't implement WaitForReady properly.  The
  behavior I'm observing is that RPC calls will not retry as they
  are supposed to, but will instead spuriously fail during the RPC
  startup.  To work around this, I've added manual retry logic in
  the shared plugin creation function so that we won't even try
  to use the client connection until it is in a well-known state.
  pulumi/pulumi-fabric#337 tracks getting to the bottom of this and,
  ideally, removing the workaround.
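
Purely as an illustration of that pattern (the actual workaround lives in
the engine's Go plugin-creation code; the TypeScript below, including the
grpc package usage, helper name, and timeouts, is only a hedged sketch),
"poll until the channel is ready before issuing any RPCs" looks roughly
like this:

    import * as grpc from "grpc";

    // Wait until the client's channel is ready, retrying a bounded number
    // of times ourselves rather than relying on WaitForReady to do it.
    async function waitUntilReady(client: grpc.Client, attempts: number): Promise<void> {
        for (let i = 0; i < attempts; i++) {
            try {
                await new Promise<void>((resolve, reject) => {
                    const deadline = new Date(Date.now() + 1000); // 1s per attempt (arbitrary)
                    client.waitForReady(deadline, (err) => err ? reject(err) : resolve());
                });
                return; // the channel is ready; it's now safe to issue RPCs
            } catch {
                // not ready yet; loop around and try again
            }
        }
        throw new Error("gRPC channel never became ready");
    }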

The other minor things are:

* Separate run.js into its own module, so it doesn't include
  index.js and do a bunch of random stuff it shouldn't be doing.

* Allow run.js to be invoked without a --monitor.  This makes
  testing just the run part of invocation easier (including
  config, which turned out to be super useful as I was debugging).

* Tidy up some messages.
2017-09-08 15:11:09 -07:00
joeduffy
b23338d4d1 Disconnect from the host/engine properly 2017-09-07 12:33:43 -07:00
joeduffy
dcefa4a9d4 Close gRPC client connections
This change closes the gRPC client connections, as they keep the
Node.js message loop alive on Linux (but, strangely, not Mac;
regardless, a good thing to do anyway...)
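
For what it's worth, the close itself is just a call to close() on each
client; a hedged TypeScript sketch using the Node.js grpc package (the
client names here are assumptions):

    import * as grpc from "grpc";

    // Closing the clients releases their channels, so open sockets no longer
    // keep the Node.js event loop alive once we're finished with them.
    function disconnect(monitor: grpc.Client, engine: grpc.Client): void {
        monitor.close();
        engine.close();
    }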
2017-09-07 08:32:36 -07:00
joeduffy
e3a6695399 Depend only on vendored protos 2017-09-05 11:52:33 -07:00
joeduffy
f718ab6501 Add a runtime.Log class
This change adds the ability to perform runtime logging, including
debug logging, that wires up to the Pulumi Fabric engine in the usual
ways.  Most stdout/stderr will automatically go to the right place,
but this lets us add some debug tracing in the implementation of the
runtime itself (and should come in handy in other places, like perhaps
the Pulumi Framework and even low-level end-user code).
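
The exact surface of runtime.Log isn't spelled out above; a plausible usage
shape, with the import path and method names as assumptions, looks like:

    import * as runtime from "@pulumi/pulumi-fabric/runtime";  // hypothetical path

    // Debug tracing from inside the runtime; this flows to the engine through
    // the usual logging channels rather than only being printed to stdout.
    runtime.Log.debug("attaching to the resource monitor");
    runtime.Log.error("failed to attach to the resource monitor");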
2017-09-04 11:35:21 -07:00
joeduffy
d8635fd4f3 Move modules to package root
The organization of packages underneath lib/ breaks the easy consumption
of submodules, a la

    import {FileAsset} from "@pulumi/pulumi-fabric/asset";

We will go back to having everything hanging off the module root directory.
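
With everything hanging off the root, that same import presumably becomes:

    import {FileAsset} from "@pulumi/pulumi-fabric";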
2017-09-04 11:35:21 -07:00
joeduffy
2657035e5e Add the notion of "dry runs" (plans)
This change introduces the notion of a "dry run" into the property
serialization logic, since this controls whether we wait for dependent
linked property values to arrive or not.  It also changes the test
harness to run all tests both ways: once in planning mode (when properties
will show up as "unknown") and a second time in deployment mode (when
properties will have settled to their final values).
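
As a hedged sketch of what that switch might look like inside the property
serialization logic (the names and the unknown sentinel are assumptions,
not the actual code):

    const UNKNOWN = "<unknown>";  // hypothetical sentinel for unsettled values

    async function serializeProperty(value: Promise<any>, dryRun: boolean): Promise<any> {
        if (dryRun) {
            // Planning ("dry run"): dependent, linked values may not arrive until
            // a real deployment runs, so represent them as unknown rather than
            // blocking on them.
            return UNKNOWN;
        }
        // Deployment: values are expected to settle, so wait for the final value.
        return await value;
    }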
2017-09-04 11:35:20 -07:00
joeduffy
200fecbbaa Implement initial Lumi-as-a-library
This is the initial step towards redefining Lumi as a library that runs
atop vanilla Node.js/V8, rather than as its own runtime.

This change is woefully incomplete, but it includes some of the more
stable pieces of my current work-in-progress.

The new structure is that within the sdk/ directory we will have a client
library per language.  This client library contains the object model for
Lumi (resources, properties, assets, config, etc), in addition to the
"language runtime host" components required to interoperate with the
Lumi resource monitor.  This resource monitor is effectively what we call
"Lumi" today, in that it's the thing orchestrating plans and deployments.

Inside the sdk/ directory, you will find nodejs/, the Node.js client
library, alongside proto/, the definitions for RPC interop between the
different pieces of the system.  This includes existing RPC definitions
for resource providers, etc., in addition to the new ones for hosting
different language runtimes from within Lumi.

These new interfaces are surprisingly simple.  There is effectively a
bidirectional RPC channel between the Lumi resource monitor, represented
by the lumirpc.ResourceMonitor interface, and each language runtime,
represented by the lumirpc.LanguageRuntime interface.

The overall orchestration goes as follows:

1) Lumi decides it needs to run a program written in language X, so
   it dynamically loads the language runtime plugin for language X.

2) Lumi passes that runtime a loopback address to its ResourceMonitor
   service, while language X will publish a connection back to its
   LanguageRuntime service, which Lumi will talk to.

3) Lumi then invokes LanguageRuntime.Run, passing information like
   the desired working directory, program name, arguments, and optional
   configuration variables to make available to the program.

4) The language X runtime receives this, unpacks it and sets up the
   necessary context, and then invokes the program.  The program then
   calls into Lumi object model abstractions that internally communicate
   back to Lumi using the ResourceMonitor interface.

5) The key here is ResourceMonitor.NewResource, which the language runtime
   invokes to serialize state about newly allocated resources.  Lumi
   receives these
   and registers them as part of the plan, doing the usual diffing, etc.,
   to decide how to proceed.  This interface is perhaps one of the
   most subtle parts of the new design, as it necessitates the use of
   promises internally to allow parallel evaluation of the resource plan,
   letting dataflow determine the available concurrency.

6) The program exits, and Lumi continues on its merry way.  If the program
   fails, the RunResponse will include information about the failure.
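
As a rough TypeScript approximation of the two services named above (the
real definitions are protobuf files under sdk/proto/; any field, type, or
casing not mentioned in the steps above is an assumption):

    interface RunRequest {
        pwd: string;                      // desired working directory
        program: string;                  // program name to load and run
        args: string[];                   // program arguments
        config?: Record<string, string>;  // optional configuration variables
    }

    interface RunResponse {
        error?: string;                   // populated if the program failed
    }

    interface LanguageRuntime {
        // Invoked by Lumi (step 3) to run the user's program.
        run(req: RunRequest): Promise<RunResponse>;
    }

    interface NewResourceRequest {
        type: string;                     // resource type token
        name: string;                     // resource name
        properties: Record<string, any>;  // serialized resource state
    }

    interface NewResourceResponse {
        id?: string;                      // provider-assigned ID, once known
        properties?: Record<string, any>; // output properties, once settled
    }

    interface ResourceMonitor {
        // Invoked by the language runtime (step 5) for each allocated resource.
        newResource(req: NewResourceRequest): Promise<NewResourceResponse>;
    }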

Due to (5), all properties on resources are now instances of a new
Property<T> type.  A Property<T> is just a thin wrapper over a T, but it
encodes the special semantics of Lumi resource properties.  Namely, it
is possible to create one out of a T, another Property<T>, a Promise<T>, or
to freshly allocate one.  In all cases, the Property<T> does not "settle"
until its final state is known.  This cannot occur before the deployment
actually completes, and so in general it's not safe to depend on concrete
resolutions of values (unlike ordinary Promise<T>s which are usually
expected to resolve).  As a result, all derived computations are meant to
use the `then` function (as in `someValue.then(v => v+x)`).
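
A minimal sketch of that wrapper, assuming only what's described above (the
real implementation also needs to handle linking, unknowns during planning,
and so on):

    class Property<T> {
        private readonly promise: Promise<T>;
        private resolver?: (value: T) => void;

        // Create one out of a raw T, another Property<T>, a Promise<T>, or
        // nothing at all (a freshly allocated property that settles later).
        constructor(value?: T | Property<T> | Promise<T>) {
            if (value === undefined) {
                this.promise = new Promise<T>((resolve) => { this.resolver = resolve; });
            } else if (value instanceof Property) {
                this.promise = value.promise;
            } else {
                this.promise = Promise.resolve(value) as Promise<T>;
            }
        }

        // Settle a freshly allocated property to its final value.
        resolve(value: T): void {
            if (this.resolver) {
                this.resolver(value);
            }
        }

        // Derived computations chain off the eventual value, e.g.
        //     someValue.then(v => v + x)
        then<U>(fn: (value: T) => U): Property<U> {
            return new Property<U>(this.promise.then(fn));
        }
    }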

Although this change includes tests that may be run in isolation to test
the various RPC interactions, we are nowhere near finished.  The remaining
work primarily boils down to three things:

    1) Wiring all of this up to the Lumi code.

    2) Fixing the handful of known loose ends required to make this work,
       primarily around the serialization of properties (waiting on
       unresolved ones, serializing assets properly, etc.).

    3) Implementing lambda closure serialization as a native extension.

This ongoing work is part of pulumi/pulumi-fabric#311.
2017-09-04 11:35:20 -07:00