Commit graph

15 commits

Sean Gillespie 1d5526d292
Work around commonjs protoc bug (#2403)
* Work around commonjs protoc bug

When compiling with the commonjs target, the protoc compiler still emits
references to Closure Compiler-isms that whack global state onto the
global object. This is particularly bad for us since we expect to be
able to make backwards-compatible changes to our Protobuf definitions
without breaking things, and this bug makes it impossible to do so.

To remedy the bug, this commit hacks the output of protoc (again) with
sed so that the generated code never touches the global object (see the
sketch below). Everything still works fine because the commonjs target
(correctly) exports the protobuf message types via the module system -
it's just not writing to global anymore.

* Fix status.proto

* Don't hack status.proto
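A minimal sketch of that sed pass, assuming (as older protoc JavaScript
output does) that the generated code captures the global object via
Function('return this')(); the exact expressions and paths in
generate.sh differ, so this only illustrates the mechanism:

    # Hypothetical post-processing step: point the generated code's
    # "global" at a module-local object so nothing is ever written onto
    # the real global object; the commonjs exports remain the only
    # public surface.
    for f in ./nodejs/proto/*_pb.js; do
        sed -i.bak \
            -e "s/Function('return this')()/{}/" \
            "$f" && rm -f "$f.bak"
    done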
2019-01-29 17:07:47 -08:00
Sean Gillespie f284112b4e
Use nightly protoc gRPC plugin for node (#1948)
* Use nightly protoc gRPC plugin for Node

Newer versions of the Node gRPC plugin accept the 'minimum_node_version'
flag, which we can use to instruct protoc not to support Node versions
earlier than Node 6 (see the sketch below). This allows the compiler to
use 'Buffer.from' instead of the deprecated 'Buffer' constructor, which
fixes a deprecation warning on Node 10.

* Protobuf changes
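Roughly what the invocation can look like with the nightly plugin in
place (the output directory and plugin path are illustrative, not the
script's actual values); protoc passes generator options in the part of
the _out flag before the colon:

    JS_PROTO_DIR=./nodejs/proto    # illustrative output directory
    protoc \
        --plugin=protoc-gen-grpc="$(which grpc_tools_node_protoc_plugin)" \
        --js_out="import_style=commonjs,binary:${JS_PROTO_DIR}" \
        --grpc_out="minimum_node_version=6:${JS_PROTO_DIR}" \
        *.proto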
2018-09-17 15:16:31 -07:00
joeduffy b0d91ad9a9 Make Protobuf/gRPC compilation repeatable
I tried to re-generate the Protobuf/gRPC files, but generate.sh seems to
have assumed I manually built the Docker image from the Dockerfile ahead
of time (unless I'm missing something -- very possible). Unfortunately,
when I went to do
that, I ended up with a handful of errors, most of them due to
differences in versions. (We probably want to think about pinning them.)

To remedy both issues, I've fixed up the Dockerfile so that it works for
me at least, and added a build step to the front of generate.sh.
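A sketch of the shape that takes (the image name and inner script are
made up for illustration): build the builder image from the checked-in
Dockerfile first, then run the generation inside it so everyone compiles
with the same pinned tool versions.

    # Build the protoc toolchain image up front, so generate.sh no
    # longer assumes it was built by hand.
    docker build -t pulumi/protobuf-builder .

    # Run the actual generation inside the container, mounting the repo
    # so the generated files land back on the host.
    docker run --rm -v "$(pwd):/src" -w /src pulumi/protobuf-builder \
        ./generate_in_container.sh    # hypothetical inner script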
2018-08-04 10:27:36 -07:00
Alex Clemmer 4c184335ca Make generate.sh reproducible 2018-07-15 11:05:44 -10:00
Sean Gillespie 8b9e24cd85 Allow dynamic-provider to send structured errors
A critical part of the partial update protocol is to return a structured
error when a resource is successfully created, but fails to initialize.
This structured error contains the properties of the
partially-initialized resource, and instructs the engine to halt.

Most languages implement this by attaching "details" to the error, i.e.,
an arbitrary proto message carried alongside it. The JavaScript
implementation is not mature enough to include all the facilities
required to use this, so here we must add a `Status` message, which
protobuf requires as part of its structure for returning details.
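For reference, the details-carrying structure gRPC's rich error model
uses is shaped like google.rpc.Status; a sketch of checking in such a
message follows (the field layout mirrors google/rpc/status.proto -
whether this commit's status.proto matches it exactly is an assumption):

    # Hypothetical: write a google.rpc.Status-shaped message that detail
    # payloads (e.g. the partially-initialized resource's properties)
    # can be packed into as google.protobuf.Any.
    cat > status.proto <<'EOF'
    syntax = "proto3";
    package pulumirpc;

    import "google/protobuf/any.proto";

    message Status {
        // A status code, a developer-facing message, and a list of
        // arbitrary detail messages.
        int32 code = 1;
        string message = 2;
        repeated google.protobuf.Any details = 3;
    }
    EOF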
2018-07-02 13:32:23 -07:00
Alex Clemmer 2394743c60 Fix the lies in sdk/proto/generate.sh 2018-07-02 13:31:39 -07:00
Sean Gillespie c2b2f3b117
Initial Python 3 port of the Python SDK (#1563)
* Machine-assisted Python 3 port

* Hack around protoc python imports (see the sketch below)

* Regenerate gRPC and protobuf

* CR: Bash code cleanup
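The python-imports hack refers to the usual problem that protoc emits
absolute imports ("import foo_pb2 as ...") for its generated Python
files, which break once those files live inside a package. A common
workaround, sketched here (paths and the exact expression in generate.sh
may differ), rewrites them into relative imports:

    # Illustrative fix-up over the generated Python files.
    for f in ./python/lib/pulumi/runtime/proto/*_pb2*.py; do
        sed -i.bak \
            -e 's/^import \([^ ]*\)_pb2 as \([^ ]*\)$/from . import \1_pb2 as \2/' \
            "$f" && rm -f "$f.bak"
    done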
2018-06-26 11:14:03 -07:00
joeduffy 8b5874dab5 General prep work for refresh
This change includes a bunch of refactorings I made in prep for
doing refresh (first, the command, see pulumi/pulumi#1081):

* The primary change is to change the way the engine's core update
  functionality works with respect to deploy.Source.  This is the
  way we can plug in new sources of resource information during
  planning (and, soon, diffing).  The way I intend to model refresh
  is by having a new kind of source, deploy.RefreshSource, which
  will let us do virtually everything about an update/diff the same
  way with refreshes, which avoids otherwise duplicative effort.

  This includes changing the planOptions (nee deployOptions) to
  take a new SourceFunc callback, which is responsible for creating
  a source specific to the kind of plan being requested.

  Preview, Update, and Destroy now are primarily differentiated by
  the kind of deploy.Source that they return, rather than sprinkling
  things like `if Destroying` throughout.  This tidies up some logic
  and, more importantly, gives us precisely the refresh hook we need.

* Originally, we used the deploy.NullSource for Destroy operations.
  This simply returns nothing, which is how Destroy works.  For some
  reason, we were no longer doing this, and instead had some
  `if Destroying` cases sprinkled throughout the deploy.EvalSource.
  I think this is a vestige of some old way we did configuration, at
  least judging by a comment, which is apparently no longer relevant.

* Move diff and diff-printing logic within the engine into its own
  pkg/engine/diff.go file, to prepare for upcoming work.

* I keep noticing benign diffs anytime I regenerate protobufs.  I
  suspect this is because we're each on different tool versions.  I
  changed generate.sh to also dump the version into grpc_version.txt
  (see the sketch below).  At least we can understand where the diffs
  are coming from, decide whether to take them (i.e., a newer version),
  and ensure that as a team we are monotonically increasing, and not
  going backwards.

* I also tidied up some tiny things I noticed while in there, like
  comments, incorrect types, lint suppressions, and so on.
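A small sketch of that version-stamping step (exactly which versions get
recorded is an assumption here; the point is that regenerated output can
be traced back to a toolchain):

    # Record the toolchain used for this regeneration so benign diffs
    # in generated files can be attributed to version skew.
    {
        protoc --version    # e.g. "libprotoc 3.6.1"
        node -e 'console.log("grpc-tools " + require("grpc-tools/package.json").version)' \
            2>/dev/null || true
    } > grpc_version.txt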
2018-03-28 07:45:23 -07:00
joeduffy a045e2fb1e Implement more of the Python runtime
This change includes a lot more functionality.  Enough to actually
run the webserver-py example through previews, updates, and destroys!

* Actually wire up the gRPC connections to the engine/monitor.

* Move the Node.js and Python generated Protobuf/gRPC files underneath
  the actual SDK directories to simplify this generally.  No more
  copying during `make` and, in fact, this was required to give a smoother
  experience with good packages/modules for the Python SDK's development.

* Build the Python egg during `make build`.

* Add support for program stacks.  Just like with the Node.js runtime,
  we will auto-parent any resources without explicit parents to a single
  top-level resource component.

* Add support for component resource output properties.

* Add get_project() and get_stack() functions for retrieving the current
  project and stack names.

* Properly use UNKNOWN sentinels.

* Add a set_outputs() function on Resource.  This is defined by the
  code-generator and allows custom logic for output property setting.
  This is cleaner than the way we do this in Node.js, and gives us a
  way to ensure that output properties are "real" properties, complete
  with member documentation.  This also gives us a hook to perform
  name demangling, which the code-generator typically controls anyway.

* Add package dependencies to setuptools.py and requirements.txt.
2018-02-24 08:58:34 -08:00
joeduffy 8417955ddb Add Python generation to our Protobufs/gRPC interfaces
Part of pulumi/pulumi#754.
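One common way to wire this up (the output path is illustrative;
grpcio-tools supplies both the protoc wrapper and the gRPC Python
plugin):

    # Emit *_pb2.py (messages) and *_pb2_grpc.py (service stubs) for
    # every .proto in this directory.
    python -m grpc_tools.protoc \
        -I . \
        --python_out=../python/lib/pulumi/runtime/proto \
        --grpc_python_out=../python/lib/pulumi/runtime/proto \
        *.proto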
2017-12-21 09:24:48 -08:00
joeduffy 200fecbbaa Implement initial Lumi-as-a-library
This is the initial step towards redefining Lumi as a library that runs
atop vanilla Node.js/V8, rather than as its own runtime.

This change is woefully incomplete, but it includes some of the more
stable pieces of my current work-in-progress.

The new structure is that within the sdk/ directory we will have a client
library per language.  This client library contains the object model for
Lumi (resources, properties, assets, config, etc), in addition to the
"language runtime host" components required to interoperate with the
Lumi resource monitor.  This resource monitor is effectively what we call
"Lumi" today, in that it's the thing orchestrating plans and deployments.

Inside the sdk/ directory, you will find nodejs/, the Node.js client
library, alongside proto/, the definitions for RPC interop between the
different pieces of the system.  This includes existing RPC definitions
for resource providers, etc., in addition to the new ones for hosting
different language runtimes from within Lumi.

These new interfaces are surprisingly simple.  There is effectively a
bidirectional RPC channel between the Lumi resource monitor, represented
by the lumirpc.ResourceMonitor interface, and each language runtime,
represented by the lumirpc.LanguageRuntime interface.

The overall orchestration goes as follows:

1) Lumi decides it needs to run a program written in language X, so
   it dynamically loads the language runtime plugin for language X.

2) Lumi passes that runtime a loopback address to its ResourceMonitor
   service, while language X will publish a connection back to its
   LanguageRuntime service, which Lumi will talk to.

3) Lumi then invokes LanguageRuntime.Run, passing information like
   the desired working directory, program name, arguments, and optional
   configuration variables to make available to the program.

4) The language X runtime receives this, unpacks it and sets up the
   necessary context, and then invokes the program.  The program then
   calls into Lumi object model abstractions that internally communicate
   back to Lumi using the ResourceMonitor interface.

5) The key here is ResourceMonitor.NewResource, which Lumi uses to
   serialize state about newly allocated resources.  Lumi receives these
   and registers them as part of the plan, doing the usual diffing, etc.,
   to decide how to proceed.  This interface is perhaps one of the
   most subtle parts of the new design, as it necessitates the use of
   promises internally to allow parallel evaluation of the resource plan,
   letting dataflow determine the available concurrency.

6) The program exits, and Lumi continues on its merry way.  If the program
   fails, the RunResponse will include information about the failure.

Due to (5), all properties on resources are now instances of a new
Property<T> type.  A Property<T> is just a thin wrapper over a T, but it
encodes the special properties of Lumi resource properties.  Namely, it
is possible to create one out of a T, other Property<T>, Promise<T>, or
to freshly allocate one.  In all cases, the Property<T> does not "settle"
until its final state is known.  This cannot occur before the deployment
actually completes, and so in general it's not safe to depend on concrete
resolutions of values (unlike ordinary Promise<T>s which are usually
expected to resolve).  As a result, all derived computations are meant to
use the `then` function (as in `someValue.then(v => v+x)`).

Although this change includes tests that may be run in isolation to test
the various RPC interactions, we are nowhere near finished.  The remaining
work primarily boils down to three things:

    1) Wiring all of this up to the Lumi code.

    2) Fixing the handful of known loose ends required to make this work,
       primarily around the serialization of properties (waiting on
       unresolved ones, serializing assets properly, etc).

    3) Implementing lambda closure serialization as a native extension.

This ongoing work is part of pulumi/pulumi-fabric#311.
2017-09-04 11:35:20 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy 09c01dd942 Implement resource provider plugins
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.

In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them.  It will be
a policy decision in server scenarios how much state to share and
between whom.  This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.

Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).

If we find a plugin, we will load it as a child process.

The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n").  This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).
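Just to make that handshake concrete, here is a host-side sketch in
shell (the real host is Mu itself, and "mu-ressrv-aws" is only a made-up
example of the "mu-ressrv-<pkg>" naming convention described above):

    # Launch the plugin and read the first line it prints: the port it
    # chose for its gRPC server.  Anything it prints after that is its
    # own business (logging, debugging, etc.).
    coproc PLUGIN { mu-ressrv-aws; }
    read -r port <&"${PLUGIN[0]}"

    # A gRPC client can now be pointed at that port, e.g.:
    grpc_cli ls "localhost:${port}"    # list the services it exposes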

Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client.  The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls.  For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
2017-02-19 11:08:06 -08:00
joeduffy 11b7880547 Further reshuffle Protobufs; generate JavaScript code
After a bit more thinking, we will create new SDK packages for each
of the languages we wish to support writing resource providers in.
This is where the RPC goo will live, so I have created a new sdk/
directory, moved the Protobuf/gRPC definitions underneath sdk/proto/,
and put the generated code into sdk/go/ and sdk/js/.
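A sketch of the kind of invocation this sets up (flags and output
directories are illustrative): run from sdk/proto/, emitting Go stubs
into sdk/go/ and JavaScript into sdk/js/:

    protoc \
        -I . \
        --go_out=plugins=grpc:../go \
        --js_out=import_style=commonjs,binary:../js \
        *.proto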
2017-02-10 09:28:46 -08:00
Renamed from pkg/murpc/proto/generate.sh