Commit graph

687 commits

Author SHA1 Message Date
pat@pulumi.com 9453f86c2e Implement dynamic resources.
A dynamic resource is a resource whose provider is implemented alongside
the resource itself. This provider may close over and use other
resources in the implementation of its CRUD operations. The provider
itself must be stateless, as each CRUD operation for a particular
dynamic resource type may use an independent instance of the provider.
Changes to the definition of a resource's provider result in replacement
of the resource itself (rather than a simple update), as this allows the
old provider definition to delete the old resource and the new provider
definition to create an appropriate replacement.
2017-10-16 23:06:53 -07:00
Pat Gavlin 546612a354 Drain std{out,err} when a plugin fails to load.
If a plugin fails to load after we've set up the goroutines that copy
from its std{out,err} streams, then those goroutines can end up writing
to a closed event channel. This change ensures that we properly drain
those streams in this case.
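
As a rough, standalone sketch of the drain pattern this describes (not the actual plugin host code; the command path and event format here are invented):

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "os/exec"
        "sync"
    )

    // runPlugin launches a child process and forwards its stdout/stderr lines
    // to an events channel. The key point: the channel is closed only after
    // both copy goroutines have drained their streams, so neither can ever
    // write to a closed channel, even if the plugin fails partway through.
    func runPlugin(path string) error {
        cmd := exec.Command(path)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return err
        }
        stderr, err := cmd.StderrPipe()
        if err != nil {
            return err
        }
        if err := cmd.Start(); err != nil {
            return err
        }

        events := make(chan string)
        var wg sync.WaitGroup
        copyStream := func(name string, r io.Reader) {
            defer wg.Done()
            scan := bufio.NewScanner(r)
            for scan.Scan() {
                events <- name + ": " + scan.Text()
            }
        }
        wg.Add(2)
        go copyStream("stdout", stdout)
        go copyStream("stderr", stderr)

        // Close the channel only once both copiers have fully drained.
        go func() {
            wg.Wait()
            close(events)
        }()

        for line := range events {
            fmt.Println(line)
        }
        return cmd.Wait()
    }

    func main() {
        if err := runPlugin("/bin/echo"); err != nil {
            fmt.Println("plugin error:", err)
        }
    }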
2017-10-16 21:38:11 -07:00
Matt Ellis 996ac8872d Merge pull request #420 from pulumi/rename-env-to-stack
Use `Stack` over `Environment` to describe a deployment target
2017-10-16 14:24:11 -07:00
pat@pulumi.com bdbb1b59d4 Ensure a plugin's std{out,err} streams are drained in Close().
Not doing so can cause panics, as the goroutines we use to copy these
streams can end up writing to a closed channel.
2017-10-16 13:44:37 -07:00
Matt Ellis 22c9e0471c Use Stack over Environment to describe a deployment target
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.

From a user's point of view, there are a few changes:

1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options on commands that operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).

On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder).
2017-10-16 13:04:20 -07:00
joeduffy 301739c6b5 Add auto-parenting
This changes a few things about "components":

* Rename what was previously ExternalResource to CustomResource,
  and all of the related fields and parameters that this implies.
  This just seems like a much nicer and expected name for what
  these represent.  I realize I am stealing a name we had thought
  about using elsewhere, but this seems like an appropriate use.

* Introduce ComponentResource, to make initializing resources
  that merely aggregate other resources easier to do correctly.

* Add a withParent and parentScope concept to Resource, to make
  allocating children less error-prone.  Now there's no need to
  explicitly adopt children as they are allocated; instead, any
  children allocated as part of the withParent callback will
  auto-parent to the resource provided.  This is used by
  ComponentResource's initialization function to make initialization
  easier, including the distinction between inputs and outputs.
2017-10-15 04:38:26 -07:00
joeduffy fbfca58a3f Implement components
This change implements core support for "components" in the Pulumi
Fabric.  This work is described further in pulumi/pulumi#340, where
we are still discussing some of the finer points.

In a nutshell, resources no longer imply external providers.  It's
entirely possible to have a resource that logically represents
something but without having a physical manifestation that needs to
be tracked and managed by our typical CRUD operations.

For example, the aws/serverless/Function helper is one such type.
It aggregates Lambda-related resources and exposes a nice interface.
All of the Pulumi Cloud Framework resources are also examples.

To indicate that a resource does participate in the usual CRUD resource
provider lifecycle, it simply derives from ExternalResource instead of Resource.

All resources now have the ability to adopt children.  This is purely
a metadata/tagging thing, and will help us roll up displays, provide
attribution to the developer, and even hide aspects of the resource
graph as appropriate (e.g., when they are implementation details).

Our use of this capability is ultra limited right now; in fact, the
only place we display children is in the CLI output.  For instance:

    + aws:serverless:Function: (create)
      [urn=urn:pulumi:demo::serverless::aws:serverless:Function::mylambda]
      => urn:pulumi:demo::serverless::aws:iam/role:Role::mylambda-iamrole
      => urn:pulumi:demo::serverless::aws:iam/rolePolicyAttachment:RolePolicyAttachment::mylambda-iampolicy-0
      => urn:pulumi:demo::serverless::aws:lambda/function:Function::mylambda

The bit indicating whether a resource is external or not is tracked
in the resulting checkpoint file, along with any of its children.
2017-10-14 18:30:59 -07:00
Matt Ellis e7e4e75af3 Don't examine the Checkpoint in the CLI
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
2017-10-09 18:21:55 -07:00
joeduffy 7e30dde8f4 Update plan test to new interface 2017-10-04 08:30:50 -04:00
joeduffy b7576b9b14 Add a notion of stable properties
This change adds the capability for a resource provider to indicate
that, when an action is carried out in response to a diff, a certain set
of properties would be "stable"; that is to say, they are guaranteed
not to change.  As a result, properties may be resolved to their final
values during previewing, avoiding erroneous cascading impacts.

This avoids the ever-annoying situation I keep running into when demoing:
when adding or removing an ingress rule to a security group, we ripple
the impact through the instance, and claim it must be replaced, because
that instance depends on the security group via its name.  Well, the name
is a great example of a stable property, in that it will never change, and
so this is truly unfortunate and always adds uncertainty into the demos.
Particularly since the actual update doesn't need to perform replacements.

This resolves pulumi/pulumi#330.
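
As a rough sketch of how a planner might use such a guarantee (the types and property names here are invented, not the engine's API):

    package main

    import "fmt"

    // DiffResult sketches what a provider could report for a pending change:
    // which keys force a replacement, and which keys it guarantees are stable.
    type DiffResult struct {
        ReplaceKeys []string
        StableKeys  []string
    }

    // resolveStable copies known-stable values from the old state into the
    // preview state, so resources that depend on them (e.g. a security group's
    // name) are not erroneously flagged for replacement during the preview.
    func resolveStable(old, preview map[string]interface{}, diff DiffResult) {
        for _, k := range diff.StableKeys {
            preview[k] = old[k]
        }
    }

    func main() {
        old := map[string]interface{}{"name": "web-sg", "ingress": "80"}
        preview := map[string]interface{}{"ingress": "80,443"} // "name" unknown during preview
        resolveStable(old, preview, DiffResult{StableKeys: []string{"name"}})
        fmt.Println(preview["name"]) // "web-sg": resolved early, no cascading replacement
    }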
2017-10-04 08:22:21 -04:00
pat@pulumi.com 252fd8e6bb Remove deploy.Step.Pre.
As per @joeduffy, this is an artifact of the prior runtime model.
2017-10-03 10:03:05 -07:00
Pat Gavlin ff2a3fa242 Replace Plan.Apply with planResult.Walk. (#383)
`deploy.Plan.Apply` was only consumed by the engine, and seemed to be in
the wrong place given the API exported by the rest of `Plan` (i.e.
`Plan.Start` + `PlanIterator`). Furthermore, we were missing a reasonable
opportunity to share code between `update` and `preview`, both of which
need to walk the plan. These changes move the plan walk into `package engine`
as `planResult.Walk` and replace the `Progress` interface with a new interface,
`StepActions`, which subsumes the functionality of the former and adds support
for implementation-specific step execution. `planResult.Walk` is then
consumed by both `Engine.Deploy` and `Engine.PrintPlan`.
2017-10-02 14:26:51 -07:00
joeduffy ac2dbc80fa Add an Invoke RPC method on ResourceProvider
This change enables us to make progress on exposing data sources
(see pulumi/pulumi-terraform#29).  The idea is to have an Invoke
function that simply takes a function token and arguments, performs
the function lookup and invocation, and then returns a return value.
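
A minimal sketch of that shape (standalone Go; the token and argument names are made up for illustration):

    package main

    import (
        "errors"
        "fmt"
    )

    // Provider sketches the new surface: a function token plus an argument bag
    // in, a result bag (or an error) out.
    type Provider interface {
        Invoke(tok string, args map[string]interface{}) (map[string]interface{}, error)
    }

    type demoProvider struct{}

    func (demoProvider) Invoke(tok string, args map[string]interface{}) (map[string]interface{}, error) {
        switch tok {
        case "demo:index:getGreeting":
            return map[string]interface{}{"greeting": "hello, " + args["name"].(string)}, nil
        default:
            return nil, errors.New("unknown function token: " + tok)
        }
    }

    func main() {
        var p Provider = demoProvider{}
        out, _ := p.Invoke("demo:index:getGreeting", map[string]interface{}{"name": "pulumi"})
        fmt.Println(out["greeting"])
    }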
2017-09-30 14:53:27 -04:00
pat@pulumi.com 28d02e232b Fix #372.
Print "modified" rather than "modifyd". This introduces a new method,
`resource.StepOp.PastTense()`, which returns the past tense description
of the operation.
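
The gist, as a standalone sketch (the real StepOp lives in the engine; the operation names here are illustrative):

    package main

    import "fmt"

    type StepOp string

    // PastTense avoids the naive "append a 'd'" rule, which turns "modify"
    // into "modifyd".
    func (op StepOp) PastTense() string {
        switch op {
        case "create", "delete", "update", "replace":
            return string(op) + "d"
        case "modify":
            return "modified"
        default:
            return string(op)
        }
    }

    func main() {
        fmt.Println(StepOp("modify").PastTense()) // "modified", not "modifyd"
    }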
2017-09-28 09:28:36 -07:00
Joe Duffy b4646db39b Merge branch 'master' into RenameVerbs 2017-09-23 11:31:29 -07:00
joeduffy 141a112950 Improve output formatting
This change improves our output formatting by generally adding
fewer prefixes.  As shown in pulumi/pulumi#359, we were being
excessively verbose in many places, including prefixing every
console.out with "langhost[nodejs].stdout: ", displaying full
stack traces for simple errors like missing configuration, etc.

Overall, this change includes the following:

* Don't prefix stdout and stderr output from the program, other
  than the standard "info:" prefix.  I experimented with various
  schemes here, but they all felt gratuitous.  Simply emitting
  the output seems fine, especially as it's closer to what would
  happen if you just ran the program under node.

* Do NOT make writes to stderr fail the plan/deploy.  Previously
  we assumed that any console.errors, for instance, meant that
  the overall program should fail.  This simply isn't how stderr
  is treated generally and meant you couldn't use certain
  logging techniques and libraries, among other things.

* Do make sure that stderr writes in the program end up going to
  stderr in the Pulumi CLI output, however, so that redirection
  works as it should.  This required a new Infoerr log level.

* Make a small fix to the planning logic so we don't attempt to
  print the summary if an error occurs.

* Finally, add a new error type, RunError, that when thrown and
  uncaught does not result in a full stack trace being printed.
  Anyone can use this, however, we currently use it for config
  errors so that we can terminate with a pretty error message,
  rather than the monstrosity shown in pulumi/pulumi#359.
2017-09-23 05:20:11 -07:00
pat@pulumi.com 69341fa7c8 push is dead; long live update.
After discussion with Joe and Luke, we've decided to use `update` instead
of `push` as it more intuitively fits the operation being performed.
2017-09-22 17:23:40 -07:00
pat@pulumi.com 597db186ec Renames: plan -> preview, deploy -> push.
Part of #353.

These changes also remove all command aliases from the `pulumi` command.
2017-09-22 15:28:03 -07:00
Joe Duffy f6e694c72b Rename pulumi-fabric to pulumi
This includes a few changes:

* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.

* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.

* The CLI is renamed from lumi to pulumi.
2017-09-21 19:18:21 -07:00
Matt Ellis 25ae463915 Listen only on 127.0.0.1
Instead of binding on 0.0.0.0 (which will listen on every interface)
let's only listen on localhost. On windows, this both makes the
connection Just Work and also prevents the Windows Firewall from
blocking the listen (and displaying UI saying it has blocked an
application and asking if the user should allow it)
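
The difference in a standalone sketch (port 0 just asks the OS for any free port; this is not the engine's actual listener setup):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // "0.0.0.0:0" would listen on every interface; binding 127.0.0.1 keeps
        // the port local-only and avoids the Windows Firewall prompt.
        lis, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer lis.Close()
        fmt.Println("listening on", lis.Addr().String())
    }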
2017-09-21 10:56:45 -07:00
joeduffy 22387d24cd Switch to a --parallel=P flag
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism.  We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized.  We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it.  And in any case, the --parallel=P capability will be useful.
2017-09-17 08:10:46 -07:00
joeduffy 087deb7643 Add optional dependsOn to Resource constructors
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources.  We have an extremely strong
desire to resort to using this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case.  Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.

This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106.  I also think there's a good chance we will flip
the default to serial execution, so that developers who don't benefit from
the parallelism don't need to worry about dependsOn in awkward ways.  This
tends to be the way most tools (like Make) operate.

This fixes pulumi/pulumi-fabric#335.
2017-09-15 16:38:52 -07:00
joeduffy 016adec9f7 Allow unknowns in defaults
The "defaults" may include computed values, so we should pass
AllowUnknowns as true during unmarshaling of Check calls.
2017-09-15 09:56:42 -07:00
joeduffy 5b779798c4 Fix (don't torch) LUMIDL
This change finishes the conversion of LUMIDL over to the new
runtime model, with the appropriate code generation changes.

It turns out the old model actually had a flaw in it anyway that we
simply didn't hit because we hadn't been stressing output properties
nearly as much as the new model does.  This resulted in needing to
plumb the rejection (or allowance) of computed properties more
deeply into the resource property marshaling/unmarshaling logic.

As of these changes, I can run the GitHub provider again locally.

This change fixes pulumi/pulumi-fabric#332.
2017-09-14 16:40:44 -07:00
joeduffy 9c7f6b678c Bring LUMIDL up to code
This gets LUMIDL to generate code in the new way.
2017-09-11 16:58:25 -07:00
joeduffy 8aba3aae12 Upgrade gRPC to 1.6.0; use full addresses
This change upgrades gRPC to 1.6.0 to pick up a few bug fixes.

We also use the full address for gRPC endpoints, including the
interface name, as otherwise we pick the wrong interface on Linux.
2017-09-09 07:37:10 -07:00
joeduffy 67e5750742 Fix a bunch of Linux issues
There's a fair bit of clean up in here, but the meat is:

* Allocate the language runtime gRPC client connection on the
  goroutine that will use it; this eliminates race conditions.

* The biggie: there *appears* to be a bug in gRPC's implementation
  on Linux, where it doesn't implement WaitForReady properly.  The
  behavior I'm observing is that RPC calls will not retry as they
  are supposed to, but will instead spuriously fail during the RPC
  startup.  To work around this, I've added manual retry logic in
  the shared plugin creation function so that we won't even try
  to use the client connection until it is in a well-known state.
  pulumi/pulumi-fabric#337 tracks getting to the bottom of this and,
  ideally, removing the work around.

The other minor things are:

* Separate run.js into its own module, so it doesn't include
  index.js and do a bunch of random stuff it shouldn't be doing.

* Allow run.js to be invoked without a --monitor.  This makes
  testing just the run part of invocation easier (including
  config, which turned out to be super useful as I was debugging).

* Tidy up some messages.
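
The retry idea, reduced to a standalone sketch (attempt counts, delays, and the probe are invented; the real workaround wraps the gRPC client connection):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry keeps probing until the connection reports healthy, instead of
    // trusting WaitForReady to do the retrying for us.
    func retry(attempts int, delay time.Duration, probe func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = probe(); err == nil {
                return nil
            }
            time.Sleep(delay)
        }
        return fmt.Errorf("connection never became ready: %w", err)
    }

    func main() {
        calls := 0
        err := retry(5, 10*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("transient: connection not ready")
            }
            return nil // pretend the plugin's endpoint is now reachable
        })
        fmt.Println(err, "after", calls, "attempts")
    }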
2017-09-08 15:11:09 -07:00
joeduffy ebc3bf1dd0 Reap the language host process 2017-09-07 15:03:48 -07:00
joeduffy f2d53459eb Add the notion of stable states
If a resource's planning operation is to do nothing, we can safely
assume that all of its properties are stable.  This can be used during
planning to avoid cascading updates that we know will never happen.
2017-09-05 10:01:00 -07:00
joeduffy f3cf73d790 Change plugin prefixes to "pulumi-" 2017-09-04 11:35:21 -07:00
joeduffy fe66a0eba7 Use the new URN during creates 2017-09-04 11:35:21 -07:00
joeduffy 77bbf443bc Synchronize with the resource channel properly 2017-09-04 11:35:21 -07:00
joeduffy f718ab6501 Add a runtime.Log class
This change adds the ability to perform runtime logging, including
debug logging, that wires up to the Pulumi Fabric engine in the usual
ways.  Most stdout/stderr will automatically go to the right place,
but this lets us add some debug tracing in the implementation of the
runtime itself (and should come in handy in other places, like perhaps
the Pulumi Framework and even low-level end-user code).
2017-09-04 11:35:21 -07:00
joeduffy a13a83b067 Pass the monitor address correctly to language plugins 2017-09-04 11:35:21 -07:00
joeduffy 9f160a7f91 Configure providers at well-defined points
As explained in pulumi/pulumi-fabric#293, we were a little ad-hoc in
how configuration was "applied" to resource providers.

In fact, config wasn't ever communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and that's that, and where generally
speaking resource providers never communicate directly with the language
runtime, we need to take a different approach.

As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables filtered to
just that provider's package.  This fixes pulumi/pulumi#293.
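
A sketch of the filtering step (standalone; the "<package>:<key>" shape and values are illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // configFor returns only the configuration entries that belong to the
    // given provider package, with the package prefix stripped.
    func configFor(all map[string]string, pkg string) map[string]string {
        out := map[string]string{}
        for k, v := range all {
            if strings.HasPrefix(k, pkg+":") {
                out[strings.TrimPrefix(k, pkg+":")] = v
            }
        }
        return out
    }

    func main() {
        all := map[string]string{
            "aws:region":            "us-west-2",
            "kubernetes:kubeconfig": "~/.kube/config",
        }
        fmt.Println(configFor(all, "aws")) // map[region:us-west-2]
    }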
2017-09-04 11:35:21 -07:00
joeduffy 70d0fac1c0 Simplify resource provider RPC interface
This change simplifies the provider RPC interface slightly:

1) Eliminate Get.  We really don't need it anymore.  There are
   several possibly-interesting scenarios down the road that may
   demand it, but when we get there, we can consider how best to
   bring this back.  Furthermore, the old-style Get remains mostly
   incompatible with Terraform anyway.

2) Pass URNs, not type tokens, across the RPC boundary.  This gives
   the provider access to more interesting information: the type,
   still, but also the name (which is no longer an object property).
2017-09-04 11:35:21 -07:00
joeduffy 590e9e539b Rename Lumi.yaml to Pulumi.yaml
And also eliminate lots of accumulated cruft around "packfiles", etc.
in the workspace code.
2017-09-04 11:35:21 -07:00
joeduffy 9599ea2e55 Get planning engine unit tests running again
We now build and run cleanly locally (for unit tests).  The
integration tests are still on the floor at the moment.
2017-09-04 11:35:21 -07:00
joeduffy f189c40f35 Wire up Lumi to the new runtime strategy
🔥 🔥 🔥  🔥 🔥 🔥

Getting closer on #311.
2017-09-04 11:35:21 -07:00
Luke Hoban 24393cfe7b Read path assets into memory instead of holding open file handles
This helps avoid running into file handle limits when creating archives including thousands of node_modules files.

Tracking a more complete fix through all other codepaths related to assets as part of #325.
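
The pattern in miniature (a standalone sketch using a temp file, not the actual asset code):

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "os"
        "path/filepath"
    )

    func main() {
        // Stand-in for a file that would go into an archive.
        dir, _ := ioutil.TempDir("", "assets")
        defer os.RemoveAll(dir)
        path := filepath.Join(dir, "index.js")
        ioutil.WriteFile(path, []byte("console.log('hi')\n"), 0600)

        // Reading the whole file into memory releases the OS handle right away,
        // so archiving thousands of node_modules files never holds thousands of
        // descriptors open at once.
        data, err := ioutil.ReadFile(path)
        if err != nil {
            panic(err)
        }
        r := bytes.NewReader(data)
        fmt.Println("buffered", r.Len(), "bytes; no open file handle retained")
    }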
2017-08-29 13:33:02 -07:00
joeduffy 627d97d83f Close open Blobs
This change ensures we close all Blobs in the asset/archive logic.
In particular, the archive.Read function returns a map of files to
Blobs and after we are done copying the contents we must ensure
that we invoke Close, otherwise we may leak file handles, sockets,
and so on.  This may or may not be the culprit to the "too many
files open" errors we are hitting while deploying the M5 bits.
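
The shape of the fix, as a standalone sketch (names invented; the point is the explicit Close after each copy):

    package main

    import (
        "fmt"
        "io"
        "io/ioutil"
        "os"
        "strings"
    )

    // copyAndClose copies each reader-backed blob and closes it as soon as the
    // copy completes, so handles don't accumulate across a large archive.
    func copyAndClose(dst io.Writer, blobs map[string]io.ReadCloser) error {
        for name, blob := range blobs {
            _, err := io.Copy(dst, blob)
            blob.Close() // close eagerly rather than deferring inside the loop
            if err != nil {
                return fmt.Errorf("copying %s: %w", name, err)
            }
        }
        return nil
    }

    func main() {
        blobs := map[string]io.ReadCloser{
            "index.js": ioutil.NopCloser(strings.NewReader("console.log('hi')\n")),
        }
        if err := copyAndClose(os.Stdout, blobs); err != nil {
            fmt.Println(err)
        }
    }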
2017-08-28 18:52:51 -07:00
joeduffy a626dcf6a3 Prettify the CLI in a few places
This changes a few things in the CLI, mostly just prettying it up:

    * Label all steps more clearly with the kind of step.  Also
      unify the way we present this during planning and deployment.

    * Summarize the changes that *did not* get made just as clearly
      as those that did.  In other words, stuff like this:

        info: 2 resources changed:
            +1 resource created
            -1 resource deleted
            5 resources unchanged

      and

        info: no resources required
            5 resources unchanged

    * Always print output properties when they are pertinent.
      This includes creates, replacements, and updates.

    * Show replacement creates and deletes very distinctly.  The
      create parts show up minty green and the delete parts show up
      rosey red.  These are the "physical" steps, compared to the
      "logical" step of replacement (which remains marigold).

      I still don't love where we are here.  The asymmetry between
      planning and deployment bugs me, and could be surprising.
      ("Hey, my deploy doesn't look like my plan!")  I don't know
      what developers will want to see here and I feel like in
      general we are spewing far too much into the CLI to make it
      even useful for anything but diagnosing failures afterwards.

      I propose that we should do a deep dive on this during the
      CLI epic, pulumi/pulumi-service#2.

This resolves pulumi/pulumi-fabric#305.
2017-08-06 10:05:51 -07:00
joeduffy ff0eb81944 Export urnName constants
This avoids us needing to hard-code the urnName property in
various tools, in case we ever need to change it again down the road.
2017-08-05 08:32:50 -07:00
joeduffy 007c2f216d Rename name property to urnName
At the moment, we permit resources to carry a name, which the
engine uses as part of URN creation.  Unfortunately, the property
"name" has a very high chance of conflicting with meaningful
user-authored properties.  And furthermore this sort of name,
although key to creating URNs which are core to how the overall
system performs deployments and manages resources, are seldom
used programmatically in Lumi programs.  As a result, it's a real
nuisance that we stole the good name.

This change renames that property from "name" to "urnName".  This
not only has a lower likelihood of conflicting, but it also looks
reasonable sitting alongside the "urn" property.  In fact, in some
future universe, after some of the upcoming runtime changes, we
may not even need the name property on the objects whatsoever.

I had originally toyed with the idea of eliminating the Resource
versus NamedResource distinction.  This is certainly simpler and
remains a possibility.  I didn't do that right now, however, because
the flexibility of letting resource providers name resources however
they see fit still seems possibly useful.  For example, we keep
talking about whether functions can be auto-named based on hashes.
Until we've run those conversations to ground, I'd hate to do some
work that just needs to be undone in order to enable a scenario that
has a non-trivial likelihood of us wanting to explore.
2017-08-05 08:20:57 -07:00
joeduffy c0e4bdb03b This shouldn't be here!
😬
2017-08-03 18:09:14 -07:00
joeduffy a160741931 Tolerate computed and output properties
We are now freely flowing computed and output properties across the
RPC boundary with providers.  As such, we need to tolerate them in
a few more places.  Namely, mapping to and from regular non-resource
property values, and also when copying RPC resource state back onto
live runtime objects.
2017-08-03 11:01:38 -07:00
joeduffy 35aa6b7559 Rename pulumi/lumi to pulumi/pulumi-fabric
We are renaming Lumi to Pulumi Fabric.  This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
2017-08-02 09:25:22 -07:00
joeduffy 5fb014e53c Explicitly track default properties
This changes the RPC interfaces between Lumi and provider ever so
slightly, so that we can track default properties explicitly.  This
is required to perform accurate diffing between inputs provided by
the developer, inputs provided by the system, and outputs.  This is
particularly important for default values that may be indeterminate,
such as those we use in the bridge to auto-generate unique IDs.
Otherwise, we fail to reapply defaults correctly, and trick the
provider into thinking that properties changed when they did not.

This is a small step towards pulumi/lumi#306, in which we will defer
even more responsibility for diffing semantics to the providers.
2017-07-31 18:26:15 -07:00
joeduffy 300af62747 Fix erroneous assert when unmarshaling computed properties 2017-07-31 11:44:34 -07:00
joeduffy 00442b73b4 Alter the way unknown properties are serialized
This change serializes unknown properties anywhere in the entire
property structure, including deeply embedded inside object maps, etc.

This is now done in such a way that we can recover both the computed
nature of the serialized property, along with its expected eventual
type, on the other side of the RPC boundary.

This will let us have perfect fidelity with the new bridge's view on
computed properties, rather than special casing them on "one side".
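
The idea in a standalone sketch (the marker string and field names are invented; the real serializer uses its own scheme):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // A computed (unknown) value is serialized as a small object carrying a
    // sentinel plus the type it will eventually have, so both facts survive
    // the trip across the RPC boundary.
    const unknownSentinel = "computed-value-placeholder"

    func marshalUnknown(eventualType string) map[string]interface{} {
        return map[string]interface{}{"marker": unknownSentinel, "type": eventualType}
    }

    func isUnknown(v interface{}) (string, bool) {
        m, ok := v.(map[string]interface{})
        if !ok || m["marker"] != unknownSentinel {
            return "", false
        }
        t, _ := m["type"].(string)
        return t, true
    }

    func main() {
        props := map[string]interface{}{
            "name": "web",
            "arn":  marshalUnknown("string"), // not known until deployment
        }
        b, _ := json.Marshal(props)

        var back map[string]interface{}
        json.Unmarshal(b, &back)
        if t, ok := isUnknown(back["arn"]); ok {
            fmt.Println("arn is still unknown; eventual type:", t)
        }
    }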
2017-07-21 14:00:30 -07:00
joeduffy b8be7fa5e6 Add a handy Copy function to PropertyMap 2017-07-21 14:00:30 -07:00
Luke Hoban 9243bd5f3f Fix tests after previous commit 2017-07-21 14:00:30 -07:00
joeduffy 40747a94bf Merge inputs with outputs
For Update and Delete operations, we provided just the input state
for a resource.  This is insufficient, because the provider may need
to depend on output state from the Create or prior Update operations.
This change merges the output atop the input during the step application.
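
In miniature (property names invented), the merge is just "inputs first, recorded outputs layered on top":

    package main

    import "fmt"

    // mergeInputsAndOutputs builds the state handed to Update/Delete: declared
    // inputs first, then any outputs recorded by earlier Create/Update calls,
    // so generated values like IDs and ARNs are available to the provider.
    func mergeInputsAndOutputs(inputs, outputs map[string]interface{}) map[string]interface{} {
        merged := map[string]interface{}{}
        for k, v := range inputs {
            merged[k] = v
        }
        for k, v := range outputs {
            merged[k] = v
        }
        return merged
    }

    func main() {
        inputs := map[string]interface{}{"bucketPrefix": "logs"}
        outputs := map[string]interface{}{"bucketName": "logs-4f91c2"}
        fmt.Println(mergeInputsAndOutputs(inputs, outputs))
    }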
2017-07-21 14:00:30 -07:00
joeduffy 4e02105355 Pass old state to the provider's API 2017-07-21 14:00:30 -07:00
joeduffy ae92e68902 Return state as part of Create and Update
As part of the bridge bringup, I've discovered that the property state
returned from Creates does *not* always equal the state that is then
read from calls to Get.  (I suspect this is a bug and that they should
be equivalent, but I doubt it's fruitful to try and track down all
occurrences of this; I bet it's widespread).  To cope with this, we will
return state from Create and Update, instead of issuing a call to Get.
This was a design we considered to start with and frankly didn't have
a super strong reason to do it the current way, other than that it seemed
elegant to place all of the Get logic in one place.

Note that providers may choose to return nil, in which case we will read
state from the provider in the usual Get style.
2017-07-21 14:00:29 -07:00
joeduffy ceb32b9e2b Add a sig field to Asset/Archive
This change mirrors the dynamic marshaling structure on the static
definition of the Asset and Archive types.  This ensures that they
marshal correctly even when deeply embedded inside other structures.
2017-07-18 09:30:31 -07:00
joeduffy 91c90cf2a9 Add diffing logic for assets/archives 2017-07-17 12:11:15 -07:00
joeduffy 002618e605 Add some more asset serialization round-tripping tests 2017-07-17 11:30:10 -07:00
joeduffy 4d708c8567 Fix asset diffing
This change brings the same typed serialization we use for RPC
to the serialization of deployments.  This ensures that we get
repeatable diffs from one deployment to the next.
2017-07-17 10:38:57 -07:00
joeduffy bda607abd8 Permit -1 for randlen and maxlen
This allows -1 for randlen and maxlen to use defaults.  The default
behavior is that randlen uses sha1.Size and maxlen is "no max".
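
Roughly (a standalone sketch of the described defaults, not the actual helper):

    package main

    import (
        "crypto/rand"
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // newUniqueHex appends a random hex suffix to prefix. randlen == -1 means
    // "use sha1.Size random bytes"; maxlen == -1 means "don't truncate".
    func newUniqueHex(prefix string, randlen, maxlen int) string {
        if randlen == -1 {
            randlen = sha1.Size
        }
        b := make([]byte, randlen)
        if _, err := rand.Read(b); err != nil {
            panic(err)
        }
        s := prefix + hex.EncodeToString(b)
        if maxlen != -1 && len(s) > maxlen {
            s = s[:maxlen]
        }
        return s
    }

    func main() {
        fmt.Println(newUniqueHex("mylambda-", -1, -1)) // prefix + 40 hex chars, untruncated
    }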
2017-07-15 09:59:44 -07:00
joeduffy c61dcb5206 Revert "Rename Lumi resource properties"
This reverts commit c3db70849d.

I've opted to take a new strategy to ensure the bridge properties
don't conflict (with manual renames), similar to the name property.
2017-07-15 09:33:23 -07:00
joeduffy 0f18bb40b2 Tolerate some nils in important places 2017-07-14 18:08:33 -07:00
joeduffy a6caef973a Make assets and archives 1st class
This change recognizes assets and archives as 1st class resource
property values.  This is necessary to support them in the new bridge
work, and lays the foundation for fixing pulumi/lumi#153.

I also took the opportunity to clean up some old cruft in the
resource properties area.
2017-07-14 12:28:43 -07:00
joeduffy c3db70849d Rename Lumi resource properties
This renames the basemost resource properties, id and urn, to
names that are less likely to conflict with properties that real
resources will want to use, pid and upn (provider ID and Universal
Pulumi Name, respectively).

I actually ran into this with the current bridge work.  An alternative
solution would be to require derived resources to pick different names,
however this is unfortunate because usually they are more "user-facing"
than ours.  Another alternative is to not hijack the object properties
at all, but that too is problematic because we use these properties
during the evaluation of plans and deployments.

This seems like a reasonable middle ground.
2017-07-14 08:55:07 -07:00
joeduffy a70790b81a Add key replacement functions
This continues to add some handy replacement routines for property
maps, so we can transform keys and values to and fro.
2017-07-12 13:03:18 -07:00
joeduffy ea0461dadb Export plugin prefix constants 2017-07-06 09:46:00 -04:00
joeduffy aeefc27a08 Add some ReadLocations tracing; and skip nulls 2017-07-06 00:02:33 -04:00
joeduffy 06ad983541 Add a ReadLocations engine-side RPC function
This adds a ReadLocations RPC function to the engine interface, alongside
the singular ReadLocation.  The plural function takes a single token that
represents a module or class and we will then return all of the module
or class (static) properties that are currently known.
2017-07-01 13:26:49 -07:00
joeduffy 5d9f7918e9 Add a PropertyMap/Value.MapReplace function
This adds a handy MapReplace function on pkg/resource's PropertyMap and
PropertyValue types.  This is just like the existing Mappable function,
except that it permits easy replacement of elements as the map transformation
occurs.  We need this to perform float64=>int transformations.
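
The shape of such a helper, sketched standalone (the real one works over the engine's property types; this uses plain maps):

    package main

    import "fmt"

    // mapReplace rebuilds a property map, letting the callback replace each
    // leaf value; here it converts float64 (what JSON decoding yields) to int.
    func mapReplace(m map[string]interface{}, repl func(interface{}) interface{}) map[string]interface{} {
        out := make(map[string]interface{}, len(m))
        for k, v := range m {
            if nested, ok := v.(map[string]interface{}); ok {
                out[k] = mapReplace(nested, repl)
                continue
            }
            out[k] = repl(v)
        }
        return out
    }

    func main() {
        props := map[string]interface{}{
            "port": float64(8080),
            "tags": map[string]interface{}{"count": float64(3)},
        }
        ints := mapReplace(props, func(v interface{}) interface{} {
            if f, ok := v.(float64); ok {
                return int(f)
            }
            return v
        })
        fmt.Printf("%#v\n", ints)
    }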
2017-07-01 12:08:55 -07:00
joeduffy 27b819dec6 Add the ability to skip nulls when un/marshaling properties 2017-07-01 11:51:52 -07:00
joeduffy 15a75c9ee4 Catch duplicate URNs during planning
We fail very late in the process of plan application, should a duplicate
URN arise.  This change fails as early in the process as possible and
ensures that it does so with good line number information.
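
The early check amounts to remembering every URN as it is produced (standalone sketch; the URN below is only an example):

    package main

    import "fmt"

    // registerURN fails immediately on a repeated URN instead of letting the
    // duplicate surface late, deep inside plan application.
    func registerURN(seen map[string]bool, urn string) error {
        if seen[urn] {
            return fmt.Errorf("duplicate resource URN %q; URNs must be unique", urn)
        }
        seen[urn] = true
        return nil
    }

    func main() {
        seen := map[string]bool{}
        urn := "urn:pulumi:demo::proj::aws:s3/bucket:Bucket::site"
        fmt.Println(registerURN(seen, urn)) // <nil>
        fmt.Println(registerURN(seen, urn)) // duplicate resource URN ...
    }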
2017-06-27 13:04:06 -07:00
joeduffy 24fd8e8f4a Add cancellation to interpreter
This properly unwinds the interpreter should something happen that
results in cancellation.  This occurs, for example, when the planning
engine encounters an error and decides that it doesn't need to proceed
further with evaluation before it simply goes ahead and exits.
2017-06-27 11:31:17 -07:00
joeduffy 19a359e65d Specialize check failure messages 2017-06-27 10:56:33 -07:00
joeduffy 0ea55328f6 Specialize the null message for config reads 2017-06-27 10:45:29 -07:00
joeduffy 2daea4c3d8 Clarify aspects of using the DCO 2017-06-26 14:46:34 -07:00
joeduffy 3c1041af49 Update license headers 2017-06-23 14:53:41 -07:00
joeduffy d05e7ace91 Ensure we close the plugin host/context
This adds a few missing closes for the plugin host/context.  This
should fix pulumi/lumi#261.  Eventually when we have more robust
nightly test options, and want to spend the time, we should think
about doing more rigorous stress testing that kills processes at
inopportune times and guarantees we don't leak.  I've filed
pulumi/lumi#263 to do that.
2017-06-22 15:18:29 -07:00
joeduffy 8b57310854 Tidy up more lint
This change fixes a few things:

* Most importantly, we need to place a leading "." in the paths
  to Gometalinter, otherwise some sub-linters just silently skip
  the directory altogether.  errcheck is one such linter, which
  is a very important one!

* Use an explicit Gometalinter.json file to configure the various
  settings.  This flips on a few additional linters that aren't
  on by default (like line length checking).  Sadly, a few that
  I'd like to enable take waaaay too much time, so in the future
  we may consider a nightly job (this includes code similarity,
  unused parameters, unused functions, and others that generally
  require global analysis).

* Now that we're running more, however, linting takes a while!
  The core Lumi project now takes 26 seconds to lint on my laptop.
  That's not terrible, but it's long enough that we don't want to
  do the silly "run them twice" thing our Makefiles were previously
  doing.  Instead, we shall deploy some $$($${PIPESTATUS[1]}-1))-fu
  to rely on the fact that grep returns 1 on "zero lines".

* Finally, fix the many issues that this turned up.

I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
2017-06-22 12:09:46 -07:00
Luke Hoban a63efc42a3 Propagate errors on deployment failures
We were not propagating the error from `deployLatest` through
to the CLI error result.  Despite our recent efforts to integrate
gometalinter, there were also several additional similar cases of
ignored error results reported by `errcheck`.  Not yet clear why
these are not being reported via gometalinter.

Fixes #262.
2017-06-21 22:02:57 -07:00
joeduffy 7fe8052941 Fix some lint in our lint
After 233c5a8 landed, I noticed there are a few things to be fixed up:

    * Run gometalinter in all the right places.  We need to run both in
      lint and lint_quiet targets.  I've also cleaned up some of the logic
      around what to suppress so there's less repetition.

    * We currently @ meaningful commands, which is unfortunate, since it
      makes debugging Makefiles tough (especially when looking at CI build
      logs).  Going forward, we should only use @ for meaningless commands,
      like @echo.

    * The AWS project wasn't actually running tslint, because it needs to
      say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
      The current script of `tslint lib/aws/pack/...` wasn't actually
      running lint, hence we missed a lot of AWS lint issues.

    * Fix up the issues that these fixes uncovered.  Mostly err shadowing.
2017-06-21 13:24:35 -07:00
joeduffy f6f166a5c3 Add a TODO for pulumi/lumi#260 2017-06-21 11:33:10 -07:00
joeduffy 97deabb9bd Finish interface for reading configuration
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface.  In summary:

    * Instead of using the NullSource for destructions -- which
      doesn't hook up an interpreter and so any reads of configuration
      variables will fail -- we will enlighten the EvalSource to know
      how to orchestrate destruction interpretation.  The primary
      difference is that we don't actually run the code, but *we do*
      perform all of the necessary configuration and variable init.

    * Associate the active interpreter with the plugin context as
      we are executing, so that the host object can actually read the
      state from the heap as requested to do so by attached plugins.

    * Rename anything "engine" related to use the term "host"; this
      avoids unnecessarily introducing new terminology.

    * Add a new pkg/resource/provider/ package where we can begin
      consolidating helper functionality for resource providers.
      Right now, this includes a wrapper interface atop the gRPC
      machinery necessary to contact the host, in addition to a
      Main function that hides some boilerplate entrypoint code.

    * Add a rpcutil.IsBenignCloseErr routine to let us ignore
      "benign" gRPC errors that are knowingly returned at shutdown.

This commit completes pulumi/lumi#117.
2017-06-21 10:31:06 -07:00
joeduffy e5f229aad8 Use Location.Read from plugin host
This addresses CR feedback from @lukehoban; namely, that we should
be going through the Read API for location reads in the plugin host
to ensure that getters are invoked as appropriate.

I also made Location's various fields private so that we aren't
tempted to make this mistake elsewhere, effectively "forcing" us
to go through the accessor methods.
2017-06-21 08:43:05 -07:00
joeduffy d7093188f0 Introduce an interface to read config
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.

The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible.  We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
2017-06-20 19:45:07 -07:00
joeduffy 26cf93f759 Implement get functions on all resources
This change implements the `get` function for resources.  Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.

For example, we can now look up a SecurityGroup from its ARN:

    let group = aws.ec2.SecurityGroup.get(
        "arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");

The returned object is a fully functional resource object.  So, we can then
link it up with an EC2 instance, for example, in the usual ways:

    let instance = new aws.ec2.Instance(..., {
        securityGroups: [ group ],
    });

This didn't require any changes to the RPC or provider model, since we
already implement the Get function.

There are a few loose ends; two are short term:

    1) URNs are not rehydrated.
    2) Query is not yet implemented.

One is mid-term:

    3) We probably want a URN-based lookup function.  But we will likely
       wait until we tackle pulumi/lumi#109 before adding this.

And one is long term (and subtle):

    4) These amount to I/O and are not repeatable!  A change in the target
       environment may cause a script to generate a different plan
       intermittently.  Most likely we want to apply a different kind of
       deployment "policy" for such scripts.  These are inching towards the
       scripting model of pulumi/lumi#121, which is an entirely different
       beast than the repeatable immutable infrastructure deployments.

Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
2017-06-19 17:29:02 -07:00
joeduffy 7f710a241f Better handle errors during config apply 2017-06-16 15:37:58 -07:00
joeduffy 5f9ed13069 Simplify Check; make it tolerant of computed values
This change simplifies the generated Check interface for providers.
Instead of

    Check(ctx context.Context, obj *T) ([]error, error)

where T is the resource type, we have

    Check(ctx context.Context, obj *T, property string) error

This is done so that we can drive the calls to Check one property
at a time, allowing us to skip any that are computed.  (Otherwise,
we may fail the verification erroneously.)

This has the added advantage that the Check implementations are
simpler and can simply return a single error.  Furthermore, the
generated RPC code handles wrapping the result, so we can just do

    return errors.New("bad");

rather than the previous reflection-laden junk

    return resource.NewFieldError(
        reflect.TypeOf(obj), awsservice.AWSResource_Property,
        errors.New("bad"))
2017-06-16 13:34:11 -07:00
Luke Hoban 33a9452ece Merge pull request #256 from pulumi/examplestest
Add integration testing for examples
2017-06-16 10:17:51 -07:00
joeduffy 5e85e6d543 Improve property validation diagnostics
The change prints the value of a property that fails resource validation.
This makes it much easier to diagnose what's going on.
2017-06-16 10:04:49 -07:00
joeduffy 7d19abc2a3 Print the current environment
This change implements showing a summary of the current environment.
All you need to do is run

    $ lumi env

and the current environment's information will be printed.

This makes it convenient to grab resource information that might be
required, for instance, to correlate with logs (e.g., lambda ARNs).

Eventually, as per pulumi/lumi#184, we want to print details about
all of the resources too.
2017-06-16 09:46:09 -07:00
Luke Hoban 639a2d323d Test more examples
Tests all of our commonly used examples.

Also sets test parallelism to 10 by default
since we are I/O bound on API calls to
the resource providers.

Also avoids using larger EC2 examples in
our samples so that we can keep our test
costs lower :-).
2017-06-16 09:24:31 -07:00
joeduffy d013501e04 Restructure Rendezvous to have a distinct Let vs. Meet
On the first turn, we want to distinguish between a coroutine
running that owns its turn, and a coroutine that knows it doesn't
own the turn and is simply awaiting its turn.  The old Meet logic
wasn't quite right; instead, we'll have the caller tell us this.
2017-06-15 18:20:12 -07:00
Luke Hoban ae03d69645 Wire up APIs to lambdas using output properties
We now have enough of the output properties implementation
working to change our API gateway examples and API
wrapper to correctly wire the API routes to the ARNs of
lambdas passed in to them.

We wire up the lambda to the route, and also create
a permission specific to each route to assign to the
corresponding lambda - providing the least privilege needed
for the API definition.

Also adds `string#toUpperCase` and fixes NewUniqueHex
to match how we are using it.
2017-06-15 16:01:00 -07:00
joeduffy 085e39230b Test missing and output properties
This tests the conditions that triggered pulumi/lumi#251.
2017-06-15 13:08:06 -07:00
joeduffy 3e4e791fa4 Also overwrite output property slots
This change overwrites output property slots in runtime objects
after performing a CRUD operation, in addition to null or missing
slots, fixing #251.  The problem is that we sometimes have output
property values pre-populated in an object, and sometimes don't,
depending on various things (both are legal).  We should handle both.
2017-06-15 13:08:06 -07:00
joeduffy 9698309f2b Model resource ID and URN as output properties
This change exposes ID and URN properties on resources, as appropriate,
so that they may be read and used in Lumi scripts.
2017-06-14 17:00:13 -07:00
joeduffy 2ac303f703 Fix deployment hang (pulumi/lumi#246)
The recent change to run the interpreter and planner on separate goroutines
created the need to perform rendezvous-style synchronization between them.
Although the case of an invoked function properly tore down the synchronization
by communicating the error, we seldom directly invoke functions for JavaScript
programs because of the way module entrypoint code ends up in initializers.
This requires that we propagate errors correctly out of module and class
initializers, in the standard way, so that the unwind makes its way to the top.

This fixes pulumi/lumi#246.
2017-06-14 15:52:36 -07:00
Luke Hoban 441f32d155 A few tweaks to lint fixes 2017-06-13 16:47:55 -07:00
Luke Hoban 282f40d3e3 Merge branch 'master' into bforsyth927-gometalinter 2017-06-13 16:28:12 -07:00
Britton Forsyth 01003ad48b Implemented highlighted edits 2017-06-13 11:01:23 -07:00
joeduffy 01bbc23144 Only mark immediate outputs as outputs
The primary purpose of this change is to mark only immediate outputs
on a resource object as "output" and categorize the rest as computed.

It also contains a few minor things:

    * Rebase atop the latest in master.

    * Always marshal unknowns as their default value.

    * Permit a computed value for the existing ID property, in addition to null.

    * Tidy up some asserts.
2017-06-13 07:10:13 -07:00
joeduffy 0d836ae0bd Recover from deployment failures 2017-06-13 07:10:13 -07:00
joeduffy 2eb78e1036 Label replacements as simply "replace" 2017-06-13 07:10:13 -07:00
joeduffy 75a2f14d10 Propagate IDs/outs differently based on step kind
This change updates the ID/output propagation logic to properly handle
the case of replacements, in addition to accurately conveying the fact
that an update may change the values of output properties (but not the ID).

Also fixes a formatting issue with the replacement diffing displays.
2017-06-13 07:10:13 -07:00
joeduffy 25c52a04c5 Tidy up some loose ends
This removes some loose ends and reimplements `lumi pack eval`.
2017-06-13 07:10:13 -07:00
joeduffy dd9e6b35f4 Introduce an OpSame planning step
This change introduces an OpSame planning step.  The reason we need
this is so that we can apply the necessary output properties, including
the ID, even as we are simply walking the plan (i.e., when we aren't
actually performing a deployment).  This ensures that the object state
evolves as required to let reads of output properties propagate in the
ways necessary to reproduce past executions of the program.
2017-06-13 07:10:13 -07:00
joeduffy 9d73c54994 Run post-construction hooks before freezing
We need to run the post-construction hook *before* freezing an object's
readonly properties, since the hook will actually mutate the object in
the case of a deployment (it stores the output properties).  In a sense,
this hook simply becomes an extension of the object's constructor.
2017-06-13 07:10:13 -07:00
joeduffy d1414af321 Fix a few minor things; clean stuff up
* Assert new things in new places.

* Log more interesting tidbits during evaluation.

* Invoke the OnStart hook before triggering initializers.

* Tolerate nil prev snapshots during deletion calculation.

* Handle and serialize missing resource IDs as output props.

* Return "done" flag from Rendezvous.Meet.
2017-06-13 07:10:13 -07:00
joeduffy d277dd5800 More progress on pulumi/lumi#90
This change refactors a number of aspects of the CLI's treatment of
steps, in line with the new scheme, and a number of other miscellaneous
and minor fixes.  It also regenerates all RPC code impacted by recent renames.
2017-06-13 07:10:13 -07:00
joeduffy d044720045 Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.

The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.

The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine.  It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.

There are two other sources, however.  First, a nullSource, which simply
refuses to create new objects.  This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment.  Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.

Boatloads of code is now changed and updated in the various CLI commands.

This further chugs along towards pulumi/lumi#90.  The end is in sight.
2017-06-13 07:10:13 -07:00
joeduffy 6b2408e086 Rewrite plans and deployments
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.

The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
2017-06-13 07:10:13 -07:00
joeduffy c53ddeb678 Overhaul resources, planning, and environments
This change, part of pulumi/lumi#90, overhauls quite a bit of the
core resource, planning, environments, and related areas.

The biggest amount of movement comes from the splitting of pkg/resource
into multiple sub-packages.  This results in:

- pkg/resource: just the core resource data structures.

- pkg/resource/deployment: all planning and deployment logic.

- pkg/resource/environment: all environment, configuration, and
      serialized checkpoint structures and logic.

- pkg/resource/plugin: all dynamically loaded analyzer and
      provider logic, including the actual loading and RPC mechanisms.

This also splits the resource abstraction up.  We now have:

- resource.Resource: a shared interface.

- resource.Object: a resource that is connected to a live object
      that will periodically observe mutations due to ongoing
      evaluation of computations.  Snapshots of its state may be
      taken; however, this is purely a "pre-planning" abstraction.

- resource.State: a snapshot of a resource's state that is frozen.
      In other words, it is no longer connected to a live object.
      This is what will store provider outputs (ID and properties),
      and is what may be serialized into a deployment record.

The branch is in a half-baked state as of this change; more changes
are to come...
2017-06-13 07:10:13 -07:00
Luke Hoban 9bd441be05 Support for nested lambdas and node_modules
LumiJS lambdas can now be serialized when they include calls to other LumiJS lambdas.  The chain of lambda dependencies is jointly serialized into the target Lambda.

Also, LumiJS lambdas now include `node_modules` automatically in the AWS Lambda, ensuring that the runtime execution environment more closely matches the deployment time environment.

An early version of the gh-cicd example supporting #134 is added which uses these capabilities, currently including a mocked GitHub resource provider.
2017-06-12 10:15:20 -07:00
Britton Forsyth 69e4834f63 Merge branch 'master' into gometalinter 2017-06-09 14:34:51 -07:00
Britton Forsyth 8c8f68d4ae Added specified changes 2017-06-09 12:51:31 -07:00
Britton Forsyth 3066fcda78 Implemented suggested edits 2017-06-08 11:44:16 -07:00
Britton Forsyth 7457cadf58 Fixed various additional linting issues 2017-06-08 10:21:17 -07:00
joeduffy 2ab2b09474 Introduce object resources
This change slightly refactors the way resources are created and
implemented.  We now have two implementations of the Resource interface:

* `resource` (in resource_value.go), which is a snapshot of a resource's
  state.  All values are resolved and there is no live reference to any
  heap state or objects.  This will be used when serializing and/or
  deserializing snapshots of deployments.

* `objectResource` (in resource_object.go), which is an implementation
  of the Resource interface that wraps an underlying, live runtime object.
  This currently introduces no functional difference, as fetching Inputs()
  amounts to taking a snapshot of the full state.  But this at least
  gives us a leg to stand on in making sure that output properties are
  read at the right times during evaluation.

This is a fundamental part of pulumi/lumi#90.
2017-06-08 09:26:06 -07:00
joeduffy 027fe0766c Rename computed/output's eventual property to element
This rename now ensures the name mirrors that in the type/symbol layer.
2017-06-08 06:48:23 -07:00
joeduffy dce2fae4c2 Track implicated objects
This change begins to track objects that are implicated in the
creation of computed values.  This ultimately translates into the
resource URNs which are used during dependency analysis and
serialization.  This is part of pulumi/lumi#90.
2017-06-08 06:46:28 -07:00
joeduffy fd719d64cd Fix pulumi/lumi#198
This change fixes the serialization of resource properties during
deployment checkpoints.  We erroneously serialized empty arrays and
empty maps as though they were nil; instead, we want to keep them
serialized as non-nil empty collections, since the presence of a
value might be semantically meaningful.  (We still skip nils.)

Also added some test cases.
2017-06-06 16:42:14 -07:00
joeduffy ec2b964daa Do an initial pass over TODOs
This scrubs about 80% of our TODOs, as part of pulumi/lumi#212.
The remaining 20% will come shortly.
2017-06-05 18:11:51 -07:00
joeduffy db99092334 Implement mapper.Encode "for real"
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment).  It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.

During this, I took the opportunity to also clean up a few other things
that had been bugging me.  Namely, the presence of `mapper.Object` was
always error-prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI.  Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type.  Even better, we
no longer require resource providers to deal with the mapper
infrastructure.  Instead, the `Check` function can simply return an
array of errors.  It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.

As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.

This completes pulumi/lumi#183.
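
A minimal sketch of tag-driven encoding via reflection (flat, exported-field structs only; the real mapper handles far more):

    package main

    import (
        "fmt"
        "reflect"
    )

    type Bucket struct {
        Name   string `lumi:"name"`
        Region string `lumi:"region"`
    }

    // encode walks a struct's fields and emits a plain map keyed by the `lumi`
    // struct tag (falling back to the field name).
    func encode(v interface{}) map[string]interface{} {
        out := map[string]interface{}{}
        val := reflect.ValueOf(v)
        typ := val.Type()
        for i := 0; i < typ.NumField(); i++ {
            key := typ.Field(i).Tag.Get("lumi")
            if key == "" {
                key = typ.Field(i).Name
            }
            out[key] = val.Field(i).Interface()
        }
        return out
    }

    func main() {
        fmt.Println(encode(Bucket{Name: "logs", Region: "us-west-2"}))
    }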
2017-06-05 17:49:00 -07:00
joeduffy 87004a124e Store both input and output properties distinctly
This changes the resource model to persist input and output properties
distinctly, so that when we diff changes, we only do so on the programmer-
specified input properties.  This eliminates problems when the outputs
differ slightly; e.g., when the provider normalizes inputs, adds its own
values, or fails to produce new values that match the inputs.

This change simultaneously makes progress on pulumi/lumi#90, by beginning
tracking the resource objects implicated in a computed property's value.

I believe this fixes both #189 and #198.
2017-06-04 19:24:48 -07:00
joeduffy f552832a7a Alter diag.Message to discourage format mistakes
This change alters diag.Message to not format strings and, instead,
encourages developers to use the Infof, Errorf, and Warningf varargs
functions.  It also tests that arguments are never interpreted as
format strings.
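
The distinction, sketched standalone (invented Diag type; the point is that plain messages are never treated as format strings):

    package main

    import "fmt"

    type Diag struct{}

    // Info prints the message verbatim; a stray "%s" in user data is harmless.
    func (Diag) Info(msg string) { fmt.Println("info:", msg) }

    // Infof is the explicit, opt-in formatting variant.
    func (Diag) Infof(format string, a ...interface{}) { fmt.Printf("info: "+format+"\n", a...) }

    func main() {
        var d Diag
        userInput := "progress is 100%s of the way there" // accidental format verb
        d.Info(userInput)                   // printed verbatim
        d.Infof("deployed %d resources", 3) // formatting only when asked for
    }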
2017-06-02 18:37:28 -07:00
joeduffy ff37f0b8f9 Fix two lint issues that crept in 2017-06-02 09:05:10 -07:00
joeduffy e1673dfdb0 Add some basic plan tests 2017-06-02 08:53:40 -07:00
joeduffy ce35fc78cf Add some property diff tests 2017-06-01 15:36:22 -07:00
Luke Hoban 5358080ca6 Output property improvements for AWS Function and Role
The AssumeRolePolicyDocument property returned by the AWS IAM GetRole API returns
a URL-encoded JSON string, so we need to decode this before JSON unmarshalling.

The Code property returned by AWS Lambda GetFunction provides a pre-signed S3 URL,
which changes on each call, and is of a different format to what is provided by the user.
For now, we'll not store this back into the Function object.

Add additional output properties to AWS Lambda Function that are stable values returned
from GetFunction.

Also corrects a gap where some property delete operations were not being correctly reported.
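
The decode-then-unmarshal step in isolation (shortened policy document; standalone Go):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/url"
    )

    func main() {
        // IAM GetRole returns the assume-role policy URL-encoded, so decode it
        // before JSON unmarshalling.
        encoded := "%7B%22Version%22%3A%222012-10-17%22%7D"
        decoded, err := url.QueryUnescape(encoded)
        if err != nil {
            panic(err)
        }
        var doc map[string]interface{}
        if err := json.Unmarshal([]byte(decoded), &doc); err != nil {
            panic(err)
        }
        fmt.Println(doc["Version"]) // 2012-10-17
    }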
2017-06-01 15:04:37 -07:00
joeduffy b07056ab10 Create a plan plugin host
This is a minor refactoring to introduce a ProviderHost interface
that is associated with the context and can be swapped in and out for
custom plugin behavior.  This is required to write tests that mock
certain aspects, like loading packages from the filesystem.

In theory, this change incurs zero behavioral changes.
2017-06-01 11:41:24 -07:00
joeduffy 7b5f9df917 Make updates work in the face of output properties
This change fixes up a few things so that updates correctly deal
with output properties.  This involves a few things:

    1) All outputs stored on the pre snapshot need to get propagated
       to the post snapshot during planning at various points.  This
       ensures that the diffing logic doesn't need to be special cased
       everywhere, including both the Lumi and the provider sides.

    2) Names are changed to "input" properties (using a new `lumi` tag
       option, `in`).  These are properties that providers are expected
       to know nothing about, which we must treat with care during diffs.

    3) We read back properties, via Get, after doing an Update just like
       we do after performing a Create.  This ensures that if an update
       has a cascading impact on other properties, it will be detected.

    4) Inspecting a change, prior to updating, must be done using the
       computed property set instead of the real one.  This is to avoid
       mutating the resource objects ahead of actually applying a plan,
       which would be wrong and misleading.
2017-06-01 10:09:52 -07:00
joeduffy b5df277815 Fix a few merges when this branch hit master 2017-06-01 08:51:33 -07:00
joeduffy 23493be8af Classify output properties as adds too 2017-06-01 08:39:48 -07:00
joeduffy e84c2d9388 Remember output properties in snapshot records
This change remembers which properties were computed as outputs,
or even just read back as default values, during a deployment.  This
information is required in the before/after comparison in order to
perform an intelligent diff that doesn't flag, for example, the absence
of "default" values in the after image as deletions (among other things).
As I was in here, I also cleaned up the way the provider interface
works, dealing with concrete resource types, making it feel a little
richer and less like we're doing in-memory RPC.
2017-06-01 08:39:48 -07:00
joeduffy 08ca40c6c6 Unmap properties on the receive side
This change makes progress on a few things with respect to properly
receiving properties on the engine side, coming from the provider side,
of the RPC boundary.  The issues here are twofold:

    1. Properties need to get unmapped using a JSON-tag-sensitive
       marshaler, so that they are cased properly, etc.  For that, we
       have a new mapper.Unmap function (which is ultra lame -- see
       pulumi/lumi#138).

    2. We have the reverse problem with respect to resource IDs: on
       the send side, we must translate from URNs (which the engine
       knows about) and provider IDs (which the provider knows about);
       similarly, then, on the receive side, we must translate from
       provider IDs back into URNs.

As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
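
As a rough sketch of the tag-sensitive unmapping in (1) -- not the real
mapper.Unmap, which does much more -- one can round-trip the untyped payload
through JSON so that field tags are honored:

    package mapperdemo

    import "encoding/json"

    // unmap converts an untyped property payload into a typed target by honoring
    // its JSON field tags. Purely illustrative; a real implementation would avoid
    // the double encoding.
    func unmap(payload map[string]interface{}, target interface{}) error {
        b, err := json.Marshal(payload)
        if err != nil {
            return err
        }
        return json.Unmarshal(b, target)
    }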
2017-06-01 08:39:48 -07:00
joeduffy 87ad371107 Only flow logging to plugins if --logflow
The change to flow logging to plugins is nice, however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors.  After this change, we will only flow if
--logflow is passed, e.g. as in

    $ lumi --logtostderr --logflow -v=9 deploy ...
2017-06-01 08:37:56 -07:00
joeduffy 0a72d5360a Modify provider creates; use get for outs
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so.  As a result, this change takes an initial
whack at implementing Get on all existing AWS resources.  The Get API needs
to return a fully populated structure containing all inputs and outputs.

Believe it or not, this is actually part of pulumi/lumi#90.

This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.

This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties.  For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
2017-06-01 08:36:43 -07:00
joeduffy 7f98387820 Distinguish between computed and output properties
This change introduces the notion of a computed versus an output
property on resources.  Technically, output is a subset of computed,
however it is a special kind that we want to treat differently during
the evaluation of a deployment plan.  Specifically:

* An output property is any property that is populated by the resource
  provider, not code running in the Lumi type system.  Because these
  values aren't available during planning -- since we have not yet
  performed the deployment operations -- they will be latent values in
  our runtime and generally missing at the time of a plan.  This is no
  problem and we just want to avoid marshaling them in inopportune places.

* A computed property, on the other hand, is a different beast altogether.
  Although it is true that one of these is missing a value -- by virtue of the fact
  that they too are latent values, bottoming out in some manner on an
  output property -- they will appear in serializable input positions.
  Not only must we treat them differently during the RPC handshake and
  in the resource providers, but we also want to guarantee they are gone
  by the time we perform any CRUD operations on a resource.  They are
  purely a planning-time-only construct.
2017-06-01 08:36:43 -07:00
joeduffy d79c41f620 Initial support for output properties (1 of 3)
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.

In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways.  Namely,
operations against Latent<T>s generally produce new Latent<U>s.

During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values.  An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange.  As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values.  My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.

For now, using an unresolved Latent<T> in a conditional will lead
to an error.  See pulumi/lumi#67.  Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support.  That is a missing 1/3.

Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values.  Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
2017-06-01 08:32:12 -07:00
Luke Hoban 9531483e19 Add tests for AWS DynamoDB Table provider 2017-05-31 17:06:16 -07:00
joeduffy ab6e2466c7 Flow logging information to plugins
This change flows --logtostderr and -v=x settings to any dynamically
loaded plugins so that running Lumi's command line with these flags
will also result in the plugins logging at the requested levels.  I've
found this handy for debugging purposes.
2017-05-30 10:19:33 -07:00
Luke Hoban 7f8b1e59c1 Support for lambdas (#158)
Resolves #137.

This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.

A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.

LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.

Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
2017-05-25 16:55:14 -07:00
Luke Hoban 35a41f9e4a Support Update on IAM Role and Lambda Function 2017-05-22 22:57:55 -07:00
joeduffy 3bad4fde98 Disable storing output properties
The storing of output properties won't work correctly until pulumi/lumi#90
is completed.  The update logic sees properties that weren't supplied by the
developer and thinks this means an update is required; this is easy to fix
but better to just roll into the overall pending change that will land soon.
2017-05-22 13:20:57 -07:00
Luke Hoban 0f99762e2e Add AWS Elastic Beanstalk resource providers (#154)
Includes support for:
* Application
* ApplicationVersion
* Environment
2017-05-21 21:45:28 -07:00
joeduffy 4108c51549 Reclassify Lumi under the Apache 2.0 license
This is part of pulumi/lumi#147.
2017-05-18 14:51:52 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy eee0f3b717 Fix some golint warnings 2017-05-13 20:04:35 -04:00
joeduffy 7d8aadeb1e Implement chronological stable object keys
We need a stable object key enumeration order and we might as well leverage
ECMAScript's definition for this.  As of ES6, key ordering is specified; see
https://tc39.github.io/ecma262/#sec-ordinaryownpropertykeys.

I haven't fully implemented the "numbers come first" part (we can do this as
soon as we have support for Object.keys()), but the chronological part works.
2017-05-06 16:09:49 -07:00
joeduffy 1e67162331 Fix a couple silly mistakes 2017-05-04 09:53:52 -07:00
joeduffy 6902d7e1b2 Update AWS Lambdas to take archives, not assets 2017-05-01 09:38:23 -07:00
joeduffy 335ea01275 Implement archives
Our initial implementation of assets was intentionally naive, because
they were limited to single-file assets.  However, it turns out that for
real scenarios (like lambdas), we want to support multi-file assets.

In this change, we introduce the concept of an Archive.  An archive is
what the term classically means: a collection of files, addressed as one.
For now, we support three kinds: tarfile archives (*.tar), gzip-compressed
tarfile archives (*.tgz, *.tar.gz), and normal zipfile archives (*.zip).

There is a fair bit of library support for manipulating Archives as a
logical collection of Assets.  I've gone to great lengths to avoid making
copies, however, sometimes it is unavoidable (for example, when sizes
are required in order to emit offsets).  This is also complicated by the
fact that the AWS libraries often want seekable streams, if not actual
raw contiguous []byte slices.
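
As a sketch of what consuming one of these archives can look like in Go, using
only the standard library (this is not the Archive API itself, and the file
name is made up):

    package main

    import (
        "archive/tar"
        "compress/gzip"
        "fmt"
        "io"
        "os"
    )

    func main() {
        f, err := os.Open("code.tgz") // hypothetical archive path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        gz, err := gzip.NewReader(f)
        if err != nil {
            panic(err)
        }
        tr := tar.NewReader(gz)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break // end of archive
            }
            if err != nil {
                panic(err)
            }
            // Each entry is addressable as a named asset within the archive.
            fmt.Printf("asset %s (%d bytes)\n", hdr.Name, hdr.Size)
        }
    }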
2017-04-30 12:37:24 -07:00
joeduffy fe93f5e76f Strongly type resource IDs in the IDL/RPC/Providers
This change simply uses the `resource.ID` type in all the places
where it belongs, rather than using `string`-typed resource IDs.
2017-04-29 13:27:39 -07:00
joeduffy 40ac0a1d2b Support assets in the IDL 2017-04-28 12:07:49 -07:00
joeduffy 47ef3f673b Rename PreviewUpdate (again)
Unfortunately, this wasn't a great name.  The old one stunk, but the
new one was misleading at best.  The thing is, this isn't about performing
an update -- it's about NOT doing an update, depending on its return value.
Further, it's not just previewing the changes, it is actively making a
decision on what to do in response to them.  InspectUpdate seems to convey
this and I've unified the InspectUpdate and Update routines to take a
ChangeRequest, instead of UpdateRequest, to help imply the desired behavior.
2017-04-27 11:18:49 -07:00
joeduffy 507a2609a7 Add an initial implementation of CIDLC
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.

I've been kicking the tires with this locally enough to checkpoint the
current version.  There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
2017-04-25 15:05:51 -07:00
joeduffy 8c58950639 Tolerate nils in output property marshaling 2017-04-25 14:04:22 -07:00
joeduffy 1edced2d4b Add the ability to convert structs to PropertyMaps 2017-04-21 15:27:32 -07:00
joeduffy d6abea728c Add outputs to the Create provider's return
In order to support output properties (pulumi/coconut#90), we need to
modify the Create gRPC interface for resource providers slightly.  In
addition to returning the ID, we need to also return any properties
computed by the AWS provider itself.  For instance, this includes ARNs
and IDs of various kinds.  This change simply propagates the resources
but we don't actually support reading the outputs just yet.
2017-04-21 14:15:06 -07:00
joeduffy 0b6e262b46 Rename resource provider methods
This change renames two provider methods:

    * Read becomes Get.

    * UpdateImpact becomes PreviewUpdate.

These just read a whole lot nicer than the old names.
2017-04-20 14:09:00 -07:00
joeduffy 53e7bfbb86 Rearrange the way stderr/stdout is handled for plugins
The order of operations for stderr/stdout monitoring with plugins
managed to hide some important errors.  For example, if something
was written to stderr *before* the port was parsed from stdout, a
very possible scenario if the plugin fails before it has even
properly started, then we would silently drop the stderr on the floor
leaving behind no indication of what went wrong.  The new ordering
ensures that stderr is never ignored with some minor improvements
in the case that part of the port is parsed from stdout but it
ultimately ends in an error (e.g., if an EOF occurs prematurely).
2017-04-19 15:01:04 -07:00
joeduffy f429bc6a0c Use github.com/pkg/errors for errors
This change moves us over to the github.com/pkg/errors package to
encourage the addition of more context associated with failures.
2017-04-19 14:46:50 -07:00
joeduffy 958d67d444 Move NewCheckResponse into coconut/pkg/resource package 2017-04-19 14:25:49 -07:00
joeduffy 973fccf09d Implement AWS Lambda resource provider
This change introduces a basic AWS Lambda resource provider.  It supports
C--D, but not -RU-, yet.
2017-04-18 11:02:04 -07:00
joeduffy e8d7ef620f Add a couple missing trace errs 2017-04-17 18:05:12 -07:00
joeduffy 30237bb28f Regen Glide lock; fix two govet mistakes 2017-04-17 17:04:00 -07:00
joeduffy b3f430186d Implement S3 bucket objects
This change includes a first basic whack at implementing S3 bucket
objects.  It leverages the assets infrastructure put in place in the
last commit, supporting uploads from text, files, or arbitrary URIs.

Most of the interesting object properties remain unsupported for now,
but with this we can upload and delete basic S3 objects, sufficient
for a lot of the lambda functions management we need to implement.
2017-04-17 13:34:19 -07:00
joeduffy 67248789b3 Introduce assets
This change introduces the basic concept of assets.  It is far from
fully featured, however, it is enough to start adding support for various
storage kinds that require access to I/O-backed data (files, etc).

The challenge is that Coconut is deterministic by design, and so you
cannot simply read a file in an ad-hoc manner and present the bytes to
a resource provider.  Instead, we will model "assets" as first class
entities whose data source is described to the system in a more declarative
manner, so that the system and resource providers can manage them.

There are three ways to create an asset at the moment:

1. A constant, in-memory string.
2. A path to a file on the local filesystem.
3. A URI, whose scheme is extensible.

Eventually, we want to support byte blobs, but due to our use of a
"JSON-like" type system, this isn't easily expressible just yet.

The URI scheme is extensible in that file://, http://, and https://
are supported "out of the box", but individual providers are free to
recognize their own schemes and support them.  For instance, copying
one S3 object to another will be supported simply by passing a URI
with the s3:// protocol in the usual way.

Many utility functions are yet to be written, but this is a start.
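
A minimal sketch of the three creation modes, with assumed names and shapes
(the real API differs and adds many helpers):

    package asset

    import (
        "io/ioutil"
        "net/http"
    )

    // Asset is a declaratively described data source.
    type Asset interface {
        Read() ([]byte, error)
    }

    // StringAsset holds a constant, in-memory string.
    type StringAsset struct{ Text string }

    func (a StringAsset) Read() ([]byte, error) { return []byte(a.Text), nil }

    // FileAsset points at a file on the local filesystem.
    type FileAsset struct{ Path string }

    func (a FileAsset) Read() ([]byte, error) { return ioutil.ReadFile(a.Path) }

    // URIAsset fetches its contents from a URI; only http(s) is sketched here,
    // whereas the real scheme set is extensible.
    type URIAsset struct{ URI string }

    func (a URIAsset) Read() ([]byte, error) {
        resp, err := http.Get(a.URI)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return ioutil.ReadAll(resp.Body)
    }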
2017-04-17 13:00:26 -07:00
joeduffy 9c1ea1f161 Fix some poor hygiene
A few linty things crept in; this addresses them.
2017-04-08 07:44:02 -07:00
joeduffy 015730e9a9 Fix a bogus unchanged lookup
We need to look for the "old" resource, not the "new" one, when verifying
an assertion that a dependency that is seemingly unchanged actually is.
2017-03-15 16:46:07 -07:00
joeduffy 95f59273c8 Update copyright notices from 2016 to 2017 2017-03-14 19:26:14 -07:00
joeduffy 80d19d4f0b Use the object mapper to reduce provider boilerplate
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one.  As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
2017-03-12 14:13:44 -07:00
joeduffy 705880cb7f Add the ability to specify analyzers
This change adds the ability to specify analyzers in two ways:

1) By listing them in the project file, for example:

        analyzers:
            - acmecorp/security
            - acmecorp/gitflow

2) By explicitly listing them on the CLI, as a "one off":

        $ coco deploy <env> \
            --analyzer=acmecorp/security \
            --analyzer=acmecorp/gitflow

This closes out pulumi/coconut#119.
2017-03-11 10:07:34 -08:00
joeduffy 45064d6299 Add basic analyzer support
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119.  In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource.  The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure properties of them are correct, such as checking style,
security, etc. rules).  These run simultaneously with overall checking.

Analyzers are loaded as plugins just like providers are.  The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.

This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
2017-03-10 23:49:17 -08:00
joeduffy 807d355d5a Rename plugin prefix from coco-ressrv to coco-resource 2017-03-10 20:48:09 -08:00
joeduffy 3b3b56a836 Properly reap child processes
This change reaps child plugin processes before exiting.  It also hardens
some of the exit paths to avoid os.Exiting from the middle of a callstack.
2017-03-07 13:47:42 +00:00
joeduffy 86dc13ed5b More term rotations
This changes a few naming things:

* Rename "husk" to "environment" (`coco env` for short).

* Rename NutPack/NutIL to CocoPack/CocoIL.

* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.

* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.

* Rename the package asset directory from nutpack/ to .coconut/.
2017-03-06 14:32:39 +00:00
joeduffy 6194a59798 Add a pre-pass to validate resources before creating/updating
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties.  This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caught early
before it's too late).  My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.

This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet.  I'll add a work item now for that...
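
As an illustration only, a provider-side Check might look roughly like this;
the types, field names, and method shape are assumptions, not the actual RPC
surface:

    package awsprovider

    type PropertyMap map[string]interface{}

    // CheckFailure describes a single property validation problem.
    type CheckFailure struct {
        Property string
        Reason   string
    }

    type instanceProvider struct{}

    // Check validates proposed properties before any create/update runs.
    func (p *instanceProvider) Check(props PropertyMap) ([]CheckFailure, error) {
        var failures []CheckFailure
        ami, ok := props["imageId"].(string)
        if !ok || ami == "" {
            failures = append(failures, CheckFailure{"imageId", "an AMI ID must be supplied"})
        }
        // A real provider could also call EC2 DescribeImages here to confirm the
        // AMI actually exists in the target region.
        return failures, nil
    }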
2017-03-02 18:15:38 -08:00
joeduffy adf852dd84 Fix an off by one (duhhh) 2017-03-02 17:15:13 -08:00
joeduffy 076d689a05 Rename Monikers to URNs
This change is mostly just a rename of Moniker to URN.  It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn🥥<name>", where <name> is the same as the prior Moniker.

This is a minor step that helps to prepare us for pulumi/coconut#109.
2017-03-02 17:10:10 -08:00
joeduffy 2ce75cb946 Make security group changes imply replacement 2017-03-02 16:16:18 -08:00
joeduffy 966969945b Add a resource.NewUniqueHex API (and use it)
This change adds a new resource.NewUniqueHex API, that simply generates
a unique hex string with the given prefix, with a specific count of
random bytes, and optionally capped to a maximum length.

This is used in the AWS SecurityGroup resource provider to avoid name
collisions, which is especially important during replacements (otherwise
we cannot possibly create a new instance before deleting the old one).

This resolves pulumi/coconut#108.
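
A sketch of what such a helper might look like (the actual signature and
behavior may differ):

    package resource

    import (
        "crypto/rand"
        "encoding/hex"
    )

    // NewUniqueHex appends randlen random bytes, hex-encoded, to prefix; if
    // maxlen is non-negative, the result is truncated to at most maxlen chars.
    func NewUniqueHex(prefix string, randlen, maxlen int) (string, error) {
        bs := make([]byte, randlen)
        if _, err := rand.Read(bs); err != nil {
            return "", err
        }
        s := prefix + hex.EncodeToString(bs)
        if maxlen >= 0 && len(s) > maxlen {
            s = s[:maxlen]
        }
        return s, nil
    }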
2017-03-02 16:02:41 -08:00
joeduffy 523c669a03 Track which updates triggered a replacement
This change tracks which updates triggered a replacement.  This enables
better output and diagnostics.  For example, we now colorize those
properties differently in the output.  This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.
2017-03-02 15:24:39 -08:00
joeduffy f0d9b12a3c Don't emit logical step resources while checkpointing 2017-03-02 13:14:57 -08:00
joeduffy bd613a33e6 Make replacement first class
This change, part of pulumi/coconut#105, rearranges support for
resource replacement.  The old model didn't properly account for
the cascading updates and possible replacement of dependencies.

Namely, we need to model a replacement as a creation followed by
a deletion, inserted into the overall DAG correctly so that any
resources that must be updated are updated after the creation but
prior to the deletion.  This is done by inserting *three* nodes
into the graph per replacement: a physical creation step, a
physical deletion step, and a logical replacement step.  The logical
step simply makes it nicer in the output (the plan output shows
a single "replacement" rather than the fine-grained outputs, unless
they are requested with --show-replace-steps).  It also makes it
easier to fold all of the edges into a single linchpin node.

As part of this, the update step no longer gets to choose whether
to recreate the resource.  Instead, the engine takes care of
orchestrating the replacement through actual create and delete calls.
2017-03-02 09:52:08 -08:00
joeduffy df3c0dcb7d Display and colorize replacements distinctly 2017-03-01 13:34:29 -08:00
joeduffy fe0bb4a265 Support replacement IDs
This change introduces a new RPC function to the provider interface;
in pseudo-code:

    UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap)
        (bool, PropertyMap, error)

Essentially, during the planning phase, we will consult each provider
about the nature of a proposed update.  This update includes a set of
old properties and the new ones and, if the resource provider will need
to replace the property as a result of the update, it will return true;
in general, the PropertyMap will eventually contain a list of all
properties that will be modified as a result of the operation (see below).

The planning phase reacts to this by propagating the change to dependent
resources, so that they know that the ID will change (and so that they
can recalculate their own state accordingly, possibly leading to a ripple
effect).  This ensures the overall DAG / schedule is ordered correctly.

This change is most of pulumi/coconut#105.  The only missing piece
is to generalize replacing the "ID" property with replacing arbitrary
properties; there are hooks in here for this, but until pulumi/coconut#90
is addressed, it doesn't make sense to make much progress on this.
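
Rendered as a Go interface, the pseudo-code above might look as follows; the
types here are placeholders, not the real definitions:

    package provider

    type (
        ID          string
        Type        string
        PropertyMap map[string]interface{}
    )

    // Provider is consulted during planning; UpdateImpact reports whether moving
    // from olds to news forces a replacement, plus the set of impacted properties.
    type Provider interface {
        UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap) (bool, PropertyMap, error)
    }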
2017-03-01 09:08:53 -08:00
joeduffy a4e806a07c Remember old moniker to ID mappings
For certain update shapes, we will need to recover an ID of an already-deleted,
or soon-to-be-deleted resource; in those cases, we have a moniker but want to
serialize an ID.  This change implements support for remembering/recovering them.
2017-02-28 17:03:33 -08:00
joeduffy cf2788a254 Allow restarting from partial failures
This change fixes a couple issues that prevented restarting a
deployment after partial failure; this was due to the fact that
unchanged resources didn't propagate IDs from old to new.  This
is remedied by making unchanged a map from new to old, and making
ID propagation the first thing plan application does.
2017-02-28 16:09:56 -08:00
joeduffy 6a2edc9159 Ensure configuration round-trips in Huskfiles 2017-02-28 15:43:46 -08:00
joeduffy 1c43abffec Fix some go vet issues 2017-02-28 11:02:33 -08:00
joeduffy 7f0a97a4e3 Print configuration variables; etc.
This change does a few things:

* First and foremost, it tracks configuration variables that are
  initialized, and optionally prints them out as part of the
  prelude/header (based on --show-config), both in a dry-run (plan)
  and in an actual deployment (apply).

* It tidies up some of the colorization and messages, and includes
  nice banners like "Deploying changes:", etc.

* Fix an assertion.

* Issue a new error

      "One or more errors occurred while applying X's configuration"

  just to make it easier to distinguish configuration-specific
  failures from ordinary ones.

* Change config keys to tokens.Token, not tokens.ModuleMember,
  since it is legal for keys to represent class members (statics).
2017-02-28 10:32:24 -08:00
joeduffy d91b04d8f4 Support config maps
This change adds support for configuration maps.

This is a new feature that permits initialization code to come from markup,
after compilation, but before evaluation.  There is nothing special with this
code as it could have been authored by a user.  But it offers a convenient
way to specialize configuration settings per target husk, without needing
to write code to specialize each of those husks (which is needlessly complex).

For example, let's say we want to have two husks, one in AWS's us-west-1
region, and the other in us-east-2.  From the same source package, we can
just create two husks, let's say "prod-west" and "prod-east":

    prod-west.json:
    {
        "husk": "prod-west",
        "config": {
            "aws:config:region": "us-west-1"
        }
    }

    prod-east.json:
    {
        "husk": "prod-east",
        "config": {
            "aws:config:region": "us-east-2"
        }
    }

Now when we evaluate these packages, they will automatically poke the
right configuration variables in the AWS package *before* actually
evaluating the CocoJS package contents.  As a result, the static variable
"region" in the "aws:config" package will have the desired value.

This is obviously fairly general purpose, but will allow us to experiment
with different schemes and patterns.  Also, I need to whip up support
for secrets, but that is a task for another day (perhaps tomorrow).
2017-02-27 19:43:54 -08:00
joeduffy 9b55505463 Implement AWS security group updates 2017-02-27 13:33:08 -08:00
joeduffy eca5c38406 Fix a handful of update-related issues
* Delete husks if err == nil, not err != nil.

* Swizzle the formatting padding on array elements so that the
  diff modifier + or - binds more tightly to the [N] part.

* Print the un-doubly-indented padding for array element headers.

* Add some additional logging to step application (it helped).

* Remember unchanged resources even when glogging is off.
2017-02-27 11:27:36 -08:00
joeduffy 3bdbf17af2 Rename --show-sames to --show-unchanged
Per Eric's feedback.
2017-02-27 11:08:14 -08:00
joeduffy 09e328e3e6 Extract settings from the correct old/new snapshot 2017-02-27 11:02:39 -08:00
joeduffy afbd40c960 Add a --show-sames flag
This change adds a --show-sames flag to `coco husk deploy`.  This is
useful as I'm working on updates, to show what resources haven't changed
during a deployment.
2017-02-27 10:58:24 -08:00
joeduffy 88fa0b11ed Checkpoint deployments
This change checkpoints deployments properly.  That is, even in the
face of partial failure, we should keep the huskfile up to date.  This
accomplishes that by tracking the state during plan application.

There are still ways in which this can go wrong, however.  Please see
pulumi/coconut#101 for additional thoughts on what we might do here
in the future to make checkpointing more robust in the face of failure.
2017-02-27 10:26:44 -08:00
joeduffy 604370f58b Propagate IDs from old to new during updates 2017-02-26 13:36:30 -08:00
joeduffy ace693290f Fix the directionality of delete edges 2017-02-26 12:05:49 -08:00
joeduffy 44783cffb7 Don't overwrite unmarshaled deployment info 2017-02-26 12:00:00 -08:00
joeduffy ff3f2232db Only use args if non-nil 2017-02-26 11:53:02 -08:00
joeduffy 2f60a414c7 Reorganize deployment commands
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit.  This
makes targets ("husks") more of a first class thing.

Namely, you must first initialize a husk before using it:

    $ coco husk init staging
    Coconut husk 'staging' initialized; ready for deployments

Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments.  The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:

    $ ... make some changes ...
    $ coco husk deploy staging
    ... standard deployment progress spew ...

Finally, should you want to teardown an entire environment:

    $ coco husk destroy staging
    ... standard deletion progress spew for all resources ...
    Coconut husk 'staging' has been destroyed!
2017-02-26 11:20:14 -08:00
joeduffy 2fec5f74d5 Make DeepEquals nullary logic match Diff 2017-02-25 18:24:12 -08:00
joeduffy 977b16b2cc Add basic targeting capability
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update.  This simplifies the management of deployment
records, checkpoints, and snapshots.

I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming).  The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:

    nutpack/
        bin/
            Nutpack.json
            ... any other compiled artifacts ...
        husks/
            ... one snapshot per husk ...

For example, if we had a stage and prod husk, we would have:

    nutpack/
        bin/...
        husks/
            prod.json
            stage.json

In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment.  These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.

The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
2017-02-25 09:24:52 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy e0440ad312 Print step op labels 2017-02-24 17:44:54 -08:00
joeduffy b43c374905 Fix a few more things about updates
* Eliminate some superfluous "\n"s.

* Remove the redundant properties stored on AWS resources.

* Compute array diff lengths properly (+1).

* Display object property changes from null to non-null as
  adds; and from non-null to null as deletes.

* Fix a boolean expression from ||s to &&s.  (Bone-headed).
2017-02-24 17:02:02 -08:00
joeduffy 53cf9f8b60 Tidy up a few things
* Print a pretty message if the plan has nothing to do:

        "info: nothing to do -- resources are up to date"

* Add an extra validation step after reading in a snapshot,
  so that we detect more errors sooner.  For example, I've
  fed in the wrong file several times, and it just chugs
  along as though it were actually a snapshot.

* Skip printing nulls in most plan outputs.  These just
  clutter up the output.
2017-02-24 16:44:46 -08:00
joeduffy 877fa131eb Detect duplicate object names
This change detects duplicate object names (monikers) and issues a nice
error message with source context included.  For example:

    index.ts(260,22): error MU2006: Duplicate objects with the same name:
        prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group

The prior code asserted and failed abruptly, whereas this actually points
us to the offending line of code:

    let group1 = new aws.ec2.SecurityGroup("group", { ... });
    let group2 = new aws.ec2.SecurityGroup("group", { ... });
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
2017-02-24 16:03:06 -08:00
joeduffy 14e3f19437 Implement name property in AWS provider/library 2017-02-24 15:41:56 -08:00
joeduffy c120f62964 Redo object monikers
This change overhauls the way we do object monikers.  The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions.  The new approach mixes some amount of "automatic scoping"
plus some "explicit naming."  Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.

Each moniker has four parts:

    <Namespace>::<AllocModule>::<Type>::<Name>

wherein each element is the following:

    <Namespace>     The namespace being deployed into
    <AllocModule>   The module in which the object was allocated
    <Type>          The type of the resource
    <Name>          The assigned name of the resource

The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.

The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created.  In the future, we may wish to
extend this to also track the module under evaluation.  (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)

The <Name> warrants more discussion.  The resource provider is consulted
via a new gRPC method, Name, that fetches the name.  How the provider
does this is entirely up to it.  For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`); and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).

This should overall produce better results with respect to moniker
collisions, ability to match resources, and the usability of the system.
2017-02-24 14:50:02 -08:00
joeduffy 9dc75da159 Diff and colorize update outputs
This change implements detailed object diffing for purposes of displaying
(and colorizing) updated properties during an update deployment.
2017-02-23 19:03:22 -08:00
joeduffy c4d1f60a7e Eliminate a superfluous map allocation 2017-02-23 15:05:30 -08:00
joeduffy 86bfe5961d Implement updates
This change is a first whack at implementing updates.

Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order.  For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies).  These are just special cases of the more
general idea of performing DAG operations in dependency order.

Updates must work in terms of this more general notion.  For example:

* It is an error to delete a resource while another refers to it; thus,
  resources are deleted after deleting dependents, or after updating
  dependent properties that reference the resource to new values.

* It is an error to depend on a resource before it is created;
  thus, resources must be created before dependents are created, and/or
  before updates to existing resource properties that would cause them
  to refer to the new resource.

Of course, all of this is tangled up in a graph of dependencies.  As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.

To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
2017-02-23 14:56:23 -08:00
joeduffy f00b146481 Echo resource provider outputs
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins stdout/stderr streams to it.  In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error.  This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs, however I hope we don't have to deviate too much from this.
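
A rough sketch of the wiring, with placeholder diagnostics callbacks standing
in for the real sink API:

    package plughost

    import (
        "bufio"
        "io"
    )

    // echoStreams forwards each stdout line as an informational message and each
    // stderr line as an error, mirroring the convention described above.
    func echoStreams(stdout, stderr io.Reader,
        infof func(format string, args ...interface{}),
        errorf func(format string, args ...interface{})) {
        go func() {
            s := bufio.NewScanner(stdout)
            for s.Scan() {
                infof("%s", s.Text())
            }
        }()
        go func() {
            s := bufio.NewScanner(stderr)
            for s.Scan() {
                errorf("%s", s.Text())
            }
        }()
    }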
2017-02-22 18:53:36 -08:00
joeduffy 8610a70ca4 Implement stable resource ordering
This change adds custom serialization and deserialization to preserve
the ordering of resources in snapshot files.  Unfortunately, because
Go maps are unordered, the results are scrambled (actually, it turns out,
Go will sort by the key).  An alternative would be to re-sort the results
after reading in the file, or to store them as an array, but this would be a
change to the MuGL file format and is less clear than simply keying each
resource by its moniker as we are doing today.
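
One way to picture the approach (illustrative only; the real code serializes
full snapshot records): write the JSON object ourselves, keyed by moniker, in
deployment order rather than Go map order:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    type Resource struct {
        Type string `json:"type"`
    }

    // marshalOrdered emits {"<moniker>": {...}, ...} with keys in the given order.
    func marshalOrdered(order []string, resources map[string]Resource) ([]byte, error) {
        var buf bytes.Buffer
        buf.WriteByte('{')
        for i, m := range order {
            if i > 0 {
                buf.WriteByte(',')
            }
            k, err := json.Marshal(m)
            if err != nil {
                return nil, err
            }
            v, err := json.Marshal(resources[m])
            if err != nil {
                return nil, err
            }
            buf.Write(k)
            buf.WriteByte(':')
            buf.Write(v)
        }
        buf.WriteByte('}')
        return buf.Bytes(), nil
    }

    func main() {
        out, err := marshalOrdered(
            []string{"prod::index::SecurityGroup::group", "prod::index::Instance::vm"},
            map[string]Resource{
                "prod::index::SecurityGroup::group": {Type: "aws:ec2/securityGroup:SecurityGroup"},
                "prod::index::Instance::vm":         {Type: "aws:ec2/instance:Instance"},
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }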
2017-02-22 17:33:08 -08:00
joeduffy ae99e957f9 Fix a few messages and assertions 2017-02-22 14:43:08 -08:00
joeduffy 9c2013baf0 Implement resource snapshot deserialization 2017-02-22 14:32:03 -08:00
joeduffy 8d71771391 Repivot plan/apply commands; prepare for updates
This change repivots the plan/apply commands slightly.  This is largely
in preparation for performing deletes and updates of existing environments.

The old way was slightly confusing and made things appear more "magical"
than they actually are.  Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.

The new way is to offer three commands: create, update, and delete.  Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely.  The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file plus a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.

Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands.  This will print out the plan without
actually performing any operations.

All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.
2017-02-22 11:21:26 -08:00
joeduffy d4911ad6f6 Implement snapshot MuGL
This change adds support for serializing snapshots in MuGL, per the
design document in docs/design/mugl.md.  At the moment, it is only
exposed from the `mu plan` command, which allows you to specify an
output location using `mu plan --output=file.json` (or `-o=file.json`
for short).  This serializes the snapshot with monikers, resources,
and so on.  Deserialization is not yet supported; that comes next.
2017-02-21 18:31:43 -08:00
joeduffy 0efb8bdd69 Fix a few things
* Specify MinCount/MaxCount when creating an EC2 instance.  These
  are required properties on the RunInstances API.

* Only attempt to unmarshal egressArray/ingressArray when non-nil.

* Remember the context object on the instanceProvider.

* Move the moniker and object maps into the shared context object.

* Marshal object monikers as the resource IDs to which they refer,
  since monikers are useless on "the other side" of the RPC boundary.
  This ensures that, for example, the AWS provider gets IDs it can use.

* Add some paranoia assertions.
2017-02-20 13:55:09 -08:00
joeduffy 81158d0fc2 Make property logic nil-sensitive
...and add some handy plugin-oriented logging.
2017-02-20 13:27:31 -08:00
joeduffy a9085ece0f Trace plugin STDOUT/STDERR 2017-02-20 12:34:15 -08:00
joeduffy 276b6c253d Implement a basic AWS resource provider
This commit includes a basic AWS resource provider.  Mostly it is just
scaffolding, however, it also includes prototype implementations for EC2
instance and security group resource creation operations.
2017-02-20 11:18:47 -08:00
joeduffy dbd1721ced Properly detect "missing file from PATH" os/exec errors 2017-02-19 12:23:26 -08:00
joeduffy 3618837092 Add plan apply progress reporting 2017-02-19 11:58:04 -08:00
joeduffy 09c01dd942 Implement resource provider plugins
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.

In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them.  It will be
a policy decision in server scenarios how much state to share and
between whom.  This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.

Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).

If we find a plugin, we will load it as a child process.

The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n").  This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).

Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client.  The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls.  For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
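
A simplified sketch of that handshake from the host's side -- launch the plugin,
read the port from its first stdout line, then dial gRPC. Error paths are
abbreviated and the names are illustrative, not the actual implementation:

    package plughost

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"

        "google.golang.org/grpc"
    )

    // loadPlugin starts the plugin binary, reads the port it announces on its
    // first line of stdout, and returns a gRPC connection to it.
    func loadPlugin(path string) (*grpc.ClientConn, error) {
        cmd := exec.Command(path)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return nil, err
        }
        if err := cmd.Start(); err != nil {
            return nil, err
        }
        port, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            return nil, err
        }
        addr := fmt.Sprintf("127.0.0.1:%s", strings.TrimSpace(port))
        return grpc.Dial(addr, grpc.WithInsecure())
    }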
2017-02-19 11:08:06 -08:00
joeduffy 6c25ff5cba Drive plan application
This moves us one step closer from planning to application (marapongo/mu#21).
Namely, we now drive the right resource provider operations in response to
the plan's steps.  Those providers, however, are still empty shells.
2017-02-18 11:54:24 -08:00
joeduffy 9621aa7201 Implement deletion plans
This change adds a flag to `plan` so that we can create deletion plans:

    $ mu plan --delete

This will have an equivalent in the `apply` command, achieving the ability
to delete entire sets of resources altogether (see marapongo/mu#58).
2017-02-18 10:33:36 -08:00
joeduffy 6f42e1134b Create object monikers
This change introduces object monikers.  These are unique, serializable
names that refer to resources created during the execution of a MuIL
program.  They are pretty darned ugly at the moment, but at least they
serve their desired purpose.  I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being.  The names right now are
particularly sensitive to simple refactorings.

This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable.  I'd prefer to do that with a bit of
experience under our belts first.
2017-02-18 10:22:04 -08:00
joeduffy d9ee2429da Begin resource modeling and planning
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.

It contains the following key abstractions:

* resource.Provider is a wrapper around the CRUD operations exposed by
  underlying resource plugins.  It will eventually defer to resource.Plugin,
  which itself defers -- over an RPC interface -- to the actual plugin, one
  per package exposing resources.  The provider will also understand how to
  load, cache, and overall manage the lifetime of each plugin.

* resource.Resource is the actual resource object.  This is created from
  the overall evaluation object graph, but is simplified.  It contains only
  serializable properties, for example.  Inter-resource references are
  translated into serializable monikers as part of creating the resource.

* resource.Moniker is a serializable string that uniquely identifies
  a resource in the Mu system.  This is in contrast to resource IDs, which
  are generated by resource providers and generally opaque to the Mu
  system.  See marapongo/mu#69 for more information about monikers and some
  of their challenges (namely, designing a stable algorithm).

* resource.Snapshot is a "snapshot" taken from a graph of resources.  This
  is a transitive closure of state representing one possible configuration
  of a given environment.  This is what plans are created from.  Eventually,
  two snapshots will be diffable, in order to perform incremental updates.
  One way of thinking about this is that a snapshot of the old world's state
  is advanced, one step at a time, until it reaches a desired snapshot of
  the new world's state.

* resource.Plan is a plan for carrying out desired CRUD operations on a target
  environment.  Each plan consists of zero-to-many Steps, each of which has
  a CRUD operation type, a resource target, and a next step.  This is an
  enumerator because it is possible the plan will evolve -- and introduce new
  steps -- as it is carried out (hence, the Next() method).  At the moment, this
  is linearized; eventually, we want to make this more "graph-like" so that we
  can exploit available parallelism within the dependencies.

There are tons of TODOs remaining.  However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.

This is part of marapongo/mu#38 and marapongo/mu#41.
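
As a sketch of the shapes implied by the descriptions above (assumed for
illustration, not the actual declarations):

    package resource

    // Resource carries only serializable state; references become monikers.
    type Resource struct {
        Moniker    string
        Type       string
        Properties map[string]interface{}
    }

    // StepOp is the CRUD operation a step performs.
    type StepOp int

    const (
        OpCreate StepOp = iota
        OpUpdate
        OpDelete
    )

    // Step is one unit of work in a plan.
    type Step interface {
        Op() StepOp
        Resource() *Resource
    }

    // Plan enumerates steps; Next returns nil when there is nothing left to do,
    // which allows new steps to appear as the plan is carried out.
    type Plan interface {
        Next() (Step, error)
    }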
2017-02-17 12:31:48 -08:00
joeduffy 6b60852c76 Generate Golang Protobuf/gRPC code
This change moves the RPC definitions to the pkg/murpc package,
underneath the proto folder.  This is to ensure that the resulting
Protobufs get a good package name "murpc" that is unique within the
overall toolchain.  It also includes a `generate.sh` script that
can be used to manually regenerate client/server code.  Finally,
we are actually checking in the generated files underneath pkg/murpc.
2017-02-10 09:08:06 -08:00
joeduffy 20452a99e1 Add Protobuf definitions for resource providers 2017-02-10 08:55:26 -08:00
joeduffy 8af53d5f69 Burn the ships
This change deletes a bunch of old legacy code.  I've tagged the
prior changelist as "0.1" in case we need to go back and recover
some of the code (as I expect some of the constraint type logic
and AWS-specific code to come in handy down the road).

🔥 🔥 🔥
2017-01-25 17:32:57 -08:00
joeduffy 729af81e44 Move all cloud switching to mu/x MuPackage
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level.  In fact, the whole code-generation
phase of the compiler was based on it.

In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.

Therefore, most of this stuff can go.  I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision.  For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.

I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc).  But for the time being, it's a distraction
getting the core model running.  I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers.  (Although we won't be using
CloudFormation, some of the name generation code might be useful.)  So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.
2017-01-20 09:46:59 -08:00