Commit graph

251 commits

Author SHA1 Message Date
joeduffy 0a72d5360a Modify provider creates; use get for outs
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so.  As a result, this change takes an initial
whack at implementing Get on all existing AWS resources.  The Get API needs
to return a fully populated structure containing all inputs and outputs.

Believe it or not, this is actually part of pulumi/lumi#90.

This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.

This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties.  For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
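
A rough sketch of the reshaped surface, using illustrative names rather
than the actual Lumi RPC types:

    package sketch

    type ID string
    type PropertyMap map[string]interface{}

    // Provider reflects the split described above: Create returns only the
    // new resource's ID, while Get returns the fully populated bag of
    // inputs and outputs.
    type Provider interface {
        Create(t string, props PropertyMap) (ID, error)
        Get(t string, id ID) (PropertyMap, error)
    }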
2017-06-01 08:36:43 -07:00
joeduffy 7f98387820 Distinguish between computed and output properties
This change introduces the notion of a computed versus an output
property on resources.  Technically, output is a subset of computed;
however, it is a special kind that we want to treat differently during
the evaluation of a deployment plan.  Specifically:

* An output property is any property that is populated by the resource
  provider, not code running in the Lumi type system.  Because these
  values aren't available during planning -- since we have not yet
  performed the deployment operations -- they will be latent values in
  our runtime and generally missing at the time of a plan.  This is no
  problem and we just want to avoid marshaling them in inopportune places.

* A computed property, on the other hand, is a different beast altogether.
  Although it's true that one of these is missing a value -- by virtue of
  the fact that they too are latent values, bottoming out in some manner
  on an output property -- they will appear in serializable input positions.
  Not only must we treat them differently during the RPC handshake and
  in the resource providers, but we also want to guarantee they are gone
  by the time we perform any CRUD operations on a resource.  They are
  purely a planning-time-only construct.
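
One illustrative way to picture the distinction (not the actual Lumi
property representation):

    package sketch

    // PropertyValue is either a concrete value, a computed placeholder, or
    // an output placeholder.  Computed values may appear in serializable
    // input positions during planning but must be resolved before any CRUD
    // call; output values are simply filled in by the provider afterwards.
    type PropertyValue struct {
        V        interface{}
        Computed bool
        Output   bool
    }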
2017-06-01 08:36:43 -07:00
joeduffy d79c41f620 Initial support for output properties (1 of 3)
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.

In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways.  Namely,
operations against Latent<T>s generally produce new Latent<U>s.

During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values.  An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange.  As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values.  My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.

For now, using an unresolved Latent<T> in a conditional will lead
to an error.  See pulumi/lumi#67.  Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support.  That is a missing 1/3.

Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values.  Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
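
The gist of Latent<T>, in an intentionally simplified sketch (hypothetical
names; the real runtime representation differs):

    package sketch

    // latent is a speculative value that may or may not have resolved yet.
    type latent struct {
        resolved bool
        value    interface{}
    }

    // apply derives a new latent from this one; if the source is still
    // unresolved -- an "unknown" property value at planning time -- the
    // result is unresolved as well.
    func (l *latent) apply(fn func(interface{}) interface{}) *latent {
        if !l.resolved {
            return &latent{}
        }
        return &latent{resolved: true, value: fn(l.value)}
    }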
2017-06-01 08:32:12 -07:00
Luke Hoban 9531483e19 Add tests for AWS DynamoDB Table provider 2017-05-31 17:06:16 -07:00
joeduffy ab6e2466c7 Flow logging information to plugins
This change flows --logtostderr and -v=x settings to any dynamically
loaded plugins so that running Lumi's command line with these flags
will also result in the plugins logging at the requested levels.  I've
found this handy for debugging purposes.
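
Conceptually this amounts to something like the following (a hypothetical
helper, not the actual plugin-loading code):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // pluginCommand propagates the host's --logtostderr and -v settings so
    // a dynamically loaded plugin logs at the same level as the CLI itself.
    func pluginCommand(path string, logToStderr bool, verbosity int) *exec.Cmd {
        var args []string
        if logToStderr {
            args = append(args, "--logtostderr")
        }
        if verbosity > 0 {
            args = append(args, fmt.Sprintf("-v=%d", verbosity))
        }
        return exec.Command(path, args...)
    }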
2017-05-30 10:19:33 -07:00
Luke Hoban 7f8b1e59c1 Support for lambdas (#158)
Resolves #137.

This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.

A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.

LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.

Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
2017-05-25 16:55:14 -07:00
Luke Hoban 35a41f9e4a Support Update on IAM Role and Lambda Function 2017-05-22 22:57:55 -07:00
joeduffy 3bad4fde98 Disable storing output properties
The storing of output properties won't work correctly until pulumi/lumi#90
is completed.  The update logic sees properties that weren't supplied by the
developer and thinks this means an update is required; this is easy to fix
but better to just roll into the overall pending change that will land soon.
2017-05-22 13:20:57 -07:00
Luke Hoban 0f99762e2e Add AWS Elastic Beanstalk resource providers (#154)
Includes support for:
* Application
* ApplicationVersion
* Environment
2017-05-21 21:45:28 -07:00
joeduffy 4108c51549 Reclassify Lumi under the Apache 2.0 license
This is part of pulumi/lumi#147.
2017-05-18 14:51:52 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy eee0f3b717 Fix some golint warnings 2017-05-13 20:04:35 -04:00
joeduffy 7d8aadeb1e Implement chronological stable object keys
We need a stable object key enumeration order and we might as well leverage
ECMAScript's definition for this.  As of ES6, key ordering is specified; see
https://tc39.github.io/ecma262/#sec-ordinaryownpropertykeys.

I haven't fully implemented the "numbers come first" part (we can do this as
soon as we have support for Object.keys()), but the chronological part works.
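
A sketch of the insertion-ordered ("chronological") behavior, since Go maps
on their own don't preserve ordering (illustrative, not the actual runtime
object representation):

    package sketch

    // orderedMap remembers the order in which keys were first set, so that
    // enumeration can replay them chronologically per the ES6 rules (minus
    // the "numbers come first" part noted above).
    type orderedMap struct {
        keys   []string
        values map[string]interface{}
    }

    func newOrderedMap() *orderedMap {
        return &orderedMap{values: map[string]interface{}{}}
    }

    func (m *orderedMap) set(k string, v interface{}) {
        if _, has := m.values[k]; !has {
            m.keys = append(m.keys, k)
        }
        m.values[k] = v
    }

    func (m *orderedMap) orderedKeys() []string { return m.keys }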
2017-05-06 16:09:49 -07:00
joeduffy 1e67162331 Fix a couple silly mistakes 2017-05-04 09:53:52 -07:00
joeduffy 6902d7e1b2 Update AWS Lambdas to take archives, not assets 2017-05-01 09:38:23 -07:00
joeduffy 335ea01275 Implement archives
Our initial implementation of assets was intentionally naive, because
they were limited to single-file assets.  However, it turns out that for
real scenarios (like lambdas), we want to support multi-file assets.

In this change, we introduce the concept of an Archive.  An archive is
what the term classically means: a collection of files, addressed as one.
For now, we support three kinds: tarfile archives (*.tar), gzip-compressed
tarfile archives (*.tgz, *.tar.gz), and normal zipfile archives (*.zip).

There is a fair bit of library support for manipulating Archives as a
logical collection of Assets.  I've gone to great lengths to avoid making
copies; however, sometimes it is unavoidable (for example, when sizes
are required in order to emit offsets).  This is also complicated by the
fact that the AWS libraries often want seekable streams, if not actual
raw contiguous []byte slices.
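
For illustration, choosing an archive format from a file extension might
look roughly like this (hypothetical helper, not the actual archive code):

    package sketch

    import "strings"

    type archiveFormat int

    const (
        notArchive archiveFormat = iota
        tarArchive
        tarGZIPArchive
        zipArchive
    )

    func detectFormat(path string) archiveFormat {
        switch {
        case strings.HasSuffix(path, ".tar.gz"), strings.HasSuffix(path, ".tgz"):
            return tarGZIPArchive
        case strings.HasSuffix(path, ".tar"):
            return tarArchive
        case strings.HasSuffix(path, ".zip"):
            return zipArchive
        default:
            return notArchive
        }
    }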
2017-04-30 12:37:24 -07:00
joeduffy fe93f5e76f Strongly type resource IDs in the IDL/RPC/Providers
This change simply uses the `resource.ID` type in all the places
where it belongs, rather than using `string`-typed resource IDs.
2017-04-29 13:27:39 -07:00
joeduffy 40ac0a1d2b Support assets in the IDL 2017-04-28 12:07:49 -07:00
joeduffy 47ef3f673b Rename PreviewUpdate (again)
Unfortunately, this wasn't a great name.  The old one stunk, but the
new one was misleading at best.  The thing is, this isn't about performing
an update -- it's about NOT doing an update, depending on its return value.
Further, it's not just previewing the changes, it is actively making a
decision on what to do in response to them.  InspectUpdate seems to convey
this and I've unified the InspectUpdate and Update routines to take a
ChangeRequest, instead of UpdateRequest, to help imply the desired behavior.
2017-04-27 11:18:49 -07:00
joeduffy 507a2609a7 Add an initial implementation of CIDLC
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.

I've been kicking the tires with this locally enough to checkpoint the
current version.  There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
2017-04-25 15:05:51 -07:00
joeduffy 8c58950639 Tolerate nils in output property marshaling 2017-04-25 14:04:22 -07:00
joeduffy 1edced2d4b Add the ability to convert structs to PropertyMaps 2017-04-21 15:27:32 -07:00
joeduffy d6abea728c Add outputs to the Create provider's return
In order to support output properties (pulumi/coconut#90), we need to
modify the Create gRPC interface for resource providers slightly.  In
addition to returning the ID, we need to also return any properties
computed by the AWS provider itself.  For instance, this includes ARNs
and IDs of various kinds.  This change simply propagates the resources
but we don't actually support reading the outputs just yet.
2017-04-21 14:15:06 -07:00
joeduffy 0b6e262b46 Rename resource provider methods
This change renames two provider methods:

    * Read becomes Get.

    * UpdateImpact becomes PreviewUpdate.

These just read a whole lot nicer than the old names.
2017-04-20 14:09:00 -07:00
joeduffy 53e7bfbb86 Rearrange the way stderr/stdout is handled for plugins
The order of operations for stderr/stdout monitoring with plugins
managed to hide some important errors.  For example, if something
was written to stderr *before* the port was parsed from stdout, a
very possible scenario if the plugin fails before it has even
properly started, then we would silently drop the stderr on the floor,
leaving behind no indication of what went wrong.  The new ordering
ensures that stderr is never ignored with some minor improvements
in the case that part of the port is parsed from stdout but it
ultimately ends in an error (e.g., if an EOF occurs prematurely).
2017-04-19 15:01:04 -07:00
joeduffy f429bc6a0c Use github.com/pkg/errors for errors
This change moves us over to the github.com/pkg/errors package to
encourage the addition of more context associated with failures.
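
Typical usage looks like the following (a generic example of the package,
not a specific call site from this change):

    package sketch

    import (
        "os"

        "github.com/pkg/errors"
    )

    func loadSnapshot(path string) ([]byte, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            // Wrapf attaches context (and a stack trace) to the underlying error.
            return nil, errors.Wrapf(err, "reading snapshot %v", path)
        }
        return b, nil
    }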
2017-04-19 14:46:50 -07:00
joeduffy 958d67d444 Move NewCheckResponse into coconut/pkg/resource package 2017-04-19 14:25:49 -07:00
joeduffy 973fccf09d Implement AWS Lambda resource provider
This change introduces a basic AWS Lambda resource provider.  It supports
C--D, but not -RU-, yet.
2017-04-18 11:02:04 -07:00
joeduffy e8d7ef620f Add a couple missing trace errs 2017-04-17 18:05:12 -07:00
joeduffy 30237bb28f Regen Glide lock; fix two govet mistakes 2017-04-17 17:04:00 -07:00
joeduffy b3f430186d Implement S3 bucket objects
This change includes a first basic whack at implementing S3 bucket
objects.  It leverages the assets infrastructure put in place in the
last commit, supporting uploads from text, files, or arbitrary URIs.

Most of the interesting object properties remain unsupported for now,
but with this we can upload and delete basic S3 objects, sufficient
for a lot of the lambda function management we need to implement.
2017-04-17 13:34:19 -07:00
joeduffy 67248789b3 Introduce assets
This change introduces the basic concept of assets.  It is far from
fully featured; however, it is enough to start adding support for various
storage kinds that require access to I/O-backed data (files, etc).

The challenge is that Coconut is deterministic by design, and so you
cannot simply read a file in an ad-hoc manner and present the bytes to
a resource provider.  Instead, we will model "assets" as first class
entities whose data source is described to the system in a more declarative
manner, so that the system and resource providers can manage them.

There are three ways to create an asset at the moment:

1. A constant, in-memory string.
2. A path to a file on the local filesystem.
3. A URI, whose scheme is extensible.

Eventually, we want to support byte blobs, but due to our use of a
"JSON-like" type system, this isn't easily expressible just yet.

The URI scheme is extensible in that file://, http://, and https://
are supported "out of the box", but individual providers are free to
recognize their own schemes and support them.  For instance, copying
one S3 object to another will be supported simply by passing a URI
with the s3:// protocol in the usual way.

Many utility functions are yet to be written, but this is a start.
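
An illustrative sketch of those three kinds (the real representation
differs, but the shape of the idea is the same):

    package sketch

    // Asset describes its data source declaratively; exactly one field is set.
    type Asset struct {
        Text string // 1. a constant, in-memory string
        Path string // 2. a path to a file on the local filesystem
        URI  string // 3. a URI with an extensible scheme (file://, http://, s3://, ...)
    }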
2017-04-17 13:00:26 -07:00
joeduffy 9c1ea1f161 Fix some poor hygiene
A few linty things crept in; this addresses them.
2017-04-08 07:44:02 -07:00
joeduffy 015730e9a9 Fix a bogus unchanged lookup
We need to look for the "old" resource, not the "new" one, when verifying
an assertion that a dependency that is seemingly unchanged actually is.
2017-03-15 16:46:07 -07:00
joeduffy 95f59273c8 Update copyright notices from 2016 to 2017 2017-03-14 19:26:14 -07:00
joeduffy 80d19d4f0b Use the object mapper to reduce provider boilerplate
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one.  As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
2017-03-12 14:13:44 -07:00
joeduffy 705880cb7f Add the ability to specify analyzers
This change adds the ability to specify analyzers in two ways:

1) By listing them in the project file, for example:

        analyzers:
            - acmecorp/security
            - acmecorp/gitflow

2) By explicitly listing them on the CLI, as a "one off":

        $ coco deploy <env> \
            --analyzer=acmecorp/security \
            --analyzer=acmecorp/gitflow

This closes out pulumi/coconut#119.
2017-03-11 10:07:34 -08:00
joeduffy 45064d6299 Add basic analyzer support
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119.  In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource.  The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure their properties are correct, such as checking style,
security, etc. rules).  These run simultaneously with overall checking.

Analyzers are loaded as plugins just like providers are.  The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.

This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
2017-03-10 23:49:17 -08:00
joeduffy 807d355d5a Rename plugin prefix from coco-ressrv to coco-resource 2017-03-10 20:48:09 -08:00
joeduffy 3b3b56a836 Properly reap child processes
This change reaps child plugin processes before exiting.  It also hardens
some of the exit paths to avoid os.Exiting from the middle of a callstack.
2017-03-07 13:47:42 +00:00
joeduffy 86dc13ed5b More term rotations
This changes a few naming things:

* Rename "husk" to "environment" (`coco env` for short).

* Rename NutPack/NutIL to CocoPack/CocoIL.

* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.

* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.

* Rename the package asset directory from nutpack/ to .coconut/.
2017-03-06 14:32:39 +00:00
joeduffy 6194a59798 Add a pre-pass to validate resources before creating/updating
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties.  This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caught early
before it's too late).  My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.

This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet.  I'll add a work item now for that...
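
A hedged sketch of the provider-side shape (illustrative names and types,
not the actual RPC definitions; a real AMI check would consult the target
region rather than merely testing for presence):

    package sketch

    type PropertyMap map[string]interface{}

    // CheckFailure names an offending property and says why it was rejected.
    type CheckFailure struct {
        Property string
        Reason   string
    }

    // check validates proposed property values before any create or update
    // runs; for example, confirming that an AMI was supplied at all.
    func check(t string, props PropertyMap) []CheckFailure {
        var failures []CheckFailure
        if t == "aws:ec2/instance:Instance" {
            if _, has := props["imageId"]; !has {
                failures = append(failures, CheckFailure{"imageId", "an AMI is required"})
            }
        }
        return failures
    }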
2017-03-02 18:15:38 -08:00
joeduffy adf852dd84 Fix an off by one (duhhh) 2017-03-02 17:15:13 -08:00
joeduffy 076d689a05 Rename Monikers to URNs
This change is mostly just a rename of Moniker to URN.  It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn🥥<name>", where <name> is the same as the prior Moniker.

This is a minor step that helps to prepare us for pulumi/coconut#109.
2017-03-02 17:10:10 -08:00
joeduffy 2ce75cb946 Make security group changes imply replacement 2017-03-02 16:16:18 -08:00
joeduffy 966969945b Add a resource.NewUniqueHex API (and use it)
This change adds a new resource.NewUniqueHex API, that simply generates
a unique hex string with the given prefix, with a specific count of
random bytes, and optionally capped to a maximum length.

This is used in the AWS SecurityGroup resource provider to avoid name
collisions, which is especially important during replacements (otherwise
we cannot possibly create a new instance before deleting the old one).

This resolves pulumi/coconut#108.
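
The idea, sketched under the assumption of a signature roughly like the
following (the real function's details may differ):

    package sketch

    import (
        "crypto/rand"
        "encoding/hex"
    )

    // newUniqueHex appends randByteCount random bytes, hex-encoded, to
    // prefix, and truncates the result to maxLen when maxLen > 0.
    func newUniqueHex(prefix string, randByteCount, maxLen int) (string, error) {
        b := make([]byte, randByteCount)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        s := prefix + hex.EncodeToString(b)
        if maxLen > 0 && len(s) > maxLen {
            s = s[:maxLen]
        }
        return s, nil
    }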
2017-03-02 16:02:41 -08:00
joeduffy 523c669a03 Track which updates triggered a replacement
This change tracks which updates triggered a replacement.  This enables
better output and diagnostics.  For example, we now colorize those
properties differently in the output.  This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.
2017-03-02 15:24:39 -08:00
joeduffy f0d9b12a3c Don't emit logical step resources while checkpointing 2017-03-02 13:14:57 -08:00
joeduffy bd613a33e6 Make replacement first class
This change, part of pulumi/coconut#105, rearranges support for
resource replacement.  The old model didn't properly account for
the cascading updates and possible replacement of dependencies.

Namely, we need to model a replacement as a creation followed by
a deletion, inserted into the overall DAG correctly so that any
resources that must be updated are updated after the creation but
prior to the deletion.  This is done by inserting *three* nodes
into the graph per replacement: a physical creation step, a
physical deletion step, and a logical replacement step.  The logical
step simply makes it nicer in the output (the plan output shows
a single "replacement" rather than the fine-grained outputs, unless
they are requested with --show-replace-steps).  It also makes it
easier to fold all of the edges into a single linchpin node.

As part of this, the update step no longer gets to choose whether
to recreate the resource.  Instead, the engine takes care of
orchestrating the replacement through actual create and delete calls.
2017-03-02 09:52:08 -08:00
joeduffy df3c0dcb7d Display and colorize replacements distinctly 2017-03-01 13:34:29 -08:00
joeduffy fe0bb4a265 Support replacement IDs
This change introduces a new RPC function to the provider interface;
in pseudo-code:

    UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap)
        (bool, PropertyMap, error)

Essentially, during the planning phase, we will consult each provider
about the nature of a proposed update.  This update includes a set of
old properties and the new ones and, if the resource provider will need
to replace the property as a result of the update, it will return true;
in general, the PropertyMap will eventually contain a list of all
properties that will be modified as a result of the operation (see below).

The planning phase reacts to this by propagating the change to dependent
resources, so that they know that the ID will change (and so that they
can recalculate their own state accordingly, possibly leading to a ripple
effect).  This ensures the overall DAG / schedule is ordered correctly.

This change is most of pulumi/coconut#105.  The only missing piece
is to generalize replacing the "ID" property with replacing arbitrary
properties; there are hooks in here for this, but until pulumi/coconut#90
is addressed, it doesn't make sense to make much progress on this.
2017-03-01 09:08:53 -08:00
joeduffy a4e806a07c Remember old moniker to ID mappings
For certain update shapes, we will need to recover an ID of an already-deleted,
or soon-to-be-deleted resource; in those cases, we have a moniker but want to
serialize an ID.  This change implements support for remembering/recovering them.
2017-02-28 17:03:33 -08:00
joeduffy cf2788a254 Allow restarting from partial failures
This change fixes a couple issues that prevented restarting a
deployment after partial failure; this was due to the fact that
unchanged resources didn't propagate IDs from old to new.  This
is remedied by making unchanged a map from new to old, and making
ID propagation the first thing plan application does.
2017-02-28 16:09:56 -08:00
joeduffy 6a2edc9159 Ensure configuration round-trips in Huskfiles 2017-02-28 15:43:46 -08:00
joeduffy 1c43abffec Fix some go vet issues 2017-02-28 11:02:33 -08:00
joeduffy 7f0a97a4e3 Print configuration variables; etc.
This change does a few things:

* First and foremost, it tracks configuration variables that are
  initialized, and optionally prints them out as part of the
  prelude/header (based on --show-config), both in a dry-run (plan)
  and in an actual deployment (apply).

* It tidies up some of the colorization and messages, and includes
  nice banners like "Deploying changes:", etc.

* Fix an assertion.

* Issue a new error

      "One or more errors occurred while applying X's configuration"

  just to make it easier to distinguish configuration-specific
  failures from ordinary ones.

* Change config keys to tokens.Token, not tokens.ModuleMember,
  since it is legal for keys to represent class members (statics).
2017-02-28 10:32:24 -08:00
joeduffy d91b04d8f4 Support config maps
This change adds support for configuration maps.

This is a new feature that permits initialization code to come from markup,
after compilation, but before evaluation.  There is nothing special with this
code as it could have been authored by a user.  But it offers a convenient
way to specialize configuration settings per target husk, without needing
to write code to specialize each of those husks (which is needlessly complex).

For example, let's say we want to have two husks, one in AWS's us-west-1
region, and the other in us-east-2.  From the same source package, we can
just create two husks, let's say "prod-west" and "prod-east":

    prod-west.json:
    {
        "husk": "prod-west",
        "config": {
            "aws:config:region": "us-west-1"
        }
    }

    prod-east.json:
    {
        "husk": "prod-east",
        "config": {
            "aws:config:region": "us-east-2"
        }
    }

Now when we evaluate these packages, they will automatically poke the
right configuration variables in the AWS package *before* actually
evaluating the CocoJS package contents.  As a result, the static variable
"region" in the "aws:config" package will have the desired value.

This is obviously fairly general purpose, but will allow us to experiment
with different schemes and patterns.  Also, I need to whip up support
for secrets, but that is a task for another day (perhaps tomorrow).
2017-02-27 19:43:54 -08:00
joeduffy 9b55505463 Implement AWS security group updates 2017-02-27 13:33:08 -08:00
joeduffy eca5c38406 Fix a handful of update-related issues
* Delete husks if err == nil, not err != nil.

* Swizzle the formatting padding on array elements so that the
  diff modifier + or - binds more tightly to the [N] part.

* Print the un-doubly-indented padding for array element headers.

* Add some additional logging to step application (it helped).

* Remember unchanged resources even when glogging is off.
2017-02-27 11:27:36 -08:00
joeduffy 3bdbf17af2 Rename --show-sames to --show-unchanged
Per Eric's feedback.
2017-02-27 11:08:14 -08:00
joeduffy 09e328e3e6 Extract settings from the correct old/new snapshot 2017-02-27 11:02:39 -08:00
joeduffy afbd40c960 Add a --show-sames flag
This change adds a --show-sames flag to `coco husk deploy`.  This is
useful as I'm working on updates, to show what resources haven't changed
during a deployment.
2017-02-27 10:58:24 -08:00
joeduffy 88fa0b11ed Checkpoint deployments
This change checkpoints deployments properly.  That is, even in the
face of partial failure, we should keep the huskfile up to date.  This
accomplishes that by tracking the state during plan application.

There are still ways in which this can go wrong, however.  Please see
pulumi/coconut#101 for additional thoughts on what we might do here
in the future to make checkpointing more robust in the face of failure.
2017-02-27 10:26:44 -08:00
joeduffy 604370f58b Propagate IDs from old to new during updates 2017-02-26 13:36:30 -08:00
joeduffy ace693290f Fix the directionality of delete edges 2017-02-26 12:05:49 -08:00
joeduffy 44783cffb7 Don't overwrite unmarshaled deployment info 2017-02-26 12:00:00 -08:00
joeduffy ff3f2232db Only use args if non-nil 2017-02-26 11:53:02 -08:00
joeduffy 2f60a414c7 Reorganize deployment commands
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit.  This
makes targets ("husks") more of a first class thing.

Namely, you must first initialize a husk before using it:

    $ coco husk init staging
    Coconut husk 'staging' initialized; ready for deployments

Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments.  The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:

    $ ... make some changes ...
    $ coco husk deploy staging
    ... standard deployment progress spew ...

Finally, should you want to teardown an entire environment:

    $ coco husk destroy staging
    ... standard deletion progress spew for all resources ...
    Coconut husk 'staging' has been destroyed!
2017-02-26 11:20:14 -08:00
joeduffy 2fec5f74d5 Make DeepEquals nullary logic match Diff 2017-02-25 18:24:12 -08:00
joeduffy 977b16b2cc Add basic targeting capability
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update.  This simplifies the management of deployment
records, checkpoints, and snapshots.

I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming).  The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:

    nutpack/
        bin/
            Nutpack.json
            ... any other compiled artifacts ...
        husks/
            ... one snapshot per husk ...

For example, if we had a stage and prod husk, we would have:

    nutpack/
        bin/...
        husks/
            prod.json
            stage.json

In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment.  These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.

The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
2017-02-25 09:24:52 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy e0440ad312 Print step op labels 2017-02-24 17:44:54 -08:00
joeduffy b43c374905 Fix a few more things about updates
* Eliminate some superfluous "\n"s.

* Remove the redundant properties stored on AWS resources.

* Compute array diff lengths properly (+1).

* Display object property changes from null to non-null as
  adds; and from non-null to null as deletes.

* Fix a boolean expression from ||s to &&s.  (Bone-headed).
2017-02-24 17:02:02 -08:00
joeduffy 53cf9f8b60 Tidy up a few things
* Print a pretty message if the plan has nothing to do:

        "info: nothing to do -- resources are up to date"

* Add an extra validation step after reading in a snapshot,
  so that we detect more errors sooner.  For example, I've
  fed in the wrong file several times, and it just chugs
  along as though it were actually a snapshot.

* Skip printing nulls in most plan outputs.  These just
  clutter up the output.
2017-02-24 16:44:46 -08:00
joeduffy 877fa131eb Detect duplicate object names
This change detects duplicate object names (monikers) and issues a nice
error message with source context included.  For example:

    index.ts(260,22): error MU2006: Duplicate objects with the same name:
        prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group

The prior code asserted and failed abruptly, whereas this actually points
us to the offending line of code:

    let group1 = new aws.ec2.SecurityGroup("group", { ... });
    let group2 = new aws.ec2.SecurityGroup("group", { ... });
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
2017-02-24 16:03:06 -08:00
joeduffy 14e3f19437 Implement name property in AWS provider/library 2017-02-24 15:41:56 -08:00
joeduffy c120f62964 Redo object monikers
This change overhauls the way we do object monikers.  The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions.  The new approach mixes some amount of "automatic scoping"
plus some "explicit naming."  Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.

Each moniker has four parts:

    <Namespace>::<AllocModule>::<Type>::<Name>

wherein each element is the following:

    <Namespace>     The namespace being deployed into
    <AllocModule>   The module in which the object was allocated
    <Type>          The type of the resource
    <Name>          The assigned name of the resource

The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.

The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created.  In the future, we may wish to
extend this to also track the module under evaluation.  (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)

The <Name> warrants more discussion.  The resource provider is consulted
via a new gRPC method, Name, that fetches the name.  How the provider
does this is entirely up to it.  For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`); and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).

This should overall produce better results with respect to moniker
collisions, ability to match resources, and the usability of the system.
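
Composing such a moniker is, in purely illustrative terms, just:

    package sketch

    import "fmt"

    // moniker joins the four parts described above into a single string of
    // the form <Namespace>::<AllocModule>::<Type>::<Name>.
    func moniker(namespace, allocModule, t, name string) string {
        return fmt.Sprintf("%s::%s::%s::%s", namespace, allocModule, t, name)
    }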
2017-02-24 14:50:02 -08:00
joeduffy 9dc75da159 Diff and colorize update outputs
This change implements detailed object diffing for purposes of displaying
(and colorizing) updated properties during an update deployment.
2017-02-23 19:03:22 -08:00
joeduffy c4d1f60a7e Eliminate a superfluous map allocation 2017-02-23 15:05:30 -08:00
joeduffy 86bfe5961d Implement updates
This change is a first whack at implementing updates.

Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order.  For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies).  These are just special cases of the more
general idea of performing DAG operations in dependency order.

Updates must work in terms of this more general notion.  For example:

* It is an error to delete a resource while another refers to it; thus,
  resources are deleted after deleting dependents, or after updating
  dependent properties that reference the resource to new values.

* It is an error to depend on a resource before it is created;
  thus, resources must be created before dependents are created, and/or
  before updates to existing resource properties that would cause them
  to refer to the new resource.

Of course, all of this is tangled up in a graph of dependencies.  As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.

To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
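
For reference, the basic dependency-ordered walk is an ordinary topological
sort; a generic sketch follows (the real planner does considerably more):

    package sketch

    // topoSort orders nodes so that every node appears after the nodes it
    // depends on -- creation order; reverse the result for deletion order.
    func topoSort(deps map[string][]string) []string {
        var order []string
        visited := map[string]bool{}
        var visit func(n string)
        visit = func(n string) {
            if visited[n] {
                return
            }
            visited[n] = true
            for _, d := range deps[n] {
                visit(d)
            }
            order = append(order, n)
        }
        for n := range deps {
            visit(n)
        }
        return order
    }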
2017-02-23 14:56:23 -08:00
joeduffy f00b146481 Echo resource provider outputs
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins stdout/stderr streams to it.  In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error.  This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs, however I hope we don't have to deviate too much from this.
2017-02-22 18:53:36 -08:00
joeduffy 8610a70ca4 Implement stable resource ordering
This change adds custom serialization and deserialization to preserve
the ordering of resources in snapshot files.  Unfortunately, because
Go maps are unordered, the results are scrambled (actually, it turns out,
Go will sort by the key).  An alternative would be to resort the results
after reading in the file, or storing as an array, but this would be a
change to the MuGL file format and is less clear than simply keying each
resource by its moniker as we are doing today.
2017-02-22 17:33:08 -08:00
joeduffy ae99e957f9 Fix a few messages and assertions 2017-02-22 14:43:08 -08:00
joeduffy 9c2013baf0 Implement resource snapshot deserialization 2017-02-22 14:32:03 -08:00
joeduffy 8d71771391 Repivot plan/apply commands; prepare for updates
This change repivots the plan/apply commands slightly.  This is largely
in preparation for performing deletes and updates of existing environments.

The old way was slightly confusing and made things appear more "magical"
than they actually are.  Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.

The new way is to offer three commands: create, update, and delete.  Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely.  The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file plus a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.

Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands.  This will print out the plan without
actually performing any operations.

All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.
2017-02-22 11:21:26 -08:00
joeduffy d4911ad6f6 Implement snapshot MuGL
This change adds support for serializing snapshots in MuGL, per the
design document in docs/design/mugl.md.  At the moment, it is only
exposed from the `mu plan` command, which allows you to specify an
output location using `mu plan --output=file.json` (or `-o=file.json`
for short).  This serializes the snapshot with monikers, resources,
and so on.  Deserialization is not yet supported; that comes next.
2017-02-21 18:31:43 -08:00
joeduffy 0efb8bdd69 Fix a few things
* Specify MinCount/MaxCount when creating an EC2 instance.  These
  are required properties on the RunInstances API.

* Only attempt to unmarshal egressArray/ingressArray when non-nil.

* Remember the context object on the instanceProvider.

* Move the moniker and object maps into the shared context object.

* Marshal object monikers as the resource IDs to which they refer,
  since monikers are useless on "the other side" of the RPC boundary.
  This ensures that, for example, the AWS provider gets IDs it can use.

* Add some paranoia assertions.
2017-02-20 13:55:09 -08:00
joeduffy 81158d0fc2 Make property logic nil-sensitive
...and add some handy plugin-oriented logging.
2017-02-20 13:27:31 -08:00
joeduffy a9085ece0f Trace plugin STDOUT/STDERR 2017-02-20 12:34:15 -08:00
joeduffy 276b6c253d Implement a basic AWS resource provider
This commit includes a basic AWS resource provider.  Mostly it is just
scaffolding, however, it also includes prototype implementations for EC2
instance and security group resource creation operations.
2017-02-20 11:18:47 -08:00
joeduffy dbd1721ced Properly detect "missing file from PATH" os/exec errors 2017-02-19 12:23:26 -08:00
joeduffy 3618837092 Add plan apply progress reporting 2017-02-19 11:58:04 -08:00
joeduffy 09c01dd942 Implement resource provider plugins
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.

In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them.  It will be
a policy decision in server scenarios how much state to share and
between whom.  This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.

Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).

If we find a plugin, we will load it as a child process.

The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n").  This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).

Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client.  The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls.  For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
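
Condensed into a sketch, the handshake amounts to roughly the following
(error handling, stderr forwarding, and process cleanup elided; names are
illustrative):

    package sketch

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"

        "google.golang.org/grpc"
    )

    func loadPlugin(path string) (*grpc.ClientConn, error) {
        cmd := exec.Command(path)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return nil, err
        }
        if err := cmd.Start(); err != nil {
            return nil, err
        }
        // The plugin picks its own port and prints it as the first
        // "\n"-terminated line on stdout; everything after that is free-form.
        port, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            return nil, err
        }
        addr := fmt.Sprintf("127.0.0.1:%s", strings.TrimSpace(port))
        return grpc.Dial(addr, grpc.WithInsecure())
    }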
2017-02-19 11:08:06 -08:00
joeduffy 6c25ff5cba Drive plan application
This moves us one step closer from planning to application (marapongo/mu#21).
Namely, we now drive the right resource provider operations in response to
the plan's steps.  Those providers, however, are still empty shells.
2017-02-18 11:54:24 -08:00
joeduffy 9621aa7201 Implement deletion plans
This change adds a flag to `plan` so that we can create deletion plans:

    $ mu plan --delete

This will have an equivalent in the `apply` command, achieving the ability
to delete entire sets of resources altogether (see marapongo/mu#58).
2017-02-18 10:33:36 -08:00
joeduffy 6f42e1134b Create object monikers
This change introduces object monikers.  These are unique, serializable
names that refer to resources created during the execution of a MuIL
program.  They are pretty darned ugly at the moment, but at least they
serve their desired purpose.  I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being.  The names right now are
particularly sensitive to simple refactorings.

This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable.  I'd prefer to do that with a bit of
experience under our belts first.
2017-02-18 10:22:04 -08:00
joeduffy d9ee2429da Begin resource modeling and planning
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.

It contains the following key abstractions:

* resource.Provider is a wrapper around the CRUD operations exposed by
  underlying resource plugins.  It will eventually defer to resource.Plugin,
  which itself defers -- over an RPC interface -- to the actual plugin, one
  per package exposing resources.  The provider will also understand how to
  load, cache, and overall manage the lifetime of each plugin.

* resource.Resource is the actual resource object.  This is created from
  the overall evaluation object graph, but is simplified.  It contains only
  serializable properties, for example.  Inter-resource references are
  translated into serializable monikers as part of creating the resource.

* resource.Moniker is a serializable string that uniquely identifies
  a resource in the Mu system.  This is in contrast to resource IDs, which
  are generated by resource providers and generally opaque to the Mu
  system.  See marapongo/mu#69 for more information about monikers and some
  of their challenges (namely, designing a stable algorithm).

* resource.Snapshot is a "snapshot" taken from a graph of resources.  This
  is a transitive closure of state representing one possible configuration
  of a given environment.  This is what plans are created from.  Eventually,
  two snapshots will be diffable, in order to perform incremental updates.
  One way of thinking about this is that a snapshot of the old world's state
  is advanced, one step at a time, until it reaches a desired snapshot of
  the new world's state.

* resource.Plan is a plan for carrying out desired CRUD operations on a target
  environment.  Each plan consists of zero-to-many Steps, each of which has
  a CRUD operation type, a resource target, and a next step.  This is an
  enumerator because it is possible the plan will evolve -- and introduce new
  steps -- as it is carried out (hence, the Next() method).  At the moment, this
  is linearized; eventually, we want to make this more "graph-like" so that we
  can exploit available parallelism within the dependencies.

There are tons of TODOs remaining.  However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.

This is part of marapongo/mu#38 and marapongo/mu#41.
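
A simplified sketch of the plan/step shape being described (illustrative,
and far less rich than the actual package):

    package sketch

    type StepOp int

    const (
        OpCreate StepOp = iota
        OpUpdate
        OpDelete
    )

    type Resource struct {
        Moniker    string
        Properties map[string]interface{}
    }

    // Step is a single CRUD operation against a single resource; Next allows
    // the plan to evolve -- and introduce new steps -- as it is carried out.
    type Step interface {
        Op() StepOp
        Resource() *Resource
        Next() (Step, error)
    }

    // Plan enumerates zero-to-many steps, starting from the first one.
    type Plan interface {
        Steps() (Step, error)
    }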
2017-02-17 12:31:48 -08:00
joeduffy 6b60852c76 Generate Golang Protobuf/gRPC code
This change moves the RPC definitions to the pkg/murpc package,
underneath the proto folder.  This is to ensure that the resulting
Protobufs get a good package name "murpc" that is unique within the
overall toolchain.  It also includes a `generate.sh` script that
can be used to manually regenerate client/server code.  Finally,
we are actually checking in the generated files underneath pkg/murpc.
2017-02-10 09:08:06 -08:00
joeduffy 20452a99e1 Add Protobuf definitions for resource providers 2017-02-10 08:55:26 -08:00
joeduffy 8af53d5f69 Burn the ships
This change deletes a bunch of old legacy code.  I've tagged the
prior changelist as "0.1" in case we need to go back and recover
some of the code (as I expect some of the constraint type logic
and AWS-specific code to come in handy down the road).

🔥 🔥 🔥
2017-01-25 17:32:57 -08:00
joeduffy 729af81e44 Move all cloud switching to mu/x MuPackage
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level.  In fact, the whole code-generation
phase of the compiler was based on it.

In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.

Therefore, most of this stuff can go.  I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision.  For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.

I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc).  But for the time being, it's a distraction
getting the core model running.  I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers.  (Although we won't be using
CloudFormation, some of the name generation code might be useful.)  So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.
2017-01-20 09:46:59 -08:00