Commit graph

318 commits

Author SHA1 Message Date
joeduffy d05e7ace91 Ensure we close the plugin host/context
This adds a few missing closes for the plugin host/context.  This
should fix pulumi/lumi#261.  Eventually when we have more robust
nightly test options, and want to spend the time, we should think
about doing more rigorous stress testing that kills processes at
inopportune times and guarantees we don't leak.  I've filed
pulumi/lumi#263 to do that.
2017-06-22 15:18:29 -07:00
joeduffy 8b57310854 Tidy up more lint
This change fixes a few things:

* Most importantly, we need to place a leading "." in the paths
  to Gometalinter, otherwise some sub-linters just silently skip
  the directory altogether.  errcheck is one such linter, which
  is a very important one!

* Use an explicit Gometalinter.json file to configure the various
  settings.  This flips on a few additional linters that aren't
  on by default (like line length checking).  Sadly, a few that
  I'd like to enable take waaaay too much time, so in the future
  we may consider a nightly job (this includes code similarity,
  unused parameters, unused functions, and others that generally
  require global analysis).

* Now that we're running more, however, linting takes a while!
  The core Lumi project now takes 26 seconds to lint on my laptop.
  That's not terrible, but it's long enough that we don't want to
  do the silly "run them twice" thing our Makefiles were previously
  doing.  Instead, we shall deploy some $$(($${PIPESTATUS[1]}-1))-fu
  to rely on the fact that grep returns 1 on "zero lines".

* Finally, fix the many issues that this turned up.

I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
2017-06-22 12:09:46 -07:00
Luke Hoban a63efc42a3 Propagate errors on deployment failures
We were not propagating the error from `deployLatest` through
to the CLI error result.  Despite our recent efforts to integrate
gometalinter, there were also several additional similar cases of
ignored error results reported by `errcheck`.  Not yet clear why
these are not being reported via gometalinter.

Fixes #262.
2017-06-21 22:02:57 -07:00
joeduffy 7fe8052941 Fix some lint in our lint
After 233c5a8 landed, I noticed there are a few things to be fixed up:

    * Run gometalinter in all the right places.  We need to run both in
      lint and lint_quiet targets.  I've also cleaned up some of the logic
      around what to suppress so there's less repetition.

    * We currently @-prefix meaningful commands, which is unfortunate, since it
      makes debugging Makefiles tough (especially when looking at CI build
      logs).  Going forward, we should only use @ for meaningless commands,
      like @echo.

    * The AWS project wasn't actually running tslint, because it needs to
      say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
      The current script of `tslint lib/aws/pack/...` wasn't actually
      running lint, hence we missed a lot of AWS lint issues.

    * Fix up the issues that these fixes uncovered.  Mostly err shadowing.
2017-06-21 13:24:35 -07:00
joeduffy 97deabb9bd Finish interface for reading configuration
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface.  In summary:

    * Instead of using the NullSource for destructions -- which
      doesn't hook up an interpreter and so any reads of configuration
      variables will fail -- we will enlighten the EvalSource to know
      how to orchestrate destruction interpretation.  The primary
      difference is that we don't actually run the code, but *we do*
      perform all of the necessary configuration and variable init.

    * Associate the active interpreter with the plugin context as
      we are executing, so that the host object can actually read the
      state from the heap as requested to do so by attached plugins.

    * Rename anything "engine" related to use the term "host"; this
      avoids introducing unnecessary new terminology.

    * Add a new pkg/resource/provider/ package where we can begin
      consolidating helper functionality for resource providers.
      Right now, this includes a wrapper interface atop the gRPC
      machinery necessary to contact the host, in addition to a
      Main function that hides some boilerplate entrypoint code.

    * Add a rpcutil.IsBenignCloseErr routine to let us ignore
      "benign" gRPC errors that are knowingly returned at shutdown.

This commit completes pulumi/lumi#117.
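
As a rough illustration, a helper along the lines of rpcutil.IsBenignCloseErr
(mentioned in the list above) might check for the gRPC status codes that show
up during an orderly shutdown.  The code below is an assumption, not the actual
implementation; only the general idea comes from this commit:

    package rpcutilsketch

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // isBenignCloseErr reports whether err is one of the gRPC errors we expect
    // to see when a connection is deliberately torn down at shutdown.
    func isBenignCloseErr(err error) bool {
        if err == nil {
            return true
        }
        if s, ok := status.FromError(err); ok {
            switch s.Code() {
            case codes.Canceled, codes.Unavailable:
                return true // expected during teardown; not a real failure
            }
        }
        return false
    }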
2017-06-21 10:31:06 -07:00
joeduffy d7093188f0 Introduce an interface to read config
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.

The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible.  We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
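
To make the shape of this concrete, here is a hypothetical Go rendering of the
host interface a plugin might program against.  The real definition is a gRPC
service; the names, severities, and signatures below are illustrative
assumptions only:

    package hostsketch

    // Severity mirrors the message levels mentioned above.
    type Severity int

    const (
        Debug Severity = iota
        Info
        Warning
        Error
    )

    // Host is what a plugin sees when it "phones home" to the engine.
    type Host interface {
        // Log records a diagnostic message with the engine at the given severity.
        Log(sev Severity, msg string) error
        // ReadConfig reads an evaluator-side configuration value by key,
        // returning ok=false if the key has not been set.
        ReadConfig(key string) (value string, ok bool, err error)
    }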
2017-06-20 19:45:07 -07:00
joeduffy 26cf93f759 Implement get functions on all resources
This change implements the `get` function for resources.  Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.

For example, we can now look up a SecurityGroup from its ARN:

    let group = aws.ec2.SecurityGroup.get(
        "arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");

The returned object is a fully functional resource object.  So, we can then
link it up with an EC2 instance, for example, in the usual ways:

    let instance = new aws.ec2.Instance(..., {
        securityGroups: [ group ],
    });

This didn't require any changes to the RPC or provider model, since we
already implement the Get function.

There are a few loose ends; two are short term:

    1) URNs are not rehydrated.
    2) Query is not yet implemented.

One is mid-term:

    3) We probably want a URN-based lookup function.  But we will likely
       wait until we tackle pulumi/lumi#109 before adding this.

And one is long term (and subtle):

    4) These amount to I/O and are not repeatable!  A change in the target
       environment may cause a script to generate a different plan
       intermittently.  Most likely we want to apply a different kind of
       deployment "policy" for such scripts.  These are inching towards the
       scripting model of pulumi/lumi#121, which is an entirely different
       beast from repeatable, immutable infrastructure deployments.

Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
2017-06-19 17:29:02 -07:00
Luke Hoban 33a9452ece Merge pull request #256 from pulumi/examplestest
Add integration testing for examples
2017-06-16 10:17:51 -07:00
joeduffy 7d19abc2a3 Print the current environment
This change implements showing a summary of the current environment.
All you need to do is run

    $ lumi env

and the current environment's information will be printed.

This makes it convenient to grab resource information that might be
required, for instance, to correlate with logs (e.g., lambda ARNs).

Eventually, as per pulumi/lumi#184, we want to print details about
all of the resources too.
2017-06-16 09:46:09 -07:00
Luke Hoban 8d8eba5c65 Add integration testing for examples
Adds an integration test that runs the following commands on the
AWS webserver example, failing if any command returns an error
code:
* lumijs
* lumi env init
* lumi config
* lumi plan
* lumi deploy
* lumi destroy
* lumi env rm

Also ensures that plan and deploy failures propagate errors through
to error codes at the CLI.
2017-06-16 09:24:31 -07:00
joeduffy 9698309f2b Model resource ID and URN as output properties
This change exposes ID and URN properties on resources, as appropriate,
so that they may be read and used in Lumi scripts.
2017-06-14 17:00:13 -07:00
joeduffy 2ac303f703 Fix deployment hang (pulumi/lumi#246)
The recent change to run the interpreter and planner on separate goroutines
created the need to perform rendezvous-style synchronization between them.
Although the case of an invoked function properly tore down the synchronization
by communicating the error, we seldom directly invoke functions for JavaScript
programs because of the way module entrypoint code ends up in initializers.
This requires that we propagate errors correctly out of module and class
initializers, in the standard way, so that the unwind makes its way to the top.

This fixes pulumi/lumi#246.
2017-06-14 15:52:36 -07:00
joeduffy 3a899b304e Fix empty body issues
We recently changed the Resource base type to have no constructor,
rather than a manual empty constructor.  This ought to work just fine.
The LumiJS compiler does indeed generate a constructor; however, it is
missing a body, and when the interpreter tries to invoke it, we crash
with a nil reference panic.  The runtime actually tolerates missing
constructors entirely, although the way LumiJS binds super calls
doesn't tolerate the missing base constructor.  This change simply
generates such constructors in LumiJS with empty bodies.

In addition, I've added an error that will catch the empty body
problem during binding, since technically speaking, all functions
must have bodies.  (Our runtime happens to support the notion of
"abstract", however, so we only fire the error on concrete functions.)
2017-06-14 10:30:46 -07:00
Luke Hoban 9a0575b518 Allow classes without explicit constructors
When a class has no constructor, we automatically generate an empty
constructor in the Lumipack.

This allows us to adhere to the tslint rule that suggests leaving off
empty constructors with default signatures.
2017-06-13 17:54:45 -07:00
Luke Hoban 282f40d3e3 Merge branch 'master' into bforsyth927-gometalinter 2017-06-13 16:28:12 -07:00
Luke Hoban e915dd3b42 Upgrade LumiJS Typescript dependency to 2.3.4
Fixes #242.
2017-06-13 15:48:14 -07:00
joeduffy 0d836ae0bd Recover from deployment failures 2017-06-13 07:10:13 -07:00
joeduffy 75a2f14d10 Propagate IDs/outs differently based on step kind
This change updates the ID/output propagation logic to properly handle
the case of replacements, in addition to accurately conveying the fact
that an update may change the values of output properties (but not the ID).

Also fixes a formatting issue with the replacement diffing displays.
2017-06-13 07:10:13 -07:00
joeduffy 25c52a04c5 Tidy up some loose ends
This removes some loose ends and reimplements `lumi pack eval`.
2017-06-13 07:10:13 -07:00
joeduffy dd9e6b35f4 Introduce an OpSame planning step
This change introduces an OpSame planning step.  The reason we need
this is so that we can apply the necessary output properties, including
the ID, even as we are simply walking the plan (i.e., when we aren't
actually performing a deployment).  This ensures that the object state
evolves as required to let reads of output properties propagate in the
ways necessary to reproduce past executions of the program.
2017-06-13 07:10:13 -07:00
joeduffy d1414af321 Fix a few minor things; clean stuff up
* Assert new things in new places.

* Log more interesting tidbits during evaluation.

* Invoke the OnStart hook before triggering initializers.

* Tolerate nil prev snapshots during deletion calculation.

* Handle and serialize missing resource IDs as output props.

* Return "done" flag from Rendezvous.Meet.
2017-06-13 07:10:13 -07:00
joeduffy d277dd5800 More progress on pulumi/lumi#90
This change refactors a number of aspects of the CLI's treatment of
steps, in line with the new scheme, and a number of other miscellaneous
and minor fixes.  It also regenerates all RPC code impacted by recent renames.
2017-06-13 07:10:13 -07:00
joeduffy d044720045 Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.

The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.

The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine.  It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.

There are two other sources, however.  First, a nullSource, which simply
refuses to create new objects.  This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment.  Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.

Boatloads of code is now changed and updated in the various CLI commands.

This further chugs along towards pulumi/lumi#90.  The end is in sight.
2017-06-13 07:10:13 -07:00
joeduffy 6b2408e086 Rewrite plans and deployments
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.

The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
2017-06-13 07:10:13 -07:00
joeduffy c53ddeb678 Overhaul resources, planning, and environments
This change, part of pulumi/lumi#90, overhauls quite a bit of the
core resource, planning, environments, and related areas.

The biggest amount of movement comes from the splitting of pkg/resource
into multiple sub-packages.  This results in:

- pkg/resource: just the core resource data structures.

- pkg/resource/deployment: all planning and deployment logic.

- pkg/resource/environment: all environment, configuration, and
      serialized checkpoint structures and logic.

- pkg/resource/plugin: all dynamically loaded analyzer and
      provider logic, including the actual loading and RPC mechanisms.

This also splits the resource abstraction up.  We now have:

- resource.Resource: a shared interface.

- resource.Object: a resource that is connected to a live object
      that will periodically observe mutations due to ongoing
      evaluation of computations.  Snapshots of its state may be
      taken; however, this is purely a "pre-planning" abstraction.

- resource.State: a snapshot of a resource's state that is frozen.
      In other words, it is no longer connected to a live object.
      This is what will store provider outputs (ID and properties),
      and is what may be serialized into a deployment record.

The branch is in a half-baked state as of this change; more changes
are to come...
2017-06-13 07:10:13 -07:00
Luke Hoban 9bb868191f Add support for template literals in LumiJS
Support for untagged template literals.

Also unblocks a couple of cases where dynamic was not
propagated through the binder correctly.

Fixes #102.
2017-06-09 18:46:09 -07:00
Britton Forsyth 69e4834f63 Merge branch 'master' into gometalinter 2017-06-09 14:34:51 -07:00
Luke Hoban 705c0edbfc Fix lumijs tests
Update baselines for LumiJS tests after the change to
emit `TryLoadDynamic` for module-scoped
variable references.
2017-06-08 22:22:55 -07:00
Luke Hoban d77c51ff7f Allow runtime lambda to reference globals.
For lambdas which will execute at runtime,
we want to allow them to reference Node.js
global variables, like `console`.

This change makes LumiJS-generated IL
incrementally more dynamic by preferring to
generate `TryLoadDynamic` over `LoadLocation`
for references to global variables (except for
references to imports).

Also introduces `console.log` in LumiJS, though
it is not yet attached to a Lumi global environment.

Fixes #174.
2017-06-08 22:06:41 -07:00
Britton Forsyth 3066fcda78 Implemented suggested edits 2017-06-08 11:44:16 -07:00
Britton Forsyth 7457cadf58 Fixed various additional linting issues 2017-06-08 10:21:17 -07:00
Britton Forsyth 00ade9f28a Fixed some gometalinter issues 2017-06-07 10:52:03 -07:00
joeduffy c7dc3036d7 Finish scrubbing TODOs
This is a final pass over our TODOs, and closes pulumi/lumi#212.
2017-06-06 06:05:35 -07:00
joeduffy db99092334 Implement mapper.Encode "for real"
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment).  It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.

During this, I took the opportunity to also clean up a few other things
that had been bugging me.  Namely, the presence of `mapper.Object` was
always error-prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI.  Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type.  Even better, we
no longer require resource providers to deal with the mapper
infrastructure.  Instead, the `Check` function can simply return an
array of errors.  It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.

As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.

This completes pulumi/lumi#183.
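
For a feel of the "obvious reflection trickery" mentioned above, here is a
minimal sketch of encoding a `lumi`-tagged struct into a JSON-like map.  This
is an illustration only, not the actual mapper implementation:

    package mappersketch

    import (
        "reflect"
        "strings"
    )

    // encode flattens the exported, `lumi`-tagged fields of a struct (or a
    // pointer to one) into a map keyed by the tag name.
    func encode(v interface{}) map[string]interface{} {
        rv := reflect.ValueOf(v)
        if rv.Kind() == reflect.Ptr {
            rv = rv.Elem()
        }
        out := make(map[string]interface{})
        rt := rv.Type()
        for i := 0; i < rt.NumField(); i++ {
            f := rt.Field(i)
            if f.PkgPath != "" {
                continue // skip unexported fields
            }
            tag := f.Tag.Get("lumi")
            if tag == "" || tag == "-" {
                continue
            }
            name := strings.Split(tag, ",")[0] // strip options like ",optional"
            out[name] = rv.Field(i).Interface()
        }
        return out
    }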
2017-06-05 17:49:00 -07:00
joeduffy 87004a124e Store both input and output properties distinctly
This changes the resource model to persist input and output properties
distinctly, so that when we diff changes, we only do so on the programmer-
specified input properties.  This eliminates problems when the outputs
differ slightly; e.g., when the provider normalizes inputs, adds its own
values, or fails to produce new values that match the inputs.

This change simultaneously makes progress on pulumi/lumi#90, by beginning
tracking the resource objects implicated in a computed property's value.

I believe this fixes both #189 and #198.
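
A tiny sketch of the resulting model; the type and field names here are
illustrative, not the real resource structures:

    package statesketch

    import "reflect"

    // ResourceState keeps the program-authored inputs separate from the
    // provider-returned outputs.
    type ResourceState struct {
        Inputs  map[string]interface{} // exactly what the programmer wrote
        Outputs map[string]interface{} // whatever the provider reported back
    }

    // changed reports whether an update is required.  Only inputs participate,
    // so provider-side normalization of outputs can never force a spurious diff.
    func changed(prev, next ResourceState) bool {
        return !reflect.DeepEqual(prev.Inputs, next.Inputs)
    }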
2017-06-04 19:24:48 -07:00
joeduffy cfaa7c9310 Eliminate use of nonstandard tools
This change eliminates the use of nonstandard tools in our build:

* `go test` automatically uses `GOMAXPROCS` for its parallelism
  setting.  In modern Go, this is now set to the number of processors.
  So, there is no need to set it explicitly using `nproc`.

* We can avoid `realpath` in the `lumijs` executable by making it
  a Node.js file and using a relative require import of main.

* We can avoid `realpath` in our Makefiles by just using `pwd`.
2017-06-03 11:08:09 -07:00
joeduffy 39db4dca63 Also build the Lumi stdlib during make all 2017-06-02 15:26:39 -07:00
Joe Duffy 8bbe89bd75 Makeify more; add a "full build" target (#193)
* Makeify more; add a "full build" target

This change uses make for more of our tree.  Namely, the AWS provider
and LumiJS compilers each now use make to build and/or install them.
Not only does this bring about some consistency to how we build and
test things, but also made it easy to add a full build target:

    $ make all

This target will build, test, and install the core Go tools, the LumiJS
compiler, and the AWS provider, in that order.

Each can be made in isolation, however, which ensures that the inner
loop for those is fast and so that, when it comes to finishing
pulumi/lumi#147, we can easily split them out and make from the top.
2017-06-02 14:26:34 -07:00
joeduffy 43bcbed23d Tidy up project loading for pack commands
There are a few things that annoyed me about the way our CLI works with
directories when loading packages.  For example, `lumi pack info some/pack/dir/`
never worked correctly.  This is unfortunate when scripting commands.
This change fixes the workspace detection logic to handle these cases.
2017-06-02 12:43:04 -07:00
joeduffy b07056ab10 Create a plan plugin host
This is a minor refactoring to introduce a ProviderHost interface
that is associated with the context and can be swapped in and out for
custom plugin behavior.  This is required to write tests that mock
certain aspects, like loading packages from the filesystem.

In theory, this change incurs zero behavioral changes.
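
A hypothetical sketch of the seam this creates; the real ProviderHost interface
surely carries more methods, and these names are placeholders:

    package hostsketch

    // Provider stands in for a loaded resource-provider plugin.
    type Provider interface{}

    // ProviderHost is the part tests can replace: anything that can hand the
    // planner a provider for a given package satisfies it.
    type ProviderHost interface {
        // Provider loads (or returns a cached) provider plugin for pkg.
        Provider(pkg string) (Provider, error)
    }

    // Context carries the host alongside the rest of the plugin machinery, so
    // a mock host can be swapped in without touching the planner itself.
    type Context struct {
        Host ProviderHost
    }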
2017-06-01 11:41:24 -07:00
joeduffy 0e5ba9655f Pretty print outputs during planning 2017-06-01 10:52:25 -07:00
joeduffy 7b5f9df917 Make updates work in the face of output properties
This change fixes up a few things so that updates correctly deal
with output properties.  This involves a few things:

    1) All outputs stored on the pre snapshot need to get propagated
       to the post snapshot during planning at various points.  This
       ensures that the diffing logic doesn't need to be special cased
       everywhere, including both the Lumi and the provider sides.

    2) Names are changed to "input" properties (using a new `lumi` tag
       option, `in`).  These are properties that providers are expected
       to know nothing about, which we must treat with care during diffs.

    3) We read back properties, via Get, after doing an Update just like
       we do after performing a Create.  This ensures that if an update
       has a cascading impact on other properties, it will be detected.

    4) Inspecting a change, prior to updating, must be done using the
       computed property set instead of the real one.  This is to avoid
       mutating the resource objects ahead of actually applying a plan,
       which would be wrong and misleading.
2017-06-01 10:09:52 -07:00
joeduffy ae8cefcb20 Print output properties in the CLI
This change skips printing output<T> properties as we perform a
deployment, instead showing the real values inline after the resource
has been created.  (output<T> is still shown during planning, of course.)
2017-06-01 08:37:56 -07:00
joeduffy 87ad371107 Only flow logging to plugins if --logflow
The change to flow logging to plugins is nice; however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors.  After this change, we will only flow if
--logflow is passed, e.g. as in

    $ lumi --logtostderr --logflow -v=9 deploy ...
2017-06-01 08:37:56 -07:00
joeduffy 47e242f9a7 Rearrange some deployment logic
This change prepares for integrating more planning and deployment logic
closer to the runtime itself.  For historical reasons, we ended up with these
in the env.go file, which really has nothing to do with deployments anymore.
2017-06-01 08:36:43 -07:00
joeduffy 7f98387820 Distinguish between computed and output properties
This change introduces the notion of a computed versus an output
property on resources.  Technically, output is a subset of computed,
however it is a special kind that we want to treat differently during
the evaluation of a deployment plan.  Specifically:

* An output property is any property that is populated by the resource
  provider, not code running in the Lumi type system.  Because these
  values aren't available during planning -- since we have not yet
  performed the deployment operations -- they will be latent values in
  our runtime and generally missing at the time of a plan.  This is no
  problem and we just want to avoid marshaling them in inopportune places.

* A computed property, on the other hand, is a different beast altogether.
  Although it's true that one of these is missing a value -- by virtue of the fact
  that they too are latent values, bottoming out in some manner on an
  output property -- they will appear in serializable input positions.
  Not only must we treat them differently during the RPC handshake and
  in the resource providers, but we also want to guarantee they are gone
  by the time we perform any CRUD operations on a resource.  They are
  purely a planning-time-only construct.
2017-06-01 08:36:43 -07:00
joeduffy ddd63e8788 Permit (and test) complex decorators 2017-06-01 08:32:12 -07:00
joeduffy 7879032e88 Pretty-print attributes in lumi pack info command
This change pretty-prints attribute metadata in `lumi pack info`.
For example:

    package "basic/decorators" {
        dependencies []
        module "index" {
            exports []
            method ".main": ()
            class "TestDecorators" [@basic/decorators:index:classDecorate] {
                property "a" [public, @basic/decorators:index:propertyDecorate]: string
                method "m1" [public, @basic/decorators:index:methodDecorate]: (): string
            }
        }
    }

It also includes support for printing property getters/setters:

    property "p1" [public]: string {
        method "get" [public, @basic/decorators:index:methodDecorate]: (): string
        method "set" [public]: (v: string)
    }
2017-06-01 08:32:12 -07:00
joeduffy acdab34d7a Support decorators in more places
We need to smuggle metadata from the resource IDL all the way through
to the runtime, so that it knows which things are output properties.  In
order to do this, we'll leverage decorators and the support for serializing
them as attributes.  This change adds support for the various kinds
(class, property, method, and parameter), in addition to test cases.
2017-06-01 08:32:12 -07:00
joeduffy d79c41f620 Initial support for output properties (1 of 3)
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.

In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways.  Namely,
operations against Latent<T>s generally produce new Latent<U>s.

During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values.  An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange.  As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values.  My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.

For now, using an unresolved Latent<T> in a conditional will lead
to an error.  See pulumi/lumi#67.  Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support.  That is a missing 1/3.

Finally, the other missing 1/3rd, which will happen much sooner
than the rest, is restructuring plan application so that it will
correctly observe resolution of Latent<T> values.  Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
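
Roughly, a latent value can be pictured like the sketch below.  This is only an
illustration of the idea (with interface{} standing in for the T in Latent<T>);
the real runtime representation differs:

    package latentsketch

    // Latent is a value whose concrete result may not be known until the
    // deployment step that produces it has actually run.
    type Latent struct {
        known bool
        value interface{}
    }

    // Resolve supplies the concrete value once the producing step completes.
    func (l *Latent) Resolve(v interface{}) {
        l.value, l.known = v, true
    }

    // Value returns the concrete value and whether it is known yet; during
    // planning, unknown latents surface as "unknown" property values.
    func (l *Latent) Value() (interface{}, bool) {
        return l.value, l.known
    }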
2017-06-01 08:32:12 -07:00
Luke Hoban 8bbf48bf87 Support for AWS DynamoDB Table GlobalSecondaryIndexes
Adds support for global secondary indexes on DynamoDB Tables.

Also adds a HashSet API to the AWS provider library.  This handles part of #178,
providing a standard way for AWS provider implementations to compute set-based
diffs. This new API is used in both aws.dynamodb.Table and aws.elasticbeanstalk.Environment
currently.
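
The gist of set-based diffing, sketched in Go with plain strings standing in
for hashed elements; the actual HashSet API in the AWS provider library may be
shaped quite differently:

    package setsketch

    // diffSets compares two collections by membership rather than by position,
    // so a reordered-but-identical list of index definitions yields no changes.
    func diffSets(olds, news []string) (added, removed []string) {
        oldSet := make(map[string]bool, len(olds))
        for _, o := range olds {
            oldSet[o] = true
        }
        newSet := make(map[string]bool, len(news))
        for _, n := range news {
            newSet[n] = true
            if !oldSet[n] {
                added = append(added, n)
            }
        }
        for _, o := range olds {
            if !newSet[o] {
                removed = append(removed, o)
            }
        }
        return added, removed
    }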
2017-05-26 14:54:35 -07:00
Luke Hoban 7f8b1e59c1 Support for lambdas (#158)
Resolves #137.

This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.

A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.

LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.

Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
2017-05-25 16:55:14 -07:00
Luke Hoban 2a036c8693 More CLIDL -> LUMIDL updates 2017-05-18 17:21:08 -07:00
joeduffy ce1dc4e30b Fix an erroneous reference to lumi env deploy 2017-05-18 15:54:40 -07:00
joeduffy 4108c51549 Reclassify Lumi under the Apache 2.0 license
This is part of pulumi/lumi#147.
2017-05-18 14:51:52 -07:00
joeduffy b7f3d447a1 Preserve the lumi prefix on our CLI tools
This change keeps the lumi prefix on our CLI tools.

As @lukehoban pointed out in person, as soon as we do pulumi/coconut#98,
most people (other than compiler authors themselves) won't actually be
typing the commands.  And, furthermore, the commands aren't all that bad.

Eventually I assume we'll want something like `lumi-js`, or
`lumi-js-compiler`, so that binaries are discovered dynamically in a way
that is extensible for future languages.  We can tackle this during #98.
2017-05-18 12:38:58 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy 82e3624ea1 Implement property accessors
This change implements property accessors (getters and setters).

The approach is fairly basic, but is heavily inspired by the ECMAScript5
approach of attaching a getter/setter to any property slot (even if we don't
yet fully exploit this capability).  The evaluator then needs to track and
utilize the appropriate accessor functions when loading locations.

This change includes CocoJS support and makes a dent in pulumi/coconut#66.
2017-05-15 17:46:14 -07:00
joeduffy 6822139406 Add a reference to x variable in test case 2017-05-04 11:04:28 -07:00
joeduffy 0de32db954 Add support for local functions
This change, part of pulumi/coconut#62, adds support for ECMAScript
local functions.  This leverages the recent support for lambdas.
The change also adds some new test cases for the various forms.

Here are some examples of supported forms:

    function outer() {
        // simple named inner function:
        function inner1() { .. };
        // anonymous inner function (just a lambda):
        let inner2 = function() { ... };
        // named and bound inner function:
        let inner3 = function inner4() { ... };
    }

These merely compile into lambdas that have been bound to local
variables with the appropriate names.
2017-05-04 10:57:26 -07:00
joeduffy fde88b7cf4 Permit Statements in SequenceExpressions
The previous shape of SequenceExpression only permitted expressions
in the sequence.  This is pretty common in most ILs; however, it usually
leads to complicated manual spilling in the event that a statement is needed.
This is often necessary when, for example, a compiler is deeply nested in some
expression production, and then realizes the code expansion requires a
statement (e.g., maybe a new local variable must be declared, etc).

Instead of requiring complicated code-gen, this change permits SequenceExpression
to contain an arbitrary mixture of expression/statement prelude nodes, terminating
with a single, final Expression which yields the actual expression value.  The
runtime bears the burden of implementing this which, frankly, is pretty trivial.
2017-05-04 10:54:07 -07:00
joeduffy 748432299a Implement lambdas in CocoJS
This change recognizes and emits lambdas correctly in CocoJS (as part
of pulumi/coconut#62).  The existing CocoIL representation for lambdas
worked just fine for functions, lambdas, and local functions.  There
still isn't runtime support, but that comes next.
2017-05-04 10:01:05 -07:00
joeduffy 4e5140251b Implement support for computed property initializers
I've tripped over pulumi/coconut#141 a few times now, particularly with
the sort of dynamic payloads required when creating lambdas and API gateways.
This change implements support for computed property initializers.
2017-05-01 17:11:57 -07:00
joeduffy 5c156a43cf Permit missing symbols in more places 2017-05-01 10:11:20 -07:00
joeduffy 815aa26282 Improve a contract.fail error message 2017-05-01 09:46:59 -07:00
joeduffy 553462bbfd Lower level for transformSourceFile logging
This changes the CocoJS log-level for logging about transforming a file
so that it shows up in --verbose logging.
2017-05-01 09:45:09 -07:00
joeduffy 69df382da9 Update the CIDLC README with build, running, etc. instructions 2017-04-30 08:36:57 -07:00
joeduffy 954d594e94 Rename --recurse to --recursive
My muscle memory kicked in (grep, et al), and then I realized the
name wasn't quite right.  This rights a wrong.
2017-04-28 10:37:05 -07:00
joeduffy af3949509a Implement CIDLC support for package imports
This change correctly implements package/module resolution in CIDLC.
This only works for intra-package imports, which is sufficient
for now.  Eventually we will need to support imports across packages (see pulumi/coconut#138).
2017-04-28 10:31:18 -07:00
joeduffy 46227870e4 Implement a few CIDLC improvements
* Allow `interface{}` to mean "weakly typed property bag."

* Allow slices in IDL types.

* Permit the package base as an argument.
2017-04-27 15:40:51 -07:00
joeduffy 3f54c672be Fix/alter a few aspects of RPC code-generation
* Use --out-rpc, rather than --out-provider, since rpc/ is a peer to provider/.

* Use strongly typed tokens in more places.

* Append "rpc" to the generated RPC package names to avoid conflicts.

* Change the Check function to return []mapper.FieldError, rather than
  mapper.DecodeError, to make the common "no errors" case easier (and to eliminate
  boilerplate resulting from needing to conditionally construct a mapper.DecodeError).

* Rename the diffs argument to just diff, matching the existing convention.

* Automatically detect changes to "replaces" properties in the PreviewUpdate
  function.  This eliminates tons of boilerplate in the providers and handles the
  90% common case for resource recreation.  It's still possible to override the
  PreviewUpdate logic, of course, in case there is more sophisticated recreation
  logic necessary than just whether a property changed or not.

* Add some comments on some generated types.

* Generate property constants for the names as they will appear in weakly typed
  property bags.  Although the new RPC interfaces are almost entirely strongly
  typed, in the event that diffs must be inspected, this often devolves into using
  maps and so on.  It's much nicer to say `if diff.Changed(SecurityGroup_Description)`
  than `if diff.Changed("description")` (and catches more errors at compile-time);
  a sketch of the generated constants appears below.

* Fix resource ID generation logic to properly fetch the Underlying() type on
  named types (this would sometimes miss resources during property analysis, emitting
  for example `*VPC` instead of `*resource.ID`).
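
For instance, the generated constants might look roughly like this.  The names
and values here only illustrate the pattern; the actual generated output may
differ:

    package awsrpcsketch

    // Property-name constants for the weakly typed property bag, so call sites
    // can write diff.Changed(SecurityGroup_Description) instead of a raw string.
    const (
        SecurityGroup_Description = "description"
        SecurityGroup_GroupName   = "groupName"
    )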
2017-04-27 10:36:22 -07:00
joeduffy 507a2609a7 Add an initial implementation of CIDLC
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.

I've been kicking the tires with this locally enough to checkpoint the
current version.  There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
2017-04-25 15:05:51 -07:00
joeduffy aa44b46608 Lower instanceof in CocoJS; implement IsInst in CocoIL 2017-04-20 17:38:15 -07:00
joeduffy 94e072c653 Add a TryLoadDynamicExpression IL opcode
This change introduces TryLoadDynamicExpression.  This is similar to
the existing LoadDynamicExpression opcode, except that it will return
null in response to a missing member (versus the default of raising
an exception).  This is to enable languages like JavaScript -- where such
operations always yield undefined/null -- to encode them properly, while still
catering to languages like Python (which throw exceptions).
2017-04-19 16:49:59 -07:00
joeduffy 0977477f95 Improve an assertion message 2017-04-19 15:23:05 -07:00
joeduffy f429bc6a0c Use github.com/pkg/errors for errors
This change moves us over to the github.com/pkg/errors package to
encourage the addition of more context associated with failures.
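
For example, wrapping an error with call-site context (and printing the full
chain with %+v) looks like this; the file name is just a placeholder:

    package main

    import (
        "fmt"
        "os"

        "github.com/pkg/errors"
    )

    func readProject(path string) ([]byte, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            // Wrapf records the underlying cause plus a message and stack trace.
            return nil, errors.Wrapf(err, "loading project file %s", path)
        }
        return b, nil
    }

    func main() {
        if _, err := readProject("Coconut.yaml"); err != nil {
            fmt.Printf("%+v\n", err) // %+v prints the cause and the wrap site
        }
    }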
2017-04-19 14:46:50 -07:00
joeduffy 847d74c9f6 Implement rudimentary decorator support
This change introduces decorator support for CocoJS and the corresponding
IL/AST changes to store them on definition nodes.  Nothing consumes these
at the moment, however, I am looking at leveraging this to indicate that
certain program fragments are "code" and should be serialized specially
(in support of Functions-as-lambdas).
2017-04-18 16:53:26 -07:00
joeduffy 4989e70425 Upgrade to TypeScript 2.2.2 2017-04-18 15:57:13 -07:00
joeduffy 5516ab64bf Quote property values
This uses %q to quote property values when printing them.  This ensures
that control characters are escaped (like \n), in addition to replacing any
unprintable characters with the appropriate escape sequence.  Both show up
nicer in the output for planning commands, etc.
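
A quick illustration of the difference between %v and %q on a value containing
control characters:

    package main

    import "fmt"

    func main() {
        v := "hello\nworld\t\x01"
        fmt.Printf("%v\n", v) // raw: the newline, tab, and control char leak into the output
        fmt.Printf("%q\n", v) // quoted: prints "hello\nworld\t\x01"
    }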
2017-04-17 12:02:42 -07:00
joeduffy f329df599a Add a cmd/cocogo tool
This change introduces the scaffolding for a new cmd/cocogo tool,
as part of pulumi/coconut#133.  The idea here is to do some very
rudimentary code-gen on a subset of Go, to ease the task of writing
providers.  The README describes this in more detail.  Eventually
this will presumably expand to being a peer language to CocoPy,
etc., in that real code can be written, but for now it's mostly IDL.

At the moment, the tool really doesn't do anything useful, other
than loading up, parsing, semantically validating, and spewing
some information about the Go packages passed at the command line.
2017-04-13 05:29:19 -07:00
joeduffy 6b4cab557f Refactor glog init swizzle to a shared package 2017-04-13 05:27:45 -07:00
joeduffy ae1e43ce5d Refactor shared command bits into pkg/cmdutil
This paves the way for more Go-based command line tools that can
share some of the common utility functions around diagnostics and
exit codes.
2017-04-12 11:12:25 -07:00
joeduffy 0af93f0989 Rearrange a little bit of the Coconut cmd scaffolding 2017-04-12 11:04:04 -07:00
joeduffy 9d7bbcfa78 Restructure source layout for tools
This change restructures the overall structure for commands so that
all top-level tools are in the cmd/ directory, alongside the primary
coco command.  This is more "idiomatic Go" in its layout, and makes
room for additional command line tools (like cocogo for IDL).
2017-04-12 10:38:12 -07:00
joeduffy e96d4018ae Switch to imports as statements
The old model for imports was to use top-level declarations on the
enclosing module itself.  This was a laudable attempt to simplify
matters, but just doesn't work.

For one, the order of initialization doesn't precisely correspond
to the imports as they appear in the source code.  This could incur
some weird module initialization problems that lead to differing
behavior between a language and its Coconut variant.

But more pressingly, as we work on CocoPy support, it doesn't give
us an opportunity to dynamically bind names in a correct way.  For
example, "import aws" now needs to actually translate into a variable
declaration and assignment of sorts.  Furthermore, that variable name
should be visible in the environment block in which it occurs.

This change switches imports to act like statements.  For the most
part this doesn't change much compared to the old model.  The common
pattern of declaring imports at the top of a file will translate to
the imports happening at the top of the module's initializer.  This
has the effect of initializing the transitive closure just as it
happened previously.  But it enables alternative models, like imports
inside of functions, and -- per the above -- dynamic name binding.
2017-04-08 18:16:10 -07:00
joeduffy f773000ef9 Implement dynamic loads from the environment
This rearranges the way dynamic loads work a bit.  Previously, they
required an object, and did a dynamic lookup in the object's property
map.  For real dynamic loads -- of the kind Python uses, obviously,
but also ECMAScript -- we need to search the "environment".

This change searches the environment by looking first in the lexical
scope in the current function.  If a variable exists, we will use it.
If that misses, we then look in the module scope.  If a variable exists
there, we will use it.  Otherwise, if the variable is used in a non-lval
position, a dynamic error will be raised ("name not declared").  If
an lval, however, we will lazily allocate a slot for it.

Note that Python doesn't use block scoping in the same way that most
languages do.  This behavior is simply achieved by Python not emitting
any lexically scoped blocks other than at the function level.

This doesn't perfectly achieve the scoping behavior, because we don't
yet bind every name in a way that they can be dynamically discovered.
The two obvious cases are class names and import names.  Those will be
covered in a subsequent commit.

Also note that we are getting lucky here that class static/instance
variables aren't accessible in Python or ECMAScript "ambiently" like
they are in some languages (e.g., C#, Java); as a result, we don't need
to introduce a class scope in the dynamic lookup.  Some day, when we
want to support such languages, we'll need to think about how to let
languages control the environment probe order; for instance, perhaps
the LoadDynamicExpression node can have an "environment" property.
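
The probe order described above reads roughly like this in Go.  This is purely
illustrative; the real evaluator works over richer scope and slot structures:

    package scopesketch

    import "fmt"

    // loadDynamic resolves a name by searching the lexical scope and then the
    // module scope; unknown names fail for reads but lazily get a slot for writes.
    func loadDynamic(name string, lexical, module map[string]interface{}, lval bool) (interface{}, error) {
        if v, ok := lexical[name]; ok {
            return v, nil
        }
        if v, ok := module[name]; ok {
            return v, nil
        }
        if !lval {
            return nil, fmt.Errorf("name '%s' is not declared", name)
        }
        lexical[name] = nil // assignment target: allocate a slot on demand
        return nil, nil
    }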
2017-04-08 16:47:15 -07:00
joeduffy 9c1ea1f161 Fix some poor hygiene
A few linty things crept in; this addresses them.
2017-04-08 07:44:02 -07:00
joeduffy d6fd6c244a Add the ability to output a plan as a DOT
We already had the ability to manually execute a CocoPack and generate
a DOT from its object graph.  However, for demo purposes we also want
to be able to generate one from the plan.  This adds a --dot flag to plan.
2017-03-23 08:10:33 -07:00
joeduffy 662404c1cb Require delete confirmations to match env name
This changes from "yes" to requiring an exact match of the
environment name, as is common in CLI tools like this.
2017-03-23 07:36:27 -07:00
joeduffy 3d74eac67d Make major commands more pleasant
This change eliminates the need to constantly type in the environment
name when performing major commands like configuration, planning, and
deployment.  It's probably due to my age; however, I keep fat-fingering
simple commands in front of investors, and I am embarrassed!

In the new model, there is a notion of a "current environment", and
I have modeled it kinda sorta just like Git's notion of "current branch."

By default, the current environment is set when you `init` something.
Otherwise, there is the `coco env select <env>` command to change it.
(Running this command w/out a new <env> will show you the current one.)

The major commands `config`, `plan`, `deploy`, and `destroy` will prefer
to use the current environment, unless it is overridden by using the
--env flag.  All of the `coco env <cmd> <env>` commands still require the
explicit passing of an environment which seems reasonable since they are,
after all, about manipulating environments.

As part of this, I've overhauled the aging workspace settings cruft,
which had fallen into disrepair since the initial prototype.
2017-03-21 19:23:32 -07:00
joeduffy 5d14430121 Don't count replacement steps unless explicitly requested 2017-03-15 16:56:23 -07:00
joeduffy e091bde692 Add a plan command; move env destroy to just destroy
This change adds a `coco plan` command which is simply a shortcut
to the more verbose `coco deploy --dry-run`.  This will make demos
flow nicer and elevates planning, an important activity, to a more
prominent position.  The `--dry-run` (aka `-n`) flag is still there.

This change also renames `coco env destroy` to just `coco destroy`.
This is consistent with deploy and plan being at the top-level.  We
now use `coco env` purely for environment management commands (init,
config, rm, etc).
2017-03-15 15:40:06 -07:00
joeduffy fe1a32c086 Eliminate "fatal" from basic error messages
The word "fatal" makes it look like Coconut did something wrong, when in fact,
these messages are used to convey mis-usage of the command/argument/etc.
2017-03-15 12:16:17 -07:00
joeduffy 95f59273c8 Update copyright notices from 2016 to 2017 2017-03-14 19:26:14 -07:00
joeduffy 90d3d4dd80 Only queue up analyzers if !delete 2017-03-13 07:07:50 -07:00
joeduffy 5dc252053a Fix a slight diffing formatting bug 2017-03-11 10:43:42 -08:00
joeduffy 705880cb7f Add the ability to specify analyzers
This change adds the ability to specify analyzers in two ways:

1) By listing them in the project file, for example:

        analyzers:
            - acmecorp/security
            - acmecorp/gitflow

2) By explicitly listing them on the CLI, as a "one off":

        $ coco deploy <env> \
            --analyzer=acmecorp/security \
            --analyzer=acmecorp/gitflow

This closes out pulumi/coconut#119.
2017-03-11 10:07:34 -08:00
joeduffy 45064d6299 Add basic analyzer support
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119.  In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource.  The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure properties of them are correct, such as checking style,
security, etc. rules).  These run simultaneously with the overall checking.

Analyzers are loaded as plugins just like providers are.  The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.

This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
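
Rendered as a Go interface, the two methods might look as follows.  The real
definitions are gRPC methods; the types and signatures below are only an
approximation:

    package analyzersketch

    // Failure is a single problem reported by an analyzer.
    type Failure struct {
        Message string
    }

    // Analyzer captures the two RPC methods described above in Go form.
    type Analyzer interface {
        // Analyze checks an overall deployment (e.g., that it has been signed off on).
        Analyze(pkg string) ([]Failure, error)
        // AnalyzeResource checks a single resource's properties (style, security, ...).
        AnalyzeResource(t string, props map[string]interface{}) ([]Failure, error)
    }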
2017-03-10 23:49:17 -08:00
joeduffy 361eb62e7b Move coco env deploy to the top-level, coco deploy
Deployments are central to the entire system; although technically
a deployment is indeed associated with an environment, the deployment
is the focus, not the environment, so it makes sense to put the
deployment command at the top-level.

Before, you'd say:

    $ coco env deploy production

And now, you will say:

    $ coco deploy production
2017-03-10 13:17:55 -08:00
joeduffy 783f9534c8 Add the ability to specify an env config during eval
This adds the --config-env flag which can be used to apply configuration
before performing evaluation of a package.
2017-03-09 15:52:50 +00:00
joeduffy cbf5407a53 Print results only if non-nil, during eval 2017-03-09 15:45:24 +00:00
joeduffy bfee271087 Rename the coco nut command to coco pack 2017-03-09 15:43:28 +00:00
joeduffy 9f524e4c8c Organize all package management commands
This change organizes all package management commands underneath
the top-level subcommand `nut`; so, for example:

    $ nut get ...
    $ nut eval ...
    and so on
2017-03-08 11:41:13 +00:00
joeduffy 3b3b56a836 Properly reap child processes
This change reaps child plugin processes before exiting.  It also hardens
some of the exit paths to avoid os.Exiting from the middle of a callstack.
2017-03-07 13:47:42 +00:00
joeduffy d94f9d4768 Implement a very basic env config command
This change implements a very basic `coco env config` command, that
lets you read, set, or unset configuration values for an environment.

For a single environment, these four usage styles are supported:

    # query all values in a given environment <env>:
    $ coco env config <env>

    # query a single value with key <key> in a given environment <env>:
    $ coco env config <env> <key>

    # set a single value with key <key> and value <value> in <env>:
    $ coco env config <env> <key> <value>

    # unset a single value with key <key> in the environment <env>:
    $ coco env config <env> <key> --unset

This is a vast subset of pulumi/coconut#113.
2017-03-06 15:07:24 +00:00
joeduffy 86dc13ed5b More term rotations
This changes a few naming things:

* Rename "husk" to "environment" (`coco env` for short).

* Rename NutPack/NutIL to CocoPack/CocoIL.

* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.

* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.

* Rename the package asset directory from nutpack/ to .coconut/.
2017-03-06 14:32:39 +00:00
joeduffy 6194a59798 Add a pre-pass to validate resources before creating/updating
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties.  This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caught early
before it's too late).  My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.

This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet.  I'll add a work item now for that...
2017-03-02 18:15:38 -08:00
joeduffy 076d689a05 Rename Monikers to URNs
This change is mostly just a rename of Moniker to URN.  It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn🥥<name>", where <name> is the same as the prior Moniker.

This is a minor step that helps to prepare us for pulumi/coconut#109.
2017-03-02 17:10:10 -08:00
joeduffy 341c30f0c8 Issue deploy errors in the after callback
This just orders the output more nicely; previously, "step #n failed"
would come *before* the error detailing the reason.  This was a bit
confusing.  This change reorders them so the error reads more naturally.
2017-03-02 15:46:14 -08:00
joeduffy 523c669a03 Track which updates triggered a replacement
This change tracks which updates triggered a replacement.  This enables
better output and diagnostics.  For example, we now colorize those
properties differently in the output.  This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.
2017-03-02 15:24:39 -08:00
joeduffy e3715ef836 Add some handy aliases for deploy and init 2017-03-02 11:50:29 -08:00
joeduffy bd613a33e6 Make replacement first class
This change, part of pulumi/coconut#105, rearranges support for
resource replacement.  The old model didn't properly account for
the cascading updates and possible replacement of dependencies.

Namely, we need to model a replacement as a creation followed by
a deletion, inserted into the overall DAG correctly so that any
resources that must be updated are updated after the creation but
prior to the deletion.  This is done by inserting *three* nodes
into the graph per replacement: a physical creation step, a
physical deletion step, and a logical replacement step.  The logical
step simply makes it nicer in the output (the plan output shows
a single "replacement" rather than the fine-grained outputs, unless
they are requested with --show-replace-steps).  It also makes it
easier to fold all of the edges into a single linchpin node.

As part of this, the update step no longer gets to choose whether
to recreate the resource.  Instead, the engine takes care of
orchestrating the replacement through actual create and delete calls.
2017-03-02 09:52:08 -08:00
joeduffy df3c0dcb7d Display and colorize replacements distinctly 2017-03-01 13:34:29 -08:00
joeduffy f93e093ab3 Unify some CLI error reporting
This unifies some of the CLI error reporting logic.  It's still
not perfect, but this tidies up some minor issues that were starting
to annoy me (e.g., inconsistencies in message formatting, message
colorization, and exit code handling).
2017-03-01 10:09:27 -08:00
joeduffy 49f5f3debc Add a distinct husk rm command
This changes the workflow for destroying a husk slightly.  Rather than
`coco husk destroy` actually removing the husk and its associated config
information, `coco husk destroy` just destroys the resources.  Then,
afterwards, to permanently remove the husk, you use `coco husk rm`.

As usual with `rm`-style commands, it refuses to remove the husk if there
are any resources still associated with it; however, `--force` overrides
this default.
2017-03-01 09:57:14 -08:00
joeduffy fe0bb4a265 Support replacement IDs
This change introduces a new RPC function to the provider interface;
in pseudo-code:

    UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap)
        (bool, PropertyMap, error)

Essentially, during the planning phase, we will consult each provider
about the nature of a proposed update.  This update includes a set of
old properties and the new ones and, if the resource provider will need
to replace the property as a result of the update, it will return true;
in general, the PropertyMap will eventually contain a list of all
properties that will be modified as a result of the operation (see below).

The planning phase reacts to this by propagating the change to dependent
resources, so that they know that the ID will change (and so that they
can recalculate their own state accordingly, possibly leading to a ripple
effect).  This ensures the overall DAG / schedule is ordered correctly.

This change is most of pulumi/coconut#105.  The only missing piece
is to generalize replacing the "ID" property with replacing arbitrary
properties; there are hooks in here for this, but until pulumi/coconut#90
is addressed, it doesn't make sense to make much progress on this.
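
Rendered as a Go interface, the new method from the pseudo-code above might
look like the sketch below; the ID, Type, and PropertyMap types here are
placeholders standing in for the real resource abstractions:

    package providersketch

    type (
        ID          string
        Type        string
        PropertyMap map[string]interface{}
    )

    // Updater stands in for the provider interface gaining UpdateImpact.
    type Updater interface {
        // UpdateImpact reports whether moving from olds to news forces the
        // resource to be replaced, and which properties would change as a result.
        UpdateImpact(id ID, t Type, olds, news PropertyMap) (replace bool, changes PropertyMap, err error)
    }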
2017-03-01 09:08:53 -08:00
joeduffy a4e806a07c Remember old moniker to ID mappings
For certain update shapes, we will need to recover an ID of an already-deleted,
or soon-to-be-deleted resource; in those cases, we have a moniker but want to
serialize an ID.  This change implements support for remembering/recovering them.
2017-02-28 17:03:33 -08:00
joeduffy 7f53727575 Require the full --yes for destroys 2017-02-28 16:44:46 -08:00
joeduffy 632bb357da Remove superfluous indentation 2017-02-28 16:30:20 -08:00
joeduffy cf2788a254 Allow restarting from partial failures
This change fixes a couple issues that prevented restarting a
deployment after partial failure; this was due to the fact that
unchanged resources didn't propagate IDs from old to new.  This
is remedied by making unchanged a map from new to old, and making
ID propagation the first thing plan application does.
2017-02-28 16:09:56 -08:00
joeduffy 6a2edc9159 Ensure configuration round-trips in Huskfiles 2017-02-28 15:43:46 -08:00
joeduffy 300f87137c Improve verify; verify packages before install
This change improves the verify command by unifying its package
discovery logic with compile.  All libraries are also now verified
before installing, just to catch silly mistakes (compiler bugs, etc).

This also fixes a verification error in the AWS library due to
pulumi/coconut#104, the inability to use `!` on "anything".
2017-02-28 12:31:50 -08:00
joeduffy 7f0a97a4e3 Print configuration variables; etc.
This change does a few things:

* First and foremost, it tracks configuration variables that are
  initialized, and optionally prints them out as part of the
  prelude/header (based on --show-config), both in a dry-run (plan)
  and in an actual deployment (apply).

* It tidies up some of the colorization and messages, and includes
  nice banners like "Deploying changes:", etc.

* Fix an assertion.

* Issue a new error

      "One or more errors occurred while applying X's configuration"

  just to make it easier to distinguish configuration-specific
  failures from ordinary ones.

* Change config keys to tokens.Token, not tokens.ModuleMember,
  since it is legal for keys to represent class members (statics).
2017-02-28 10:32:24 -08:00
joeduffy d91b04d8f4 Support config maps
This change adds support for configuration maps.

This is a new feature that permits initialization code to come from markup,
after compilation, but before evaluation.  There is nothing special with this
code as it could have been authored by a user.  But it offers a convenient
way to specialize configuration settings per target husk, without needing
to write code to specialize each of those husks (which is needlessly complex).

For example, let's say we want to have two husks, one in AWS's us-west-1
region, and the other in us-east-2.  From the same source package, we can
just create two husks, let's say "prod-west" and "prod-east":

    prod-west.json:
    {
        "husk": "prod-west",
        "config": {
            "aws:config:region": "us-west-1"
        }
    }

    prod-east.json:
    {
        "husk": "prod-east",
        "config": {
            "aws:config:region": "us-east-2"
        }
    }

Now when we evaluate these packages, they will automatically poke the
right configuration variables in the AWS package *before* actually
evaluating the CocoJS package contents.  As a result, the static variable
"region" in the "aws:config" package will have the desired value.

This is obviously fairly general purpose, but will allow us to experiment
with different schemes and patterns.  Also, I need to whip up support
for secrets, but that is a task for another day (perhaps tomorrow).
2017-02-27 19:43:54 -08:00
joeduffy 371a847eb9 Unify a bit of command logic, and hoist some failure modes 2017-02-27 14:13:27 -08:00
joeduffy 73babc13a0 Add confirmation for destroy 2017-02-27 13:53:15 -08:00
joeduffy eca5c38406 Fix a handful of update-related issues
* Delete husks if err == nil, not err != nil.

* Swizzle the formatting padding on array elements so that the
  diff modifier + or - binds more tightly to the [N] part.

* Print the un-doubly-indented padding for array element headers.

* Add some additional logging to step application (it helped).

* Remember unchanged resources even when glogging is off.
2017-02-27 11:27:36 -08:00
joeduffy 3bdbf17af2 Rename --show-sames to --show-unchanged
Per Eric's feedback.
2017-02-27 11:08:14 -08:00
joeduffy afbd40c960 Add a --show-sames flag
This change adds a --show-sames flag to `coco husk deploy`.  This is
useful as I'm working on updates, to show what resources haven't changed
during a deployment.
2017-02-27 10:58:24 -08:00
joeduffy 88fa0b11ed Checkpoint deployments
This change checkpoints deployments properly.  That is, even in the
face of partial failure, we should keep the huskfile up to date.  This
accomplishes that by tracking the state during plan application.
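
Schematically, the apply loop now persists after every step instead of only
at the end (illustrative only; Step and the checkpoint callback are stand-ins):

    package deploy

    // Step stands in for a single planned resource operation.
    type Step interface {
        Apply() error
    }

    // applyWithCheckpoints runs each step and rewrites the huskfile after
    // every one, so a partial failure still leaves an up-to-date record.
    func applyWithCheckpoints(steps []Step, checkpoint func() error) error {
        for _, step := range steps {
            stepErr := step.Apply()
            if err := checkpoint(); err != nil {
                return err // failing to persist state is the more urgent error
            }
            if stepErr != nil {
                return stepErr
            }
        }
        return nil
    }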

There are still ways in which this can go wrong, however.  Please see
pulumi/coconut#101 for additional thoughts on what we might do here
in the future to make checkpointing more robust in the face of failure.
2017-02-27 10:26:44 -08:00
joeduffy d3ce3cd9c6 Implement a coco husk ls command
This command is handy for development, so I whipped up a quick implementation.
All it does is print all known husks with their associated deployment time
and resource count (if any, or "n/a" for initialized husks with no deployments).
2017-02-26 13:06:33 -08:00
joeduffy 44783cffb7 Don't overwrite unmarshaled deployment info 2017-02-26 12:00:00 -08:00
joeduffy 2116d87f7d Tidy up some messages and error paths 2017-02-26 11:52:44 -08:00
joeduffy 2f60a414c7 Reorganize deployment commands
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit.  This
makes targets ("husks") more of a first-class thing.

Namely, you must first initialize a husk before using it:

    $ coco husk init staging
    Coconut husk 'staging' initialized; ready for deployments

Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments.  The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:

    $ ... make some changes ...
    $ coco husk deploy staging
    ... standard deployment progress spew ...

Finally, should you want to teardown an entire environment:

    $ coco husk destroy staging
    ... standard deletion progress spew for all resources ...
    Coconut husk 'staging' has been destroyed!
2017-02-26 11:20:14 -08:00
joeduffy b3859bd78f Use 0755, rather than 0744, for directories 2017-02-25 10:36:28 -08:00
joeduffy 977b16b2cc Add basic targeting capability
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update.  This simplifies the management of deployment
records, checkpoints, and snapshots.

I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming).  The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:

    nutpack/
        bin/
            Nutpack.json
            ... any other compiled artifacts ...
        husks/
            ... one snapshot per husk ...

For example, if we had a stage and prod husk, we would have:

    nutpack/
        bin/...
        husks/
            prod.json
            stage.json

In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment.  These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.

The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
2017-02-25 09:24:52 -08:00
joeduffy 14762df98b Flip the summarization polarity
This change shows detailed output -- resources, their properties, and
a full articulation of plan steps -- and permits summarization with the
--summary (or -s) flag.
2017-02-25 07:55:22 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy e0440ad312 Print step op labels 2017-02-24 17:44:54 -08:00
joeduffy b43c374905 Fix a few more things about updates
* Eliminate some superfluous "\n"s.

* Remove the redundant properties stored on AWS resources.

* Compute array diff lengths properly (+1).

* Display object property changes from null to non-null as
  adds; and from non-null to null as deletes.

* Fix a boolean expression from ||s to &&s.  (Bone-headed).
2017-02-24 17:02:02 -08:00
joeduffy 53cf9f8b60 Tidy up a few things
* Print a pretty message if the plan has nothing to do:

        "info: nothing to do -- resources are up to date"

* Add an extra validation step after reading in a snapshot,
  so that we detect more errors sooner.  For example, I've
  fed in the wrong file several times, and it just chugs
  along as though it were actually a snapshot.

* Skip printing nulls in most plan outputs.  These just
  clutter up the output.
2017-02-24 16:44:46 -08:00
joeduffy 877fa131eb Detect duplicate object names
This change detects duplicate object names (monikers) and issues a nice
error message with source context included.  For example:

    index.ts(260,22): error MU2006: Duplicate objects with the same name:
        prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group

The prior code asserted and failed abruptly, whereas this actually points
us to the offending line of code:

    let group1 = new aws.ec2.SecurityGroup("group", { ... });
    let group2 = new aws.ec2.SecurityGroup("group", { ... });
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
2017-02-24 16:03:06 -08:00
joeduffy c120f62964 Redo object monikers
This change overhauls the way we do object monikers.  The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions.  The new approach mixes some amount of "automatic scoping"
plus some "explicit naming."  Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.

Each moniker has four parts:

    <Namespace>::<AllocModule>::<Type>::<Name>

wherein each element is the following:

    <Namespace>     The namespace being deployed into
    <AllocModule>   The module in which the object was allocated
    <Type>          The type of the resource
    <Name>          The assigned name of the resource
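
For example, a moniker like
prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group
breaks down as:

    <Namespace>     prod
    <AllocModule>   ec2instance:index
    <Type>          aws:ec2/securityGroup:SecurityGroup
    <Name>          group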

The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.

The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created.  In the future, we may wish to
extend this to also track the module under evaluation.  (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)

The <Name> warrants more discussion.  The resource provider is consulted
via a new gRPC method, Name, that fetches the name.  How the provider
does this is entirely up to it.  For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`); and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).

Overall, this should produce better results with respect to moniker
collisions, the ability to match resources, and the usability of the system.
2017-02-24 14:50:02 -08:00
joeduffy 9dc75da159 Diff and colorize update outputs
This change implements detailed object diffing for purposes of displaying
(and colorizing) updated properties during an update deployment.
2017-02-23 19:03:22 -08:00
joeduffy 86bfe5961d Implement updates
This change is a first whack at implementing updates.

Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order.  For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies).  These are just special cases of the more
general idea of performing DAG operations in dependency order.

Updates must work in terms of this more general notion.  For example:

* It is an error to delete a resource while another refers to it; thus,
  resources are deleted after deleting dependents, or after updating
  dependent properties that reference the resource to new values.

* It is an error to depend on a resource before it is created; thus,
  resources must be created before dependents are created, and/or
  before updates to existing resource properties that would cause them
  to refer to the new resource.

Of course, all of this is tangled up in a graph of dependencies.  As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.

To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
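
Ordering-wise, this boils down to a standard topological sort over the plan
DAG, along the lines of the following sketch (self-contained and simplified;
the real planGraph tracks far more than step names):

    package deploy

    // topsort orders steps so that every step runs only after all of the
    // steps it depends upon.  deps maps a step to its dependencies.
    func topsort(steps []string, deps map[string][]string) []string {
        indegree := make(map[string]int, len(steps))
        dependents := make(map[string][]string)
        for _, s := range steps {
            indegree[s] = 0
        }
        for s, ds := range deps {
            for _, d := range ds {
                indegree[s]++
                dependents[d] = append(dependents[d], s)
            }
        }
        var queue, order []string
        for _, s := range steps {
            if indegree[s] == 0 {
                queue = append(queue, s)
            }
        }
        for len(queue) > 0 {
            s := queue[0]
            queue = queue[1:]
            order = append(order, s)
            for _, d := range dependents[s] {
                if indegree[d]--; indegree[d] == 0 {
                    queue = append(queue, d)
                }
            }
        }
        return order // fewer results than steps would indicate a cycle
    }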
2017-02-23 14:56:23 -08:00
joeduffy f00b146481 Echo resource provider outputs
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins' stdout/stderr streams to it.  In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error.  This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs; however, I hope we don't have to deviate too much from this.
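
The wiring itself is little more than a line scanner per stream, roughly
like this (a sketch; the sink callback stands in for the real diagnostics
interface):

    package plugin

    import (
        "bufio"
        "io"
    )

    // forward reads lines from one of a plugin's output streams and reports
    // each one through the given sink.  Hooking stdout to an informational
    // sink and stderr to an error sink yields the behavior described above.
    func forward(r io.Reader, sink func(line string)) error {
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            sink(scanner.Text())
        }
        return scanner.Err()
    }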
2017-02-22 18:53:36 -08:00
joeduffy 2088eef6e9 Provision security group ingress/egress rules 2017-02-22 18:10:36 -08:00
joeduffy ae99e957f9 Fix a few messages and assertions 2017-02-22 14:43:08 -08:00
joeduffy 9c2013baf0 Implement resource snapshot deserialization 2017-02-22 14:32:03 -08:00
joeduffy 8d71771391 Repivot plan/apply commands; prepare for updates
This change repivots the plan/apply commands slightly.  This is largely
in preparation for performing deletes and updates of existing environments.

The old way was slightly confusing and made things appear more "magical"
than they actually are.  Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.

The new way is to offer three commands: create, update, and delete.  Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely.  The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file and a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.

Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands.  This will print out the plan without
actually performing any operations.

All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.
2017-02-22 11:21:26 -08:00