For certain update shapes, we will need to recover the ID of an already-deleted,
or soon-to-be-deleted, resource; in those cases, we have a moniker but want to
serialize an ID. This change implements support for remembering and recovering those IDs.
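Roughly, the mechanism can be sketched as follows; the type and function names
below are hypothetical stand-ins for illustration, not the engine's actual API:

    // Sketch only: remember resource IDs by moniker so they can be recovered
    // after the resource itself has been (or is about to be) deleted.
    type Moniker string
    type ID string

    type idRecorder struct {
        olds map[Moniker]ID
    }

    func newIDRecorder() *idRecorder {
        return &idRecorder{olds: make(map[Moniker]ID)}
    }

    // Remember stashes a resource's ID before it goes away.
    func (r *idRecorder) Remember(m Moniker, id ID) {
        r.olds[m] = id
    }

    // Recover returns the remembered ID, if any, for a moniker whose resource
    // no longer exists in the new state.
    func (r *idRecorder) Recover(m Moniker) (ID, bool) {
        id, ok := r.olds[m]
        return id, ok
    }
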
This change eliminates the hard-coded region from the ec2instance
example, and instead uses the new `aws.config.region` configuration
variable. This makes the code more amenable to multi-instancing.
This change fixes a couple of issues that prevented restarting a
deployment after a partial failure; the root cause was that
unchanged resources didn't propagate their IDs from old to new. This
is remedied by making unchanged a map from new resources to old, and
making ID propagation the first thing plan application does.
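As a sketch (using a stand-in Resource type rather than the engine's real one),
the propagation step amounts to copying IDs across that map before any steps run:

    // Sketch only: unchanged maps each new resource to its old counterpart.
    type Resource struct {
        Moniker string
        ID      string
    }

    // propagateIDs runs before any plan steps execute, so unchanged resources
    // keep the IDs assigned to them during earlier deployments.
    func propagateIDs(unchanged map[*Resource]*Resource) {
        for nw, old := range unchanged {
            if nw.ID == "" {
                nw.ID = old.ID
            }
        }
    }
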
This change adds a requireRegion helper function to the AWS library,
enabling easy fetching of the current region (and/or throwing if the
region hasn't been properly configured).
This change improves the verify command by unifying its package
discovery logic with compile. All libraries are also now verified
before installing, just to catch silly mistakes (compiler bugs, etc.).
This also fixes a verification error in the AWS library due to
pulumi/coconut#104, the inability to use `!` on "anything".
The TypeScript compiler will often transform Anys into Nevers when it
thinks control flow cannot reach a certain point. This isn't handled
gracefully at the moment. This change simply erases such Nevers back to Any.
For now, we will disable the CocoJS library's ECMAScript runtime
layer, because it's causing some troubles. The details -- and
reenabling it -- are tracked in pulumi/coconut#103.
* for..of isn't supported yet (see pulumi/coconut#88); in its
absence, use a regular for loop so that this library can compile.
* List coconut as a dependency.
This change does a few things:
* First and foremost, it tracks configuration variables that are
initialized, and optionally prints them out as part of the
prelude/header (based on --show-config), both in a dry-run (plan)
and in an actual deployment (apply); a rough sketch follows this list.
* It tidies up some of the colorization and messages, and includes
nice banners like "Deploying changes:", etc.
* Fix an assertion.
* Issue a new error
"One or more errors occurred while applying X's configuration"
just to make it easier to distinguish configuration-specific
failures from ordinary ones.
* Change config keys to tokens.Token, not tokens.ModuleMember,
since it is legal for keys to represent class members (statics).
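A rough sketch of that first item, in Go; only tokens.Token comes from the
actual code, while the helper name, map shape, and output format are made up
for illustration (assumes the fmt and sort packages and the project's tokens
package are imported):

    // Sketch only: print initialized configuration variables in the prelude
    // when --show-config was passed; keys are tokens.Token values.
    func showConfigVars(config map[tokens.Token]string, showConfig bool) {
        if !showConfig || len(config) == 0 {
            return
        }
        fmt.Println("Configuration:")
        keys := make([]string, 0, len(config))
        for k := range config {
            keys = append(keys, string(k))
        }
        sort.Strings(keys) // stable ordering for readability
        for _, k := range keys {
            fmt.Printf("    %v: %v\n", k, config[tokens.Token(k)])
        }
    }
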
This change adds support for configuration maps.
This is a new feature that permits initialization code to come from markup,
after compilation, but before evaluation. There is nothing special about this
code, as it could have been authored by a user. But it offers a convenient
way to specialize configuration settings per target husk, without needing
to write code to specialize each of those husks (which would be needlessly complex).
For example, let's say we want to have two husks, one in AWS's us-west-1
region, and the other in us-east-2. From the same source package, we can
just create two husks, let's say "prod-west" and "prod-east":

prod-west.json:

    {
        "husk": "prod-west",
        "config": {
            "aws:config:region": "us-west-1"
        }
    }

prod-east.json:

    {
        "husk": "prod-east",
        "config": {
            "aws:config:region": "us-east-2"
        }
    }

Now when we evaluate these packages, they will automatically poke the
right configuration variables in the AWS package *before* actually
evaluating the CocoJS package contents. As a result, the static variable
"region" in the "aws:config" package will have the desired value.
This is obviously fairly general purpose, but will allow us to experiment
with different schemes and patterns. Also, I need to whip up support
for secrets, but that is a task for another day (perhaps tomorrow).
* Delete husks if err == nil, not err != nil.
* Swizzle the formatting padding on array elements so that the
diff modifier + or - binds more tightly to the [N] part.
* Print the un-doubly-indented padding for array element headers.
* Add some additional logging to step application (it helped).
* Remember unchanged resources even when glogging is off.
This change adds a --show-sames flag to `coco husk deploy`. This is
useful as I'm working on updates, to show what resources haven't changed
during a deployment.
This change checkpoints deployments properly. That is, even in the
face of partial failure, we should keep the huskfile up to date. This
change accomplishes that by tracking the state during plan application.
There are still ways in which this can go wrong, however. Please see
pulumi/coconut#101 for additional thoughts on what we might do here
in the future to make checkpointing more robust in the face of failure.
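The shape of that tracking, as a hedged sketch (Step and the checkpoint
callback are stand-ins, not the real engine types):

    // Sketch only: checkpoint after every step, successful or not, so the
    // huskfile reflects partial progress if a later step fails.
    type Step interface {
        Apply() error
    }

    func applySteps(steps []Step, checkpoint func() error) error {
        for _, step := range steps {
            err := step.Apply()
            if cerr := checkpoint(); cerr != nil {
                return cerr
            }
            if err != nil {
                return err
            }
        }
        return nil
    }
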
This command is handy for development, so I whipped up a quick implementation.
All it does is print all known husks with their associated deployment time
and resource count (if any, or "n/a" for initialized husks with no deployments).
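Its output boils down to something like the following sketch; the types and
column formatting are illustrative, not the real command's (assumes fmt,
strconv, and time are imported):

    // Sketch only: print each known husk with its last deployment time and
    // resource count, or "n/a" if it has never been deployed.
    type huskInfo struct {
        Name         string
        LastDeployed *time.Time
        Resources    int
    }

    func printHusks(husks []huskInfo) {
        fmt.Printf("%-20s %-28s %s\n", "HUSK", "LAST DEPLOYMENT", "RESOURCE COUNT")
        for _, h := range husks {
            last, count := "n/a", "n/a"
            if h.LastDeployed != nil {
                last = h.LastDeployed.Format(time.RFC3339)
                count = strconv.Itoa(h.Resources)
            }
            fmt.Printf("%-20s %-28s %s\n", h.Name, last, count)
        }
    }
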
This change recognizes TextUnmarshaler during object mapping, and
will defer to it when we have a string but are assigning to a
non-string target that implements the interface.
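Concretely, the check looks roughly like this; the function name is
illustrative, while encoding.TextUnmarshaler and the reflect calls are
standard Go (assumes the encoding and reflect packages are imported):

    // Sketch only: when assigning a string to a non-string target, defer to
    // encoding.TextUnmarshaler if the target's address implements it.
    func assignString(target reflect.Value, s string) (handled bool, err error) {
        if target.Kind() == reflect.String {
            return false, nil // ordinary string assignment happens elsewhere
        }
        // UnmarshalText typically has a pointer receiver, so check the address.
        if target.CanAddr() {
            if u, ok := target.Addr().Interface().(encoding.TextUnmarshaler); ok {
                return true, u.UnmarshalText([]byte(s))
            }
        }
        return false, nil
    }
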
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit. This
makes targets ("husks") more of a first class thing.
Namely, you must first initialize a husk before using it:

    $ coco husk init staging
    Coconut husk 'staging' initialized; ready for deployments

Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments. The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:

    $ ... make some changes ...
    $ coco husk deploy staging
    ... standard deployment progress spew ...

Finally, should you want to teardown an entire environment:

    $ coco husk destroy staging
    ... standard deletion progress spew for all resources ...
    Coconut husk 'staging' has been destroyed!

This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update. This simplifies the management of deployment
records, checkpoints, and snapshots.
I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming). The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:

    nutpack/
        bin/
            Nutpack.json
            ... any other compiled artifacts ...
        husks/
            ... one snapshot per husk ...

For example, if we had a stage and prod husk, we would have:

    nutpack/
        bin/...
        husks/
            prod.json
            stage.json

In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment. These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.
The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
This change shows detailed output -- resources, their properties, and
a full articulation of plan steps -- and permits summarization with the
--summary (or -s) flag.