If you use --cloud-url, as in
$ pulumi stack init foo --cloud-url https://x.io
we would silently fall back to logic that creates local stacks.
I realize all of this will get better with the new stack identity
model; in the meantime, let's infer that the user wanted
--remote when --cloud-url is non-empty.
When run without a `--plaintext` or `--secret` argument, `pulumi
config` warns that the value is stored unencrypted and that you can
pass `--secret` to encrypt it. Now, we also mention that `--plaintext`
can be passed explicitly on the command line to avoid the warning.
Fixes #752
Note: This is a minor issue that I didn't get to for M11; it isn't
required for M11 and would be fine merging post-M11.
When you specify a template name explicitly (e.g.
`pulumi new typescript`), we'll try to download the template tarball
without first downloading the JSON list of available templates. The JSON
includes a description used when replacing the `${DESCRIPTION}` string
in template files. Since we didn't download the JSON, we won't have a
description, so we fall back to a default value (`"A Pulumi project."`).
This also happens when specifying `--offline` to use an existing
template under `~/.pulumi/templates`; we won't have a description for
the template, so we fall back to a default description. The fallback
value happens to be the same as the description for each of our current
templates, so no one will currently notice an issue.
For M11, I included initial support for a template manifest file where
the description (and any future metadata) could be stored, but didn't go
as far as actually reading the file.
This change makes it so the CLI actually reads the description from the
manifest file (if it exists), otherwise falling back to the default
value as is done currently. Some minor related cleanup is included in
this change.
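In sketch form, the lookup might look like this (the manifest filename, field names, and helper are illustrative, not the exact implementation):
```
package workspace

import (
	"io/ioutil"
	"path/filepath"

	"gopkg.in/yaml.v2"
)

// defaultDescription is used when no manifest is present.
const defaultDescription = "A Pulumi project."

// templateManifest models the metadata file; Description is the only
// field read today, but other metadata could live here in the future.
type templateManifest struct {
	Description string `yaml:"description"`
}

// templateDescription reads the description from the template's
// manifest, falling back to the default when the file is absent or
// doesn't carry one.
func templateDescription(templateDir string) string {
	b, err := ioutil.ReadFile(filepath.Join(templateDir, "manifest.yaml"))
	if err != nil {
		return defaultDescription
	}
	var m templateManifest
	if err := yaml.Unmarshal(b, &m); err != nil || m.Description == "" {
		return defaultDescription
	}
	return m.Description
}
```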
This fixes two different DST bugs in the logs test, one of which
led to the failure in pulumi/pulumi#1041:
1) The assertion was using ANSIC formatting, which does not contain
a time-zone, and because we are now in DST, the resulting time
didn't parse and format in a round-trippable way. Instead, let's
use RFC3339, and let's be explicit about the timezone.
2) The parseSince("1m30s") would in theory fail if the DST changed
at just the right moment, since the reference time could end up
different from the reference time that the test is using. Let's
expose the reference time to the caller, so that they may decide
how to deal with these situations, and let's use UTC here.
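A minimal, standalone illustration of the difference (not the test code itself):
```
package main

import (
	"fmt"
	"time"
)

func main() {
	// Expose the reference time to the caller and pin it to UTC.
	ref := time.Now().UTC().Truncate(time.Second)

	// RFC3339 carries a zone offset, so it round-trips exactly.
	viaRFC, _ := time.Parse(time.RFC3339, ref.Format(time.RFC3339))
	fmt.Println(ref.Equal(viaRFC)) // true

	// ANSIC has no zone, so Parse assumes UTC; formatting a local
	// time and parsing it back can shift the instant during DST.
	local := ref.In(time.Local)
	viaANSIC, _ := time.Parse(time.ANSIC, local.Format(time.ANSIC))
	fmt.Println(local.Equal(viaANSIC)) // false unless local time is UTC
}
```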
This adds a `pulumi new` command which makes it easy to quickly
create the handful of files needed to get started building an
empty Pulumi project.
Usage:
```
$ pulumi new typescript
```
Or you can leave off the template name, and it will ask you to choose
one:
```
$ pulumi new
Please choose a template:
> javascript
python
typescript
```
The engine now emits events with richer metadata during the
ResourceOutputs and ResourcePre callbacks. The CLI can then use this
information to decide if it should display the event or not and how
much of the event to display.
Options dealing with what to display and how to display it have moved
into the CLI and the engine now emits all information for each event.
The engine now unconditionally emits a new type of event, a
PreludeEvent, which contains the configuration for a stack as well as
an indication if the stack is being previewed or updated. The
responsibility for interpreting the --show-config flag on the command
line is now handled by the CLI, which uses this to decide if it should
print the configuration or not, and then writes the "Previewing
changes" or "Deploying changes" header.
This helper method is only really used for testing, but it should not
allow creating a Key whose namespace contains a colon (ParseKey
would never build such a Key).
config.Key has become a pair of namespace and name. Because the whole
world has not changed yet, there continues to be a way to convert
between a tokens.ModuleMember and config.Key; however, the conversion
from tokens.ModuleMember can now fail (when the module member
is not of the form `<package>:config:<name>`).
Right now, config.Key is a type alias for tokens.ModuleMember. I did a
pass over the codebase such that we use config.Key everywhere it
looked like the value did not leak to some external process (e.g. a
resource provider or a langhost).
Doing this makes it a little clearer (hopefully) where code is
depending on a module member structure (e.g. <package>:config:<value>)
instead of just an opaque type.
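A sketch of what the fallible conversion looks like (simplified relative to the real code):
```
package config

import (
	"fmt"
	"strings"
)

// Key is a (namespace, name) pair rather than an opaque module member.
type Key struct {
	Namespace string
	Name      string
}

// ParseKey converts a string to a Key. Strings of the form
// `<package>:config:<name>` (the legacy tokens.ModuleMember shape) are
// accepted; anything else with extra colons is rejected.
func ParseKey(s string) (Key, error) {
	switch parts := strings.Split(s, ":"); len(parts) {
	case 2:
		return Key{Namespace: parts[0], Name: parts[1]}, nil
	case 3:
		if parts[1] == "config" {
			return Key{Namespace: parts[0], Name: parts[2]}, nil
		}
	}
	return Key{}, fmt.Errorf("could not parse %s as a configuration key", s)
}
```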
As it stands, we only configure those providers for which configuration
is present. This can lead to surprising failure modes if those providers
are then used to create resources. These changes ensure that all
resource providers that are not configured during plan initialization
are configured upon first load.
Fixes #758.
By using untyped deployment structures via `json.RawMessage`, we can
support round-tripping between old CLI clients and newer servers, without
dropping possibly-important information on the floor. I hadn't realized
this design goal with the original system, and after talking to @pgavlin,
I better realized the intent and that we want to preserve this.
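The mechanism, in brief: the deployment payload is carried as an opaque `json.RawMessage`, so an older client re-serializes bytes it doesn't understand instead of dropping them. A simplified sketch:
```
package apitype

import "encoding/json"

// UntypedDeployment wraps a deployment without interpreting it.
type UntypedDeployment struct {
	// Deployment is the opaque, possibly-newer deployment payload.
	Deployment json.RawMessage `json:"deployment"`
}

// roundTrip demonstrates that unmarshal-then-marshal preserves fields
// this client knows nothing about.
func roundTrip(b []byte) ([]byte, error) {
	var d UntypedDeployment
	if err := json.Unmarshal(b, &d); err != nil {
		return nil, err
	}
	return json.Marshal(d)
}
```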
The filenames we used to store history data locally only had second-
level precision. On my machine, the history test can run multiple
`pulumi update` commands in the same second, which causes a newer
history file to overwrite an older one.
This change moves to using a nanosecond-precision timestamp when
writing these files. In addition, the CLI was trying to sort the
updates that came back from the backend (instead of just trusting them
to be in newest-first order, as we documented), so I removed that code
as well.
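A sketch of the filename scheme (the path layout here is illustrative):
```
package main

import (
	"fmt"
	"path/filepath"
	"time"
)

// historyFilePath produces a collision-resistant filename: two updates
// in the same second still get distinct nanosecond timestamps.
func historyFilePath(dir, stack string) string {
	return filepath.Join(dir, fmt.Sprintf(
		"%s-%d.history.json", stack, time.Now().UnixNano()))
}

func main() {
	fmt.Println(historyFilePath(".pulumi/history", "dev"))
}
```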
Migrate configuration from the old model to the new model. The
strategy here is that when we first run `pulumi` we enumerate all of
the stacks from all of the backends we know about and for each stack
get the configuration values from the project and workspace and
promote them into the new file. As we do this, we remove stack
specific config from the workspace and Pulumi.yaml file.
If we are able to upgrade all the stacks we know about, we delete all
global configuration data in the workspace and in Pulumi.yaml as well.
We have a test that ensures upgrades continue to work.
This change updates our configuration model to make it simpler to
understand by removing some features and changing how things are
persisted in files.
Notable changes:
- We've removed the notion of "workspace" vs "project"
config. Now, configuration is always stored in a file next to
`Pulumi.yaml` named `Pulumi.<stack-name>.yaml` (the same file we'd
use for any other stack-specific information we need to persist
in the future).
- We've removed the notion of project wide configuration. Every new
stack gets a completely empty set of configuration and there's no
way to share common values across stacks; instead, the common value
has to be set on each stack.
We retain some of the old code for the configuration system so we can
support upgrading a project in place. That will happen with the next
change.
This change fixes some issues and allows us to close some
others (since they are no longer possible).
Fixes #866. Closes #872. Closes #731.
We are going to be changing the configuration model. To begin, let's
take most of the existing stuff and mark it as "deprecated" so we can
keep the existing behavior (to help transition newer code forward)
while making it clear what APIs should not be called in the
implementation of `pulumi` itself.
Per pulumi/pulumi#984, we will now issue an error if it appears you're
importing a checkpoint from a different stack. This can be overridden
if you know what you're doing (with --force), but in general this is a
sign that you're doing something very wrong that will be hard to undo.
Despite our good progress moving towards having an apitype package,
where our exchange types live and can be shared among the engine and
our services, there were a few major types that were still duplicated.
Resource was the biggest example -- and indeed, the apitype variant
was missing the new Dependencies property -- but there were others,
like Manifest, PluginInfo, etc. These too had semi-random omissions.
This change merges all of these types into the apitype package. This
not only cleans up the redundancy and missing properties, but will
"force the issue" with respect to keeping them in sync and properly
versioning the information in a backwards compatible way.
The resource/stack package still exists as a simple marshaling layer
to and from the engine's core data types.
Finally, I've made the controversial change to share the actual
Deployment data structure at the apitype layer also. This will force
us to confront differences in that data structure similarly, and will
allow us to leverage the strong typing throughout to catch issues.
If currently logged in, `stack init` creates a managed stack. Otherwise, it creates a local
stack. This avoids the need to specify `--local` when not using the service.
As before, `--local` can be passed, which will create a local stack regardless of whether
you are logged in.
A new flag, `--remote`, has been added, which can be passed to indicate a managed stack;
it forces an error if you are not logged into the service.
1. Output different-colored edges for parent-child resource
relationships
2. Allow the changing of edge colors via command-line parameters
3. Allow the skipping of the parent-child graph or the
dependency graph when calculating all edges
This modifies the Graph interface slightly to allow an edge to specify
what color should be used when drawing it.
Some file systems do not record birth times, and BirthTime panics in
these cases. We now guard with HasBirthTime and print n/a when we do
not have a birth time.
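A sketch of the guard, assuming the github.com/djherbis/times package (exact call sites differ):
```
package main

import (
	"fmt"

	"github.com/djherbis/times"
)

// creationTime returns the file's birth time, or "n/a" on file systems
// that do not record one (where calling BirthTime would panic).
func creationTime(path string) string {
	ts, err := times.Stat(path)
	if err != nil || !ts.HasBirthTime() {
		return "n/a"
	}
	return ts.BirthTime().String()
}

func main() {
	fmt.Println(creationTime("/tmp"))
}
```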
This commit does two things:
1. All dependencies of a resource, both implicit and explicit, are
communicated directly to the engine when registering a resource. The
engine keeps track of these dependencies and ultimately serializes
them out to the checkpoint file upon successful deployment.
2. Once a successful deployment is done, the new `pulumi stack
graph` command reads the checkpoint file and outputs the dependency
information within it in the DOT format.
Keeping track of dependency information within the checkpoint file is
desirable for a number of reasons, most notably delete-before-create,
where we want to delete resources before we have created their
replacement when performing an update.
I was reminded of this yesterday with unprintable characters as I
debugged some things on Windows. Inspired by Yarn, this change adds
a new flag --emoji (-e for short) that can be used to control whether
we show ASCII-only characters or not in the console. On Mac, it
defaults to true, and on Windows and Linux, it defaults to false.
This also brings back the retro ASCII-friendly progress spinner
when --emoji is disabled.
This adds support for two things:
* Installing all plugins that a project requires with a single command:
$ pulumi plugin install
* Listing the plugins that this project requires:
$ pulumi plugin ls --project
$ pulumi plugin ls -p
Prior to this change, we had a flat list of files in the
~/.pulumi/plugins directory. This was simple but unfortunately
too naive, since we in fact have multi-file plugins already.
Dumping them in the same directory increases the risk of a
collision. Instead, let's put them in their own directories.
This means, for example, you'll see things like
~/.pulumi/plugins/
resource-aws-v0.11.0-dev-8-g57a0d62/
README.txt
pulumi-resource-aws
Notice that the binary name stays the same -- e.g., in this
case pulumi-resource-aws -- and does not include the version.
This makes it simple to add it to your $PATH in the usual ways
and have it loaded as a preferred location.
This change renames prune to rm, to match what we use for other
similar commands. Someday perhaps we will add a prune that uses
some smarts to prune old plugins, etc.
Also tidy up some minor things about the command. For example,
we now require --all if you want to truly clear the entire plugin
cache. We also print more detail, like the full list of plugins
to be removed, in the confirmation prompt.
This brings back the Node.js language plugin's GetRequiredPlugins
function, reimplemented in Go now that the language host has been
rewritten from JavaScript. Fairly rote translation, along with
some random fixes required to get tests passing again.
This enables you to install a plugin directly from a file, rather
than the default of downloading it from our release share. This is
primarily useful as a test tool, but will also be a useful escape
hatch for 3rd party extensibility, where we do not have a share.
$ pulumi plugin install resource aws v0.1.0 -f my_aws_provider.tgz
This change adds a GetRequiredPlugins RPC method to the language
host, enabling us to query it for its list of plugin requirements.
This is language-specific because it requires looking at the set
of dependencies (e.g., package.json files).
It also adds a call up front during any update/preview operation
to compute the set of plugins and require that they are present.
These plugins are populated in the cache and will be used for all
subsequent plugin-related operations during the engine's activity.
We now cache the language plugins, so that we may load them
eagerly too, which we never did previously due to the fact that
we needed to pass the monitor address at load time. This was a
bit bizarre anyhow, since it's really the Run RPC function that
needs this information. So, to enable caching and eager loading
-- which we need in order to invoke GetRequiredPlugins -- the
"phone home" monitor RPC address is passed at Run time.
In a subsequent change, we will switch to faulting in the plugins
that are missing -- rather than erroring -- in addition to
supporting the `pulumi plugin install` CLI command.
This change implements basic plugin management, but we do not yet
actually use the plugins for anything (that comes next).
Plugins are stored in `~/.pulumi/plugins`, and are expected to be
in the format `pulumi-<KIND>-<NAME>-v<VERSION>[.exe]`. The KIND is
one of `analyzer`, `language`, or `resource`, the NAME is a hyphen-
delimited name (e.g., `aws` or `foo-bar`), and VERSION is the
plugin's semantic version (e.g., `0.9.11`, `1.3.7-beta.a736cf`, etc).
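For illustration, a parser for that naming convention might look like this sketch (not the shipped parser):
```
package main

import (
	"fmt"
	"regexp"
)

// pluginRe matches pulumi-<KIND>-<NAME>-v<VERSION>[.exe].
var pluginRe = regexp.MustCompile(
	`^pulumi-(analyzer|language|resource)-(.+)-v(.+?)(\.exe)?$`)

func parsePluginFilename(file string) (kind, name, version string, ok bool) {
	m := pluginRe.FindStringSubmatch(file)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	fmt.Println(parsePluginFilename("pulumi-resource-aws-v0.9.11"))
	// resource aws 0.9.11 true
}
```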
This commit includes four new CLI commands:
* `pulumi plugin` is the top-level plugin command. It does nothing
but show the help text for associated child commands.
* `pulumi plugin install` can be used to install plugins manually.
If run with no additional arguments, it will compute the set of
plugins used by the current project, and download them all. It
may be run to explicitly download a single plugin, however, by
invoking it as `pulumi plugin install KIND NAME VERSION`. For
example, `pulumi plugin install resource aws v0.9.11`. By default,
this command uses the cloud backend in the usual way to perform the
download, although a separate URL may be given with --cloud-url,
just like all other commands that interact with our backend service.
* `pulumi plugin ls` lists all plugins currently installed in the
plugin cache. It displays some useful statistics, like the size
of the plugin, when it was installed, when it was last used, and
so on. It sorts the display alphabetically by plugin name, and
for plugins with multiple versions, it shows the newest at the top.
The command also summarizes how much disk space is currently being
consumed by the plugin cache. There are no filtering capabilities yet.
* `pulumi plugin prune` will delete plugins from the cache. By
default, when run with no arguments, it will delete everything.
It may be run with additional arguments, KIND, NAME, and VERSION,
each one getting more specific about what it will delete. For
instance, `pulumi plugin prune resource aws` will delete all AWS
plugin versions, while `pulumi plugin prune resource aws <0.9`
will delete all AWS plugins before version 0.9. Unless --yes is
passed, the command will confirm the deletion with a count of how
many plugins will be affected by the command.
We do not actually download plugins on demand yet. That will
come in a subsequent change.
This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.
This includes:
* If `pulumi stack select` is run by itself, use an interactive
CLI menu to let the user select an existing stack, or choose to
create a new one. This looks as follows
$ pulumi stack select
Please choose a stack, or choose to create a new one:
abcdef
babblabblabble
> currentlyselected
defcon
<create a new stack>
and is navigated in the usual way (key up, down, enter).
* If a stack name is passed that does not exist, prompt the user
to ask whether s/he wants to create one on-demand. This hooks
interesting moments in time, like `pulumi stack select foo`,
and cuts down on the need to run additional commands.
* If a current stack is required, but none is currently selected,
then pop the same interactive menu shown above to select one.
Depending on the command being run, we may or may not show the
option to create a new stack (e.g., that doesn't make much sense
when you're running `pulumi destroy`, but might when you're
running `pulumi stack`). This again lets you do with a single
command what would have otherwise entailed an error with multiple
commands to recover from it.
* If you run `pulumi stack init` without any additional arguments,
we interactively prompt for the stack name. Before, we would
error and you'd then need to run `pulumi stack init <name>`.
* Colorize some things nicely; for example, now all prompts will
by default become bright white.
Previously, we would just use normal Go formatting when displaying
output values. This was fine for simple values like strings and ints,
but for arrays or objects, you'd end up with values that looked a
little strange.
We now run the objects through json.Marshal first, to get nicer string
values for more complex objects. However, when the top level value is
a single string, we elide the quotes. This is not true JSON, but it
displays much nicer.
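A sketch of the rule (simplified):
```
package main

import (
	"encoding/json"
	"fmt"
)

// formatOutput renders a stack output: bare text for top-level
// strings, JSON for everything else.
func formatOutput(v interface{}) string {
	if s, ok := v.(string); ok {
		return s // elide the quotes; not true JSON, but nicer to read
	}
	b, err := json.Marshal(v)
	if err != nil {
		return fmt.Sprintf("%v", v) // fall back to Go formatting
	}
	return string(b)
}

func main() {
	fmt.Println(formatOutput("hello"))                     // hello
	fmt.Println(formatOutput([]int{1, 2, 3}))              // [1,2,3]
	fmt.Println(formatOutput(map[string]int{"count": 42})) // {"count":42}
}
```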
When we add something like `--format=json` (see pulumi#496), it will
provide a way to treat output uniformly as JSON.
Fixes #736
This addresses pulumi/pulumi#446: what we used to call "package" is
now called "project". This has gotten more confusing over time, now
that we're doing real package management.
Also fixes pulumi/pulumi#426, while in here.
At one point `pulumi update` was spelled `pulumi push` and we wrote
some help documentation about that. When we changed to `pulumi update`
we did not revise the documentation.
Fixes #925
When reading a configuration value from standard input and standard
input is not connected to a terminal, read until EOF and then trim a
trailing newline (if present) to get the value.
Fixes #822
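The non-interactive path amounts to something like this sketch:
```
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strings"
)

// readValueFromStdin reads to EOF and strips a single trailing
// newline, so `echo secret | pulumi config set ...` doesn't store it.
func readValueFromStdin() (string, error) {
	b, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		return "", err
	}
	return strings.TrimSuffix(string(b), "\n"), nil
}

func main() {
	v, err := readValueFromStdin()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("value: %q\n", v)
}
```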
The existing logic would flow colorization information into the
engine, so depending on the settings in the CLI, the engine may or may
not have emitted colorized events. This coupling is not great and we
want to start moving to a world where the presentation happens
exclusively at the CLI level.
With this change, the engine will always produce strings that have the
colorization formatting directives (i.e. the directives that
reconquest/loreley understands) and the CLI will apply
colorization (which could mean running loreley to turn the directives
into ANSI escape codes, dropping them, or retaining them for
debugging purposes).
Fixes #742
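A sketch of the CLI side (the directive syntax is loreley's; `renderANSI` is a stand-in for actually running loreley):
```
package main

import (
	"fmt"
	"regexp"
)

// directive matches loreley-style <{%...%}> colorization directives.
var directive = regexp.MustCompile(`<\{%.*?%\}>`)

// renderANSI is a stand-in: the real CLI would run reconquest/loreley
// here to compile the directives into ANSI escape codes.
func renderANSI(s string) string { return s }

// colorize applies the CLI's choice to an engine-produced string.
func colorize(s, mode string) string {
	switch mode {
	case "never":
		return directive.ReplaceAllString(s, "") // drop the directives
	case "raw":
		return s // keep directives verbatim, e.g. for test baselines
	default: // "always"
		return renderANSI(s)
	}
}

func main() {
	fmt.Println(colorize("<{%bold%}>hello", "never")) // hello
}
```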
This PR adds a new `pulumi history` command, which prints the update history for a stack.
The local backend stores the update history in a JSON file on disk, next to the checkpoint file. The cloud backend simply provides the update metadata, and expects to receive all the data from a (NYI) `/history` REST endpoint.
`pkg/backend/updates.go` defines the data that is being persisted. The way the data is wired through the system is adding a new `backend.UpdateMetadata` parameter to a Stack/Backend's `Update` and `Destroy` methods.
I use `tests/integration/stack_outputs/` as the simple app for the related tests, hence the addition to the `.gitignore` and fixing the name in the `Pulumi.yaml`.
Fixes #636.
The engine will use this flag to decide whether or not to provide
undefined input property values to resource providers. This is
important because an input property can be defined if it is sourced from
another resource's output property that is not known to be stable (i.e.
that property is not known to be consistent between preview and apply).
Failing to provide these undefined values can then cause input
validation to fail.
`pulumi stack init` defaults to trying to create a stack in the Pulumi
Cloud. If you are not logged in, it prints an error telling you to log
in.
With this change, the error message also points out that you can pass
`--local` to `pulumi stack init` to create the stack locally.
In the Pulumi Cloud, there is no guarantee that two stacks will share
the same encryption key. This means that encrypted config cannot be
shared across stacks (in the Pulumi.yaml file). To mimic this behavior
in the local experience, we now use a unique key per stack.
When upgrading an existing project, for any stack with existing
secrets, we copy the existing key into this stack. Future stacks will
get their own encryption key. This strikes a balance between
expediency of implementation, the end-user UX, and not having to make
a breaking change.
As part of this change, I have introduced a CHANGELOG.md file in the
root of the repository and added a small note about the change to it.
Fixes #769
This PR surfaces the configuration options available to updates, previews, and destroys to the Pulumi Service. As part of this I refactored the options to unify them into a single `engine.UpdateOptions`, since they were all overlapping to various degrees.
With this PR we are adding several new flags to commands, e.g. `--summary` was not available on `pulumi destroy`.
There are also a few minor breaking changes.
- `pulumi destroy --preview` is now `pulumi destroy --dry-run` (to match the actual name of the field).
- The default behavior for "--color" is now `Always`. Previously it was `Always` or `Never` based on the value of a `--debug` flag. (You can specify `--color always` or `--color never` to get the exact behavior.)
Fixes#515, and cleans up the code making some other features slightly easier to add.
These changes refactor the engine's entrypoints--Deploy, Destroy, and
Preview--to be update-centric rather than stack-centric. Each of these
methods now takes a value of a new type, Update, that abstracts away the
vagaries of fetching and maintaining the update's state. This
refactoring also reinforces Pulumi.yaml as a CLI concept rather than an
engine concept; the CLI is now the only reader/writer of this format.
These changes will smooth the way for a few refactorings on the service
side that will aid in update isolation.
These changes add the ability to export a stack's latest deployment or
import a new deployment to a stack via the Pulumi CLI. These
capabilities are exposed by two new verbs under `stack`:
- export, which writes the current stack's latest deployment to stdout
- import, which reads a new deployment from stdin and applies it to the
current stack.
In the local case, this simply involves reading/writing the stack's
latest checkpoint file. In the cloud case, this involves hitting two new
endpoints on the service to perform the export or import.
Use the new {en,de}crypt endpoints in the Pulumi.com API to secure
secret config values. The ciphertext for a secret config value is bound
to the stack to which it applies and cannot be shared with other stacks
(e.g. by copy/pasting it around in Pulumi.yaml). All secrets will need
to be encrypted once per target stack.
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw). The events from the service, however, are still
boolean-valued. This changes the message payload to carry the full values.
The prior behavior with cloud authentication was a bit confusing
when authenticating against anything but https://pulumi.com/. This
change fixes a few aspects of this:
* Improve error messages to differentiate between "authentication
failed" and "you haven't logged into the target cloud URL."
* Default to the cloud you're currently authenticated with, rather
than unconditionally selecting https://pulumi.com/. This ensures
$ pulumi login -c https://api.moolumi.io
$ pulumi stack ls
works, versus what was currently required
$ pulumi login -c https://api.moolumi.io
$ pulumi stack ls -c https://api.moolumi.io
with confusing error messages if you forgot the second -c.
* To do this, our default cloud logic changes to
1) Prefer the explicit -c if supplied;
2) Otherwise, pick the "currently authenticated" cloud; this is
the last cloud to have been targeted with pulumi login, or
otherwise the single cloud in the list if there is only one;
3) https://pulumi.com/ otherwise.
Part of the work to make it easier to write tests of diff output. Specifically, we now allow users to pass --color=option for several pulumi commands. 'option' can be one of 'always', 'never', 'raw', and 'auto' (the default).
The meaning of these flags are:
1. auto: colorize normally, unless in --debug
2. always: always colorize no matter what
3. never: never colorize no matter what.
4. raw: colorize, but preserve the original "<{%%}>" style control codes rather than the translated platform-specific codes. This is for testing purposes and ensures we can have tests for this across platforms.
This does three things:
* Use nice humanized times for update times, to avoid ridiculously
long timestamps consuming lots of horizontal space (see the sketch
after this list). Instead of
LAST UPDATE
2017-12-12 12:22:59.994163319 -0800 PST
we now see
LAST UPDATE
1 day ago
* Use the longest config key for the horizontal spacing when the key
exceeds the default alignment size. This avoids individual lines
wrapping in awkward ways.
* Do the same for stack names and output properties.
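For the humanized times mentioned in the first bullet, a relative-time helper such as github.com/dustin/go-humanize fits (a sketch assuming that library):
```
package main

import (
	"fmt"
	"time"

	"github.com/dustin/go-humanize"
)

func main() {
	lastUpdate := time.Now().Add(-25 * time.Hour)
	fmt.Println(humanize.Time(lastUpdate)) // "1 day ago"
}
```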
This will allow us to remove a lot of current boilerplate in individual tests, and move it into the test harness.
Note that this will require updating users of the integration test framework. By moving to a property bag of inputs, we should avoid needing future breaking changes to this API though.
As articulated in #714, the way config defaults to workspace-local
configuration is a bit error prone, especially now with the cloud
workflow being the default. This change implements several improvements:
* First, --save defaults to true, so that configuration changes will
persist into your project file. If you want the old local workspace
behavior, you can specify --save=false.
* Second, the order in which we applied configuration was a little
strange, because workspace settings overwrote project settings.
The order is changed now so that we take most specific over least
specific configuration. Per-stack is considered more specific
than global and project settings are considered more specific
than workspace.
* We now warn anytime workspace local configuration is used. This
is a developer scenario and can have subtle effects. It is simply
not safe to use in a team environment. In fact, I lost an arm
this morning due to workspace config... and that's why you always
issue warnings for unsafe things.
Instead of unconditionally emitting a message when configuring
values, which is easy to miss, we now print out a more helpful
warning iff you are configuring a plaintext value that you are
also saving to your project.
This change adds a `pulumi stack output` command. When passed no
arguments, it prints all stack output properties, in exactly the
same format as `pulumi stack` does (just without all the other stuff).
More importantly, if you pass a specific output property, a la
`pulumi stack output clusterARN`, just that property will be printed,
in a scriptable-friendly manner. This will help us automate wiring
multiple layers of stacks together during deployments.
This fixes pulumi/pulumi#659.
If a cloud you've previously authenticated with goes away -- as ours
sort of did, because the cloud endpoint in the CLI changed (to actually
be correct) -- then you can't logout without manually editing the
credentials file in your workspace. This is a little annoying. So,
rather than that, let's have a `pulumi logout --all` command that just
logs out of all clouds you are presently authenticated with.
At some point, we fixed a bug in the way state is managed for "same"
steps, which meant that we wouldn't see newly added output properties.
This had the effect that, if you had a stack already stood up, and
updated it to have output properties, we would miss them. (Stacks
stood up from scratch would still have them.) This fixes that problem,
in addition to two other things: 1) we need to sort output property
names to ensure a deterministic ordering, and 2) we need to also
unconditionally apply the outputs RPC coming in, to ensure that the
resulting resource always has the correct outputs (so that for example
deleting prior output properties actually deletes them).
Also add some testing for this area to make sure we don't break again.
Fixes pulumi/pulumi#631.
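The deterministic-ordering fix boils down to iterating a sorted copy of the keys rather than ranging over the map directly, as in this sketch:
```
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns output property names in a stable order; ranging
// over a Go map directly yields a different order on every run.
func sortedKeys(outputs map[string]interface{}) []string {
	keys := make([]string, 0, len(outputs))
	for k := range outputs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	outs := map[string]interface{}{"url": "https://x.io", "arn": "arn:aws:..."}
	for _, k := range sortedKeys(outs) {
		fmt.Printf("%s: %v\n", k, outs[k])
	}
}
```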
We were previously filtering out pulumi:pulumi:Stack resources
from the output of `pulumi stack`. This is perhaps the right thing
to do -- since it's just a logical container and every stack will
contain one -- but it poses problems because the overall experience
right now treats it like a resource. So filtering it is odd in
a few ways: e.g., resource counts look wrong, removing the stack
won't work because there's a hidden resource within it, etc. This
change simply lists it in the output, which seems safe to do for now.
This change implements some feedback from @ellismg.
* Make backend.Stack an interface and let backends implement it,
enabling dynamic type testing/casting to access information
specific to that backend. For instance, the cloud.Stack conveys
the cloud URL, org name, and PPC name, for each stack.
* Similarly expose specialized backend.Backend interfaces,
local.Backend and cloud.Backend, to convey specific information.
* Redo a bunch of the commands in terms of these.
* Keeping with this theme, turn the CreateStack options into an
opaque interface{}, and let the specific backends expose their
own structures with their own settings (like PPC name in cloud).
* Show both the org and PPC names in the cloud column printed in
the stack ls command, in addition to the Pulumi Cloud URL.
Unrelated, but useful:
* Special case the 401 HTTP response and make a friendly error,
to tell the developer they must use `pulumi login`. This is
better than tossing raw "401: Unauthorized" errors in their face.
* Change the "Updating stack '..' in the Pulumi Cloud" message to
use the correct action verb ("Previewing", "Destroying", etc).
This improves the overall cloud CLI experience workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️ https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT CLOUD URL
my-cloud-stack 2017-12-01 ... 3 https://pulumi.com/
my-faf-stack n/a 0 n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface can be implemented to substitute
a new backend. This has operations to get and list stacks,
perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
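A rough sketch of the seam described above (signatures illustrative, not the exact interface):
```
package backend

import "github.com/pulumi/pulumi/pkg/tokens"

// Backend is implemented by pkg/backend/local and pkg/backend/cloud.
type Backend interface {
	Name() string
	GetStack(name tokens.QName) (Stack, error)
	ListStacks() ([]Stack, error)
	// CreateStack accepts opaque backend-specific options (e.g., the
	// PPC name in the cloud case).
	CreateStack(name tokens.QName, opts interface{}) error
}

// Stack wraps a stack managed by some Backend; it knows how to
// perform operations against the backend it came from.
type Stack interface {
	Name() tokens.QName
	Backend() Backend
	Remove(force bool) (bool, error)
}
```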
This change adds a new manifest section to the checkpoint files.
The existing time moves into it, and we add to it the version of
the Pulumi CLI that created it, along with the names, types, and
versions of all plugins used to generate the file. There is a
magic cookie that we also use during verification.
This is to help keep us sane when debugging problems "in the wild,"
and I'm sure we will add more to it over time (checksum, etc).
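The manifest's shape is roughly as follows (a sketch; field names illustrative):
```
package deploy

import "time"

// Manifest captures metadata about who and what produced a checkpoint.
type Manifest struct {
	Time    time.Time    // when the checkpoint was written
	Magic   string       // magic cookie checked during verification
	Version string       // the Pulumi CLI version that created it
	Plugins []PluginInfo // every plugin used to generate the file
}

// PluginInfo records one plugin's identity.
type PluginInfo struct {
	Name    string
	Type    string // "resource", "language", or "analyzer"
	Version string
}
```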
For example, after an up, you can now see this in `pulumi stack`:
```
Current stack is demo:
Last updated at 2017-12-01 13:48:49.815740523 -0800 PST
Pulumi version v0.8.3-79-g1ab99ad
Plugin pulumi-provider-aws [resource] version v0.8.3-22-g4363e77
Plugin pulumi-langhost-nodejs [language] version v0.8.3-79-g77bb6b6
Checkpoint file is /Users/joeduffy/dev/code/src/github.com/pulumi/pulumi-aws/.pulumi/stacks/webserver/demo.json
```
This addresses pulumi/pulumi#628.
This lets us disable integrity checking in case the tool refuses to
proceed and we want to force it, for use as a last resort. Someday
we'll probably flip the polarity to --enable-integrity-checking if
we find that checking takes too long (or maybe add a "quick" option).
This change introduces automatic integrity checking for snapshots.
Hopefully this will help us track down what's going on in
pulumi/pulumi#613. Eventually we probably want to make this opt-in,
or disable it entirely other than for internal Pulumi debugging, but
until we add more complete DAG verification, it's relatively cheap
and is worthwhile to leave on for now.
This PR just wires the `Package.Main` field to the Pulumi Service (and in subsequent PRs, the `pulumi-service` and `pulumi-ppc` repos).
@joeduffy, should we just upload the entire `package.Package` type with the `UpdateProgramRequest` type? I'm not sure we want to treat that type as part of our public API surface area. But on the other hand, we'll need to mirror relevant fields in N places if we don't.
Previously, we were inconsistent on how we handled argument validation
in the CLI. Many commands used cobra.Command's Args property to
provide a validator if they took arguments, but commands which took
no arguments rarely used cobra.NoArgs to indicate this.
This change does two things:
1. Introduce `cmdutil.ArgsFunc` which works like `cmdutil.RunFunc`, it
wraps an existing cobra type and lets us control the behavior when an
arguments validator fails.
2. Ensure every command sets the Args property with an instance of
cmdutil.ArgsFunc. The cmdutil package defines wrappers for all the
cobra validators we are using, to prevent us from having to spell out
`cmdutil.ArgsFunc(...)` everywhere.
Fixes #588
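A sketch of the wrapper's shape (simplified; the real one routes through our standard error handling):
```
package cmdutil

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// ArgsFunc wraps a stock cobra validator so validation failures are
// reported consistently (usage text plus the error) everywhere.
func ArgsFunc(argsValidator cobra.PositionalArgs) cobra.PositionalArgs {
	return func(cmd *cobra.Command, args []string) error {
		if err := argsValidator(cmd, args); err != nil {
			fmt.Fprintln(os.Stderr, cmd.UsageString())
			return err
		}
		return nil
	}
}

// NoArgs is cobra.NoArgs, pre-wrapped so call sites don't have to
// spell out cmdutil.ArgsFunc(...).
var NoArgs = ArgsFunc(cobra.NoArgs)

// MaximumNArgs wraps cobra.MaximumNArgs(n) the same way.
func MaximumNArgs(n int) cobra.PositionalArgs {
	return ArgsFunc(cobra.MaximumNArgs(n))
}
```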
This change simplifies the necessary RPC changes for components.
Instead of a Begin/End pair, which complicates the whole system
because now we have the opportunity of a missing End call, we will
simply let RPCs come in that append outputs to existing states.
We need to invoke the post-step event hook *after* updating the
state snapshots, so that it will write out the updated state.
We also need to re-serialize the snapshot again after we receive
updated output properties, otherwise they could be missing if this
happens to be the last resource (e.g., as in Stacks).
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
Parallelize collection of logs at each layer of the resource tree walk. Since almost all cost of log collection is at leaf nodes, this should allow the total time for log collection to be close to the time taken for the single longest leaf node, instead of the sum of all leaf nodes.
Outside of `.pulumiignore` we support a few "default" excludes that
try to push folks towards a pit of success.
Previously, there was no way to opt out of these, which would be bad
if our heuristics caused something you really care about to be
elided. With this change, we add an optional setting in Pulumi.yaml
that allows you to opt out of this behavior.
As part of this work, I changed .git to be one of these "default"
excludes, instead of it only happening if you had a .pulumiignore file
in a directory.
When working on the `.pulumiignore` support, I added a small command
that would just dump the file we sent to the service (without actually
talking to the service) so I could do some measurements on archive
sizes.
I mentioned this to Chris who said he had hacked up a similar thing
for something he was working on, so it felt worthwhile to check this
command in (hidden behind a flag) to share the implementation.
This command is hidden behind an environment variable, so customers
will not see it unless they opt in.
This change adds back component output properties. Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
Adds support for top-level exports in the main script of a Pulumi Program to be captured as stack-level output properties.
This creates a new `pulumi:pulumi:Stack` component as the root of the resource tree in all Pulumi programs. That resource has properties for each top-level export in the Node.js script.
Running `pulumi stack` will display the current value of these outputs.
A handful of UX improvments for config:
- `pulumi config ls` has been removed. Now, `pulumi config` with no
arguments prints the table of configuration values for a stack and
a new command `pulumi config get <key>` prints the value for a
single configuration key (useful for scripting).
- `pulumi config text` and `pulumi config secret` have been merged
into a single command `pulumi config set`. The flag `--secret` can
be used to encrypt the value we store (like `pulumi config secret`
used to do).
- To make it obvious that setting a value with `pulumi config set`
stores it in plain text, we now echo a message back to the user
saying we added the configuration value in plaintext.
Fixes #552
In an effort to improve performance and overall reliability, this PR moves the responsibility of uploading the Pulumi program from the Pulumi Service to the CLI. (Part of fixing https://github.com/pulumi/pulumi-service/issues/313.)
Previously the CLI would send (the dozens of MiB) program archive to the Service, which would then upload the data to S3. Now the CLI sends the data to S3 directly, avoiding the unnecessary copying of data around.
The Service-side API changes are in https://github.com/pulumi/pulumi-service/pull/323. I tested previews, updates, and destroys running the service and PPC on localhost.
The PR refactors how we handle the three kinds of program updates, and just unifies them into a single method. This makes the diff look crazy, but the code should be much simpler. I'm not sure what to do about supporting all the engine options for the Cloud-variants of Pulumi commands; I suspect that's something that should be handled at a later time.
This is the smallest possible thing that could work for both the local
development case and the case where we are targeting the Pulumi
Service.
Pulling down the pulumiframework and component packages here is a bit
of a feel bad, but we plan to rework the model soon enough to a
provider model which will remove the need for us to hold on to this
code (and will bring back the factoring where the CLI does not have
baked in knowledge of the Pulumi Cloud Framework).
Fixes #527
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.
The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.
Fixes #551.
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.
We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.
The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.
We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.
In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.
We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.
We currently support sending tracing data to a Zipkin-compatible endpoint. This has been validated with both Zipkin and Jaeger UIs.
We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface. There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
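The root span amounts to something like this (using the opentracing-go API; wiring the tracer to a Zipkin endpoint is omitted):
```
package engine

import opentracing "github.com/opentracing/opentracing-go"

// plan wraps a single preview/update/destroy in one root span; the
// gRPC interceptors hang their client/server spans off of it.
func plan() {
	span := opentracing.StartSpan("pulumi-plan")
	defer span.Finish()

	// ... drive the operation; child spans are created at the
	// langhost and resource provider gRPC boundaries ...
}
```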
This code was swallowing an error incorrectly, rather than returning
it. As a result, certain commands would fail with assertions rather
than the intended error message (like trying to config without a stack).
Rework the polling code for `pulumi` when targeting hosted scenarios. (i.e. absorb https://github.com/pulumi/pulumi-service/pull/282.) We now return an `updateID` for update, preview, and destroy, and bake that into the URL.
This PR also silently ignores 504 errors which are now returned from the Pulumi Service if the PPC request times out.
Combined, this should ameliorate some of the symptoms we see from https://github.com/pulumi/pulumi-ppc/issues/60. Though we'll continue to try to identify and fix the root cause.
Previously, we stored configuration information in the Pulumi.yaml
file. This was a change from the old model where configuration was
stored in a special section of the checkpoint file.
While doing things this way has some upsides, like being able to flow
configuration changes with your source code (e.g. fixed values for a
production stack that version with the code), it caused some friction
for the local development scenario. In this case, setting
configuration values would leave pending changes in Pulumi.yaml, and
if you didn't want to publish these changes, you'd have to remember to
remove them before committing. It also was problematic for our
examples, where it was not clear if we wanted to actually include
values like `aws:config:region` in our samples. Finally, we found that
for our own pulumi service, we'd have values that would differ across
each individual dev stack, and publishing these values to a global
Pulumi.yaml file would just be adding noise to things.
We now adopt a hybrid model, where by default configuration is stored
locally, in the workspace's settings per project. A new flag `--save`
tells commands to instead operate on the configuration information
stored in Pulumi.yaml.
With the following change, we now have four "slots" configuration
values can end up in:
1. In the Pulumi.yaml file, applies to all stacks
2. In the Pulumi.yaml file, applied to a specific stack
3. In the local workspace.json file, applied to all stacks
4. In the local workspace.json file, applied to a specific stack
When computing the configuration information for a stack, we apply
configuration in the above order, overriding values as we go
along.
We also invert the default behavior of the `pulumi config` commands so
they operate on a specific stack (i.e. how they did before
e3610989). If you want to apply configuration to all stacks, `--all`
can be passed to any configuration command.
This change introduces an abstraction for a `backend` which manages
the implementation of some CLI commands. As part of defining the
interface, we introduce a new local backend implementation that just
uses data local to the machine.
This will let us share argument parsing and some display information
between the local case and the pulumi.com case in the CLI. We can
continue to refine this interface over time (e.g. today we have the
implementation of the Destroy/Update/Preview actually writing output
but instead they should be returning strongly typed data that the CLI
knows how to display and is unified across Pulumi.com deploys and
local deploys).
But this is a good first step.
Add `Name` (Pulumi project name) and `Runtime` (Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`; as they are now required.
The long story is that as part of the PPC enabling destroy operations, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.
This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
This PR removes three command line parameters from Cloud-enabled Pulumi commands (`update` and `stack init`). Previously we required users to pass in `--organization`, `--repository`, and `--project`. But with the recent "Pulumi repository" changes, we can now get that from the Pulumi workspace. And the project name from the `Pulumi.yaml`.
This PR also fixes a bug that blocked the Cloud-enabled CLI path: `update` was getting the stack name via `explicitOrCurrent`, but that fails if the current stack (e.g. the one just initialized in the cloud) doesn't exist on the local disk.
As for better handling of "current stack" and Cloud-enabled commands, https://github.com/pulumi/pulumi/pull/493 and the PR to enable `stack select`, `stack rm`, and `stack ls` do a better job of handling situations like this.
The last status message from the PPC doesn't include a newline. So the `pulumi` CLI renders any error messages on the same line as the last diagnostic message. Not ideal.
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.
When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.
We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track are "owner" and "name" which map to information
we use on pulumi.com.
When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
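Deconstructing the remote URL might look like this sketch (handling just the two common URL forms):
```
package main

import (
	"fmt"
	"strings"
)

// parseGitHubRemote extracts owner and name from an origin URL.
func parseGitHubRemote(url string) (owner, name string, ok bool) {
	for _, prefix := range []string{"git@github.com:", "https://github.com/"} {
		if strings.HasPrefix(url, prefix) {
			path := strings.TrimSuffix(strings.TrimPrefix(url, prefix), ".git")
			if parts := strings.Split(path, "/"); len(parts) == 2 {
				return parts[0], parts[1], true
			}
		}
	}
	return "", "", false
}

func main() {
	fmt.Println(parseGitHubRemote("git@github.com:pulumi/pulumi.git"))
	// pulumi pulumi true
}
```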
- `pulumi config ls` now does not prompt for a passphrase if there are
secrets, instead ******'s are shown. `--show-secrets` can be passed
to force decryption. The behavior of `pulumi config ls <key>` is
unchanged, if the key is secure, we will prompt for a passphrase.
- `pulumi config secret <key>` now prompts for the passphrase and verifies
it before asking for the secret value.
Fixes #465
This PR enables `pulumi stack init` to work against the Pulumi Cloud. Of note, I am using the approach described in https://github.com/pulumi/pulumi-service/issues/240. The command takes an optional `--cloud` parameter, but otherwise will use the "default cloud" for the target organization.
I also went back and revised `pulumi update` to do this as well. (Removing the `--cloud` parameter.)
Note that neither of the commands will work against `pulumi-service` head, as they require some API refactorings I'm working on right now.
Fixes panic when the output from the PPC doesn't have a "text" property. (Still need to unify the way the PPC emits event data with the types that the Pulumi codebase uses internally.)
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).
As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.
We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow.
Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
We now encrypt secrets at rest based on a key derived from a user-
supplied passphrase.
The system is designed in a way such that we should be able to have a
different decrypter (either using a local key or some remote service
in the Pulumi.com case in the future).
Care is taken to ensure that we do not leak decrypted secrets into the
"info" section of the checkpoint file (since we currently store the
config there).
In addition, secrets are "pay for play": a passphrase is only needed
when dealing with a value that's encrypted. If secure config values
are not used, `pulumi` will never prompt you for a
passphrase. Otherwise, we only prompt if we know we are going to need
to decrypt the value. For example, `pulumi config <key>` only prompts
if `<key>` is encrypted, and `pulumi deploy` and friends only prompt if
you are targeting a stack that has secure configuration associated
with it.
Secure values show up as unencrypted config values inside the language
hosts and providers.
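The general shape of passphrase-based encryption at rest looks like the following sketch (KDF parameters and salt handling here are illustrative, not the CLI's exact choices):
```
package config

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"

	"golang.org/x/crypto/pbkdf2"
)

// encryptValue derives a 256-bit key from the passphrase via PBKDF2
// and seals the plaintext with AES-GCM, prepending the random nonce.
func encryptValue(passphrase, plaintext string, salt []byte) ([]byte, error) {
	key := pbkdf2.Key([]byte(passphrase), salt, 100000, 32, sha256.New)
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Decryption splits the nonce back off the front of the blob.
	return gcm.Seal(nonce, nonce, []byte(plaintext), nil), nil
}
```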
When `PULUMI_RETAIN_CHECKPOINTS` is set in the environment, also write
the checkpoint file to <stack-name>.<ext>.<timestamp>.
This ensures we have historical information about every snapshot, which
would aid in debugging issues like #451. We set this to true for our
integration tests.
Fixes #453
The event diagnostic goroutines could error out sometimes during
early program exits, due to a race between the goroutine writing to
the channel and the early exiting goroutine which closed the channel.
This change stops closing the channels entirely on the abrupt exit
paths, since it's not necessary and we want to exit immediately.
This improves a few things about assets:
* Compute and store hashes as input properties, so that changes on
disk are recognized and trigger updates (pulumi/pulumi#153).
* Issue explicit and prompt diagnostics when an asset is missing or
of an unexpected kind, rather than failing late (pulumi/pulumi#156).
* Permit raw directories to be passed as archives, in addition to
archive formats like tar, zip, etc. (pulumi/pulumi#240).
* Permit not only assets as elements of an archive's member list, but
also other archives themselves (pulumi/pulumi#280).
The existing `SnapshotProvider` interface does not sufficiently lend
itself to reliable persistence of snapshot data. For example, consider
the following:
- The deployment engine creates a resource
- The snapshot provider fails to save the updated snapshot
In this scenario, we have no mechanism by which we can discover that the
existing snapshot (if any) does not reflect the actual state of the
resources managed by the stack, and future updates may operate
incorrectly. To address this, these changes split snapshotting into two
phases: the `Begin` phase and the `End` phase. A provider that needs to
be robust against the scenario described above (or any other scenario
that allows for a mutation to the state of the stack that is not
persisted) can use the `Begin` phase to persist the fact that there are
outstanding mutations to the stack. It would then use the `End` phase to
persist the updated snapshot and indicate that the mutation is no longer
outstanding. These steps are somewhat analogous to the prepare and
commit phases of two-phase commit.
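In sketch form (names and signatures illustrative):
```
package engine

// Snapshot stands in for the persisted resource states.
type Snapshot struct{}

// SnapshotManager splits persistence into two phases, analogous to
// prepare/commit in two-phase commit.
type SnapshotManager interface {
	// BeginMutation durably records that a mutation is outstanding
	// *before* the engine touches real-world state, so a crash or a
	// failed save leaves evidence that the snapshot may be stale.
	BeginMutation() (SnapshotMutation, error)
}

// SnapshotMutation represents one outstanding mutation.
type SnapshotMutation interface {
	// End persists the updated snapshot and clears the outstanding flag.
	End(snapshot *Snapshot) error
}
```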
I sometimes revert back to some ancient version of the system, and
I figure with so many other tools using different verbs here, it's
worth at least improving our help text with the SuggestFors.
The change to use a Goroutine for pumping output causes a hang
when an error occurs. This is because we unconditionally block
on the <-done channel, even though the failure means the done
will actually never occur. This changes the logic to only wait
on the channel if we successfully began the operation in question.
Previously, config information was stored per stack. With this change,
we now allow config values which apply to every stack a program may
target.
When passed without the `-s <stack>` argument, `pulumi config`
operates on the "global" configuration. Stack specific information can
be modified by passing an explicit stack.
Stack specific configuration overwrites global configuration.
Consider the following Pulumi.yaml:
```
name: hello-world
runtime: nodejs
description: a hello world program
config:
  hello-world:config:message: Hello, from Pulumi
stacks:
  production:
    config:
      hello-world:config:message: Hello, from Production
```
This program contains a single configuration value,
"hello-world:config:message" which has the value "Hello, from Pulumi"
when the program is activated into any stack except for "production"
where the value is "Hello, from Production".
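The merge amounts to copying the global values and letting stack-specific values overwrite them; a minimal sketch, with illustrative names:
```
package config

// merge applies the precedence described above: start from the
// program's global configuration, then let stack-specific values
// overwrite any keys they share.
func merge(global, stack map[string]string) map[string]string {
	merged := make(map[string]string, len(global)+len(stack))
	for k, v := range global {
		merged[k] = v
	}
	for k, v := range stack {
		merged[k] = v // stack-specific configuration wins
	}
	return merged
}
```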
Instead of having information stored in the checkpoint file, save it
in the Pulumi.yaml file. We introduce a new section `stacks` which
holds information specific to a stack.
Next, we'll support adding configuration information that applies
to *all* stacks for a Program and allow the stack specific config to
overwrite or augment it.
This PR adds `login` and `logout` commands to the `pulumi` CLI.
Rather than requiring a user name and password like before, we instead require users to login with GitHub credentials on the Pulumi Console website. (You can do this now via https://beta.moolumi.io.) Once there, the account page will show you an "access token" you can use to authenticate against the CLI.
Upon successful login, the user's credentials will be stored in `~/.pulumi/credentials.json`. This credentials file will be automatically read with the credentials added to every call to `PulumiRESTCall`.
We use `git describe --tags` to construct a version number based on
the current version tag.
The properties VERSION (when using make) and Version (when using
MSBuild) can be explicitly set to use a fixed value instead.
Fixes #13
Previously, you had to fully qualify configuration values (e.g.
example:config:message). As a convenience, let's support adding
configuration values where the key is not a fully qualified module
member. In this case, we'll treat the key as if
`<program-name>:config:` had been prepended to it.
In addition, when we print config, shorten keys of the form
`<program-name>:config:<key-name>` to `<key-name>`.
I've updated one integration test to use the new syntax and left the
other as is to ensure both continue to work.
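A sketch of the key handling this implies; the function names are illustrative:
```
package config

import "strings"

// qualifyKey prepends `<program-name>:config:` to keys that are not
// fully qualified module members.
func qualifyKey(program, key string) string {
	if !strings.Contains(key, ":") {
		return program + ":config:" + key
	}
	return key
}

// displayKey shortens keys under the program's own config module back
// to just the key name when printing config.
func displayKey(program, key string) string {
	return strings.TrimPrefix(key, program+":config:")
}
```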
Previously we used the word "Environment" as the term for a deployment
target, but since then we've started to use the term Stack. Adopt this
across the CLI.
From a user's point of view, there are a few changes:
1. The `env` verb has been renamed to `stack`
2. The `-e` and `--env` options to commands which operate on an
environment now take `-s` or `--stack` instead.
3. Because of (2), the commands that used `-s` to display a summary now
only support passing the full option name (`--summary`).
On the local file system, we still store checkpoint data in the `env`
sub-folder under `.pulumi` (so we can reuse existing checkpoint files
that were written to the old folder).
Our checkpoint file is now in a shape such that go's built in
marshalling and unmarshalling is sufficient, so we can remove the code
we had on our decode path to do extra validation.
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to another application later).
While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.
As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).
The events we do emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
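A minimal sketch of the calling convention, using an illustrative stand-in for engine.Event:
```
package main

import "fmt"

// Event stands in for engine.Event; its shape here is illustrative.
type Event struct{ Message string }

// The engine-side contract: write events during the operation and
// close the channel when the operation is complete.
func update(events chan<- Event) {
	defer close(events) // closing signifies the operation is complete
	events <- Event{Message: "previewing changes"}
	events <- Event{Message: "performing deployment"}
}

// The caller-side contract: read from the channel until it is closed.
func main() {
	events := make(chan Event)
	go update(events)
	for e := range events {
		fmt.Println(e.Message)
	}
}
```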
The checkpoint is an implementation detail of the storage of an
environment. Instead of interacting with it, make sure that all the
data we need from it either hangs off the Snapshot or Target
objects (which you can get from a Checkpoint) and then start consuming
that data.
Internally we end up using flag to parse our command lines because of
our dependency on glog. When flag does not know about -C, if someone
passes -C before the sub command name (as is common for folks coming
from Git) the flag package terminates the program because -C was not
defined. So just teach flag this exists until we tackle #301 and rid
ourselves of glog.
Previously, you could pass an explicit path to a Pulumi program when
running preview or update and the tool would use that program when
planning or deploying, but continue to write state in the cwd. While
being able to operate on a specific package without having to cd all
over the place is nice, this specific implementation was a little
scary because it made it easier to run two different programs with the
same local state (e.g config and checkpoints) which would lead to
surprising results.
Let's move to a model that some tools have where you can pass a
working directory and the tool chdir's to that directory before
running. This way any local state that is stored will be stored
relative to the package we are operating on instead of whatever the
current working directory is.
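A minimal sketch of the new behavior; the helper here is illustrative:
```
package cmd

import (
	"fmt"
	"os"
)

// chdirIfRequested changes to the requested working directory up
// front, so any local state (config, checkpoints) is read and written
// relative to the package being operated on.
func chdirIfRequested(cwd string) error {
	if cwd == "" {
		return nil
	}
	if err := os.Chdir(cwd); err != nil {
		return fmt.Errorf("could not change to directory %s: %v", cwd, err)
	}
	return nil
}
```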
Fixes #398
Previously, the engine was concerned with maintaining information about
the currently active environment. Now, the CLI is in charge of
this. As part of this change, the engine can now assume that every
environment has a non-empty name (and I've added asserts on the
entrypoints of the engine API to ensure that any consumer of the
engine passes a non-empty environment name).
The CLI is now responsible for actually displaying information and the
engine is only concerned with getting the configuration. As part of
this change, I've removed the engine's API for displaying a single
configuration value. It can now be done in terms of getting all the
config for an environment and selecting the value the user is
interested in.
Previously the engine was concerned with displaying information about
the environment. Now the engine returns an environment info object
which the CLI uses to display environment information.
This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
This change flips the polarity on parallelism: rather than having a
--serialize flag, we will have a --parallel=P flag, and by default
we will shut off parallelism. We aren't benefiting from it at the
moment (until we implement pulumi/pulumi-fabric#106), and there are
more hidden dependencies in places like AWS Lambdas and Permissions
than I had realized. We may revisit the default, but this allows
us to bite off the messiness of dependsOn only when we benefit from
it. And in any case, the --parallel=P capability will be useful.
This change adds an optional dependsOn parameter to Resource constructors,
to "force" a fake dependency between resources. We have an extremely strong
desire to resort to using this only in unusual cases -- and instead rely
on the natural dependency DAG based on properties -- but experience in other
resource provisioning frameworks tells us that we're likely to need this in
the general case. Indeed, we've already encountered the need in AWS's
API Gateway resources... and I suspect we'll run into more especially as we
tackle non-serverless resources like EC2 Instances, where "ambient"
dependencies are far more commonplace.
This also makes parallelism the default mode of operation, and we have a
new --serialize flag that can be used to suppress this default behavior.
Full disclosure: I expect this to become more Make-like, i.e. -j 8, where
you can specify the precise width of parallelism, when we tackle
pulumi/pulumi-fabric#106. I also think there's a good chance we will flip
the default, so that serial execution is the default, so that developers
who don't benefit from the parallelism don't need to worry about dependsOn
in awkward ways. This tends to be the way most tools (like Make) operate.
This fixes pulumi/pulumi-fabric#335.
As explained in pulumi/pulumi-fabric#293, we were a little ad-hoc in
how configuration was "applied" to resource providers.
In fact, config wasn't ever communicated directly to providers; instead,
the resource providers would simply ask the engine to read random heap
locations (via tokens). Now that we're on a plan where configuration gets
handed to the program at startup, and that's that, and where generally
speaking resource providers never communicate directly with the language
runtime, we need to take a different approach.
As such, the resource provider interface now offers a Configure RPC
method that the resource planning engine will invoke at the right
times with the right subset of configuration variables filtered to
just that provider's package. This fixes pulumi/pulumi#293.
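A sketch of the filtering step; the key shape and helper name are illustrative:
```
package plugin

import "strings"

// configFor reduces the full configuration map to the subset that
// belongs to one provider's package, before invoking its Configure RPC.
func configFor(all map[string]string, pkg string) map[string]string {
	subset := make(map[string]string)
	for k, v := range all {
		if strings.HasPrefix(k, pkg+":") { // e.g. "aws:config:region"
			subset[k] = v
		}
	}
	return subset
}
```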
The existing implementation of the interface (backed by the file
system) has moved into cmd/lumi. The deployment service will start to
provide its own version.
`saveEnv` had a flag which would prevent an environment from being
overwritten if it already existed, which was only used by `lumi env
init`. Refactor the code so the check is done inside `lumi` instead of
against this API. We don't need this functionality for the service and
so requiring support for this at the API boundary for environments
feels like a bad idea.
We'd like to abstract out environment CRUD operations and I'd prefer
not to have to bake in the concept of a file-name-like thing in the
abstraction. Since we were not really using this feature many places,
let's just get rid of it.
This refactors the engine so all of the APIs on it are instance
methods on the type instead of raw methods that float around and use
data from a global engine.
A mechanical change: we remove the global `E`, make anything in
pkg/engine that interacted with it an instance method, and then deal
with the fallout.
Previously, we would throw raw args arrays across the interface and the
engine would do some additional parsing. Clean this up so we don't do
that and all the parsing stays in `lumi`.
The plan is to move all the logic that actually implements the
commands exposed by `lumi` into a helper type that can be used by both
`lumi` and the Pulumi Service. This is step one, which decouples the
implementation of these commands from the command-line parsing and the
CLI interface through which they are presented to the user.
We were passing along the entire args array to the implementation of
most commands, but the only place this was used was to pass one piece
of information to the compiler we create in one case. Let's get
explicit about the stuff we share from the CLI layer into the
implementation of the commands and make this stuff well typed instead
of a bag of strings.
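A sketch of the kind of well-typed boundary this moves toward; the struct and its fields are illustrative:
```
package engine

// DeployOptions describes explicitly what the CLI layer shares with
// the command implementations, instead of an untyped args array.
type DeployOptions struct {
	Environment string // the target environment
	Package     string // optional explicit path to the package
	DryRun      bool   // plan only; do not apply changes
	Summary     bool   // print only summary information
}
```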
Just use the args local directly instead of using the reference from
envCmdInfo. Doing this will make it easier to remove the Args field of
envCmdInfo, which I want to refactor to be more specific to the
boundary between the CLI and Planning/Deploying.
This changes a few things in the CLI, mostly just prettying it up:
* Label all steps more clearly with the kind of step. Also
unify the way we present this during planning and deployment.
* Summarize the changes that *did not* get made just as clearly
as those that did. In other words, stuff like this:
    info: 2 resources changed:
        +1 resource created
        -1 resource deleted
        5 resources unchanged
and
    info: no resources required
        5 resources unchanged
* Always print output properties when they are pertinent.
This includes creates, replacements, and updates.
* Show replacement creates and deletes very distinctly. The
create parts show up minty green and the delete parts show up
rosy red. These are the "physical" steps, compared to the
"logical" step of replacement (which remains marigold).
I still don't love where we are here. The asymmetry between
planning and deployment bugs me, and could be surprising.
("Hey, my deploy doesn't look like my plan!") I don't know
what developers will want to see here and I feel like in
general we are spewing far too much into the CLI to make it
even useful for anything but diagnosing failures afterwards.
I propose that we should do a deep dive on this during the
CLI epic, pulumi/pulumi-service#2.
This resolves pulumi/pulumi-fabric#305.
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This changes the RPC interfaces between Lumi and providers ever so
slightly, so that we can track default properties explicitly. This
is required to perform accurate diffing between inputs provided by
the developer, inputs provided by the system, and outputs. This is
particularly important for default values that may be indeterminate,
such as those we use in the bridge to auto-generate unique IDs.
Otherwise, we fail to reapply defaults correctly, and trick the
provider into thinking that properties changed when they did not.
This is a small step towards pulumi/lumi#306, in which we will defer
even more responsibility for diffing semantics to the providers.
Report an error when Lumi runtime compilation fails.
Also adds a reusable install_release.sh script to use
for installing Lumi package releases, plus expansion
of symlinks in package Makefiles.
This change brings the same typed serialization we use for RPC
to the serialization of deployments. This ensures that we get
repeatable diffs from one deployment to the next.
This change introduces a --debug option to the plan, deploy, and
destroy commands. Unlike --logtostderr, which merely hooks into the
copious Glogging that we perform (and is therefore meant for developers
of the tools themselves and not end users), --debug hooks into the
user-facing debug stream. This now includes any debug messages coming
from the resource providers as they perform their tasks.
We would like to allow developers to use async/await
on the inside (Node.js) of Lumi programs.
We now support (don't error on) usage of async/await
inside runtime callbacks in Lumi programs. If await is
used during deployment, it will trigger an error.
Also adds support for try/catch in LumiJS, as these are
used more heavily in async/await code.
Since we target Node.js environments without native support
for async/await, we also emit runtime helpers to support TS
transpilation of async/await for Node.js pre-7.6.
This adds a few missing closes for the plugin host/context. This
should fix pulumi/lumi#261. Eventually when we have more robust
nightly test options, and want to spend the time, we should think
about doing more rigorous stress testing that kills processes at
inopportune times and guarantees we don't leak. I've filed
pulumi/lumi#263 to do that.
This change fixes a few things:
* Most importantly, we need to place a leading "." in the paths
to Gometalinter, otherwise some sub-linters just silently skip
the directory altogether. errcheck is one such linter, which
is a very important one!
* Use an explicit Gometalinter.json file to configure the various
settings. This flips on a few additional linters that aren't
on by default (like line length checking). Sadly, a few that
I'd like to enable take waaaay too much time, so in the future
we may consider a nightly job (this includes code similarity,
unused parameters, unused functions, and others that generally
require global analysis).
* Now that we're running more, however, linting takes a while!
The core Lumi project now takes 26 seconds to lint on my laptop.
That's not terrible, but it's long enough that we don't want to
do the silly "run them twice" thing our Makefiles were previously
doing. Instead, we shall deploy some $$($${PIPESTATUS[1]}-1))-fu
to rely on the fact that grep returns 1 on "zero lines".
* Finally, fix the many issues that this turned up.
I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
We were not propagating the error from `deployLatest` through
to the CLI error result. Despite our recent efforts to integrate
gometalinter, there were also several additional similar cases of
ignored error results reported by `errcheck`. Not yet clear why
these are not being reported via gometalinter.
Fixes #262.
After 233c5a8 landed, I noticed there are a few things to be fixed up:
* Run gometalinter in all the right places. We need to run both in
lint and lint_quiet targets. I've also cleaned up some of the logic
around what to suppress so there's less repetition.
* We currently @ meaningful commands, which is unfortunate, since it
makes debugging Makefiles tough (especially when looking at CI build
logs). Going forward, we should only use @ for meaningless commands,
like @echo.
* The AWS project wasn't actually running tslint, because it needs to
say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
The current script of `tslint lib/aws/pack/...` wasn't actually
running lint, hence we missed a lot of AWS lint issues.
* Fix up the issues that these fixes uncovered. Mostly err shadowing.
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface. In summary:
* Instead of using the NullSource for destructions -- which
doesn't hook up an interpreter and so any reads of configuration
variables will fail -- we will enlighten the EvalSource to know
how to orchestrate destruction interpretation. The primary
difference is that we don't actually run the code, but *we do*
perform all of the necessary configuration and variable init.
* Associate the active interpreter with the plugin context as
we are executing, so that the host object can actually read the
state from the heap when asked to do so by attached plugins.
* Rename anything "engine" related to use the term "host"; this
avoids introducing unnecessary new terminology.
* Add a new pkg/resource/provider/ package where we can begin
consolidating helper functionality for resource providers.
Right now, this includes a wrapper interface atop the gRPC
machinery necessary to contact the host, in addition to a
Main function that hides some boilerplate entrypoint code.
* Add a rpcutil.IsBenignCloseErr routine to let us ignore
"benign" gRPC errors that are knowingly returned at shutdown.
This commit completes pulumi/lumi#117.
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.
The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible. We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
This change implements the `get` function for resources. Per pulumi/lumi#83,
this allows Lumi scripts to actually read from the target environment.
For example, we can now look up a SecurityGroup from its ARN:
    let group = aws.ec2.SecurityGroup.get(
        "arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
The returned object is a fully functional resource object. So, we can then
link it up with an EC2 instance, for example, in the usual ways:
    let instance = new aws.ec2.Instance(..., {
        securityGroups: [ group ],
    });
This didn't require any changes to the RPC or provider model, since we
already implement the Get function.
There are a few loose ends; two are short term:
1) URNs are not rehydrated.
2) Query is not yet implemented.
One is mid-term:
3) We probably want a URN-based lookup function. But we will likely
wait until we tackle pulumi/lumi#109 before adding this.
And one is long term (and subtle):
4) These amount to I/O and are not repeatable! A change in the target
environment may cause a script to generate a different plan
intermittently. Most likely we want to apply a different kind of
deployment "policy" for such scripts. These are inching towards the
scripting model of pulumi/lumi#121, which is an entirely different
beast than the repeatable immutable infrastructure deployments.
Finally, it is worth noting that with this, we have some of the fundamental
underpinnings required to finally tackle "inference" (pulumi/lumi#142).
This change implements showing a summary of the current environment.
All you need to do is run
$ lumi env
and the current environment's information will be printed.
This makes it convenient to grab resource information that might be
required, for instance, to correlate with logs (e.g., lambda ARNs).
Eventually, as per pulumi/lumi#184, we want to print details about
all of the resources too.
Adds an integration test that runs the following commands on the
AWS webserver example, failing if any command returns an error
code:
* lumijs
* lumi env init
* lumi config
* lumi plan
* lumi deploy
* lumi destroy
* lumi env rm
Also ensures that plan and deploy failures propagate errors through
to error codes at the CLI.
The recent change to run the interpreter and planner on separate goroutines
created the need to perform rendezvous-style synchronization between them.
Although the case of an invoked function properly tore down the synchronization
by communicating the error, we seldom directly invoke functions for JavaScript
programs because of the way module entrypoint code ends up in initializers.
This requires that we propagate errors correctly out of module and class
initializers, in the standard way, so that the unwind makes its way to the top.
This fixes pulumi/lumi#246.
We recently changed the Resource base type to have no constructor,
rather than a manual empty constructor. This ought to work just fine.
The LumiJS compiler indeed generates a constructor, however, it is
missing a body and when the interpreter tries to invoke it, we crash
with a nil reference panic. The runtime actually tolerates missing
constructors entirely, although the way LumiJS binds super calls
doesn't tolerate the missing base constructor. This change simply
generates such constructors in LumiJS with empty bodies.
In addition, I've added an error that will catch the empty body
problem during binding, since technically speaking, all functions
must have bodies. (Our runtime happens to support the notion of
"abstract", however, so we only fire the error on concrete functions.)
When a class has no constructor, we automatically generate an empty
constructor in the Lumipack.
This allows us to adhere to tslint rule suggesting leaving off empty
constructors with default signatures.
This change updates the ID/output propagation logic to properly handle
the case of replacements, in addition to accurately conveying the fact
that an update may change the values of output properties (but not the ID).
Also fixes a formatting issue with the replacement diffing displays.
This change introduces an OpSame planning step. The reason we need
this is so that we can apply the necessary output properties, including
the ID, even as we are simply walking the plan (i.e., when we aren't
actually performing a deployment). This ensures that the object state
evolves as required to let reads of output properties propagate in the
ways necessary to reproduce past executions of the program.
* Assert new things in new places.
* Log more interesting tidbits during evaluation.
* Invoke the OnStart hook before triggering initializers.
* Tolerate nil prev snapshots during deletion calculation.
* Handle and serialize missing resource IDs as output props.
* Return "done" flag from Rendezvous.Meet.
This change refactors a number of aspects of the CLI's treatment of
steps, in line with the new scheme, and a number of other miscellaneous
and minor fixes. It also regenerates all RPC code impacted by recent renames.
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
Boatloads of code is now changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.
The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
This change, part of pulumi/lumi#90, overhauls quite a bit of the
core resource, planning, environments, and related areas.
The biggest amount of movement comes from the splitting of pkg/resource
into multiple sub-packages. This results in:
- pkg/resource: just the core resource data structures.
- pkg/resource/deployment: all planning and deployment logic.
- pkg/resource/environment: all environment, configuration, and
serialized checkpoint structures and logic.
- pkg/resource/plugin: all dynamically loaded analyzer and
provider logic, including the actual loading and RPC mechanisms.
This also splits the resource abstraction up. We now have:
- resource.Resource: a shared interface.
- resource.Object: a resource that is connected to a live object
that will periodically observe mutations due to ongoing
evaluation of computations. Snapshots of its state may be
taken; however, this is purely a "pre-planning" abstraction.
- resource.State: a snapshot of a resource's state that is frozen.
In other words, it is no longer connected to a live object.
This is what will store provider outputs (ID and properties),
and is what may be serialized into a deployment record.
The branch is in a half-baked state as of this change; more changes
are to come...
For lambdas which will execute at runtime,
we want to allow them to reference Node.js
global variables, like `console`.
This change makes Lumijs generated IL
incrementally more dynamic by preferring to
generate `TryLoadDynamic` over `LoadLocation`
for references to global variables (except for
references to imports).
Also introduces `console.log` in LumiJS, though
it is not yet attached to a Lumi global environment.
Fixes #174.
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment). It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
During this, I took the opportunity to also clean up a few other things
that had been bugging me. Namely, the presence of `mapper.Object` was
always error prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI. Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type. Even better, we
no longer require resource providers to deal with the mapper
infrastructure. Instead, the `Check` function can simply return an
array of errors. It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.
As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.
This completes pulumi/lumi#183.
This changes the resource model to persist input and output properties
distinctly, so that when we diff changes, we only do so on the programmer-
specified input properties. This eliminates problems when the outputs
differ slightly; e.g., when the provider normalizes inputs, adds its own
values, or fails to produce new values that match the inputs.
This change simultaneously makes progress on pulumi/lumi#90, by beginning
tracking the resource objects implicated in a computed property's value.
I believe this fixes both #189 and #198.
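A minimal sketch of the model, with illustrative types:
```
package resource

import "reflect"

// State sketches persisting inputs and outputs distinctly.
type State struct {
	Inputs  map[string]interface{} // exactly what the program specified
	Outputs map[string]interface{} // what the provider reported back
}

// inputsChanged diffs only the programmer-specified inputs, so
// provider-normalized or provider-added output values can never
// produce a spurious change.
func inputsChanged(before, after State) bool {
	return !reflect.DeepEqual(before.Inputs, after.Inputs)
}
```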
This change eliminates the use of nonstandard tools in our build:
* `go test` automatically uses `GOMAXPROCS` for its parallelism
setting. In modern Go, this is now set to the number of processors.
So, there is no need to set it explicitly using `nproc`.
* We can avoid `realpath` in the `lumijs` executable by making it
a Node.js file and using a relative require import of main.
* We can avoid `realpath` in our Makefiles by just using `pwd`.
* Makeify more; add a "full build" target
This change uses make for more of our tree. Namely, the AWS provider
and LumiJS compilers each now use make to build and/or install them.
Not only does this bring about some consistency to how we build and
test things, but also made it easy to add a full build target:
$ make all
This target will build, test, and install the core Go tools, the LumiJS
compiler, and the AWS provider, in that order.
Each can be made in isolation, however, which ensures that the inner
loop for those is fast and so that, when it comes to finishing
pulumi/lumi#147, we can easily split them out and make from the top.
There are a few things that annoyed me about the way our CLI works with
directories when loading packages. For example, `lumi pack info some/pack/dir/`
never worked correctly. This is unfortunate when scripting commands.
This change fixes the workspace detection logic to handle these cases.
This is a minor refactoring to introduce a ProviderHost interface
that is associated with the context and can be swapped in and out for
custom plugin behavior. This is required to write tests that mock
certain aspects, like loading packages from the filesystem.
In theory, this change incurs zero behavioral changes.
This change fixes up a few things so that updates correctly deal
with output properties. This involves a few things:
1) All outputs stored on the pre snapshot need to get propagated
to the post snapshot during planning at various points. This
ensures that the diffing logic doesn't need to be special cased
everywhere, including both the Lumi and the provider sides.
2) Names are changed to "input" properties (using a new `lumi` tag
option, `in`). These are properties that providers are expected
to know nothing about, which we must treat with care during diffs.
3) We read back properties, via Get, after doing an Update just like
we do after performing a Create. This ensures that if an update
has a cascading impact on other properties, it will be detected.
4) Inspecting a change, prior to updating, must be done using the
computed property set instead of the real one. This is to avoid
mutating the resource objects ahead of actually applying a plan,
which would be wrong and misleading.
This change skips printing output<T> properties as we perform a
deployment, instead showing the real values inline after the resource
has been created. (output<T> is still shown during planning, of course.)
The change to flow logging to plugins is nice, however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors. After this change, we will only flow if
--logflow is passed, e.g. as in
$ lumi --logtostderr --logflow -v=9 deploy ...
This change prepares for integrating more planning and deployment logic
closer to the runtime itself. For historical reasons, we ended up with these
in the env.go file which really has nothing to do with deployments anymore.
This change introduces the notion of a computed versus an output
property on resources. Technically, output is a subset of computed,
however it is a special kind that we want to treat differently during
the evaluation of a deployment plan. Specifically:
* An output property is any property that is populated by the resource
provider, not code running in the Lumi type system. Because these
values aren't available during planning -- since we have not yet
performed the deployment operations -- they will be latent values in
our runtime and generally missing at the time of a plan. This is no
problem and we just want to avoid marshaling them in inopportune places.
* A computed property, on the other hand, is a different beast altogether.
Although it's true that one of these is missing a value -- by virtue of the fact
that they too are latent values, bottoming out in some manner on an
output property -- they will appear in serializable input positions.
Not only must we treat them differently during the RPC handshake and
in the resource providers, but we also want to guarantee they are gone
by the time we perform any CRUD operations on a resource. They are
purely a planning-time-only construct.
We need to smuggle metadata from the resource IDL all the way through
to the runtime, so that it knows which things are output properties. In
order to do this, we'll leverage decorators and the support for serializing
them as attributes. This change adds support for the various kinds
(class, property, method, and parameter), in addition to test cases.
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.
In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways. Namely,
operations against Latent<T>s generally produce new Latent<U>s.
During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values. An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange. As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values. My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.
For now, using an unresolved Latent<T> in a conditional will lead
to an error. See pulumi/lumi#67. Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support. That is a missing 1/3.
Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values. Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
Adds support for global secondary indexes on DynamoDB Tables.
Also adds a HashSet API to the AWS provider library. This handles part of #178,
providing a standard way for AWS provider implementations to compute set-based
diffs. This new API is used in both aws.dynamodb.Table and aws.elasticbeanstalk.Environment
currently.
Resolves #137.
This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.
A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.
LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.
Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
This change keeps the lumi prefix on our CLI tools.
As @lukehoban pointed out in person, as soon as we do pulumi/coconut#98,
most people (other than compiler authors themselves) won't actually be
typing the commands. And, furthermore, the commands aren't all that bad.
Eventually I assume we'll want something like `lumi-js`, or
`lumi-js-compiler`, so that binaries are discovered dynamically in a way
that is extensible for future languages. We can tackle this during #98.
This change implements property accessors (getters and setters).
The approach is fairly basic, but is heavily inspired by the ECMAScript5
approach of attaching a getter/setter to any property slot (even if we don't
yet fully exploit this capability). The evaluator then needs to track and
utilize the appropriate accessor functions when loading locations.
This change includes CocoJS support and makes a dent in pulumi/coconut#66.
This change, part of pulumi/coconut#62, adds support for ECMAScript
local functions. This leverages the recent support for lambdas.
The change also adds some new test cases for the various forms.
Here are some examples of supported forms:
    function outer() {
        // simple named inner function:
        function inner1() { ... };
        // anonymous inner function (just a lambda):
        let inner2 = function() { ... };
        // named and bound inner function:
        let inner3 = function inner4() { ... };
    }
These merely compile into lambdas that have been bound to local
variables with the appropriate names.
The previous shape of SequenceExpression only permitted expressions
in the sequence. This is pretty common in most ILs, however, it usually
leads to complicated manual spilling in the event that a statement is needed.
This is often necessary when, for example, a compiler is deeply nested in some
expression production, and then realizes the code expansion requires a
statement (e.g., maybe a new local variable must be declared, etc).
Instead of requiring complicated code-gen, this change permits SequenceExpression
to contain an arbitrary mixture of expression/statement prelude nodes, terminating
with a single, final Expression which yields the actual expression value. The
runtime bears the burden of implementing this which, frankly, is pretty trivial.
This change recognizes and emits lambdas correctly in CocoJS (as part
of pulumi/coconut#62). The existing CocoIL representation for lambdas
worked just fine for functions, lambdas, and local functions. There
still isn't runtime support, but that comes next.
I've tripped over pulumi/coconut#141 a few times now, particularly with
the sort of dynamic payloads required when creating lambdas and API gateways.
This change implements support for computed property initializers.
This change correctly implements package/module resolution in CIDLC.
For now, this only works for intra-package imports, which is
sufficient. Eventually we will need to support cross-package imports
(see pulumi/coconut#138).
* Use --out-rpc, rather than --out-provider, since rpc/ is a peer to provider/.
* Use strongly typed tokens in more places.
* Append "rpc" to the generated RPC package names to avoid conflicts.
* Change the Check function to return []mapper.FieldError, rather than
mapper.DecodeError, to make the common "no errors" case easier (and to eliminate
boilerplate resulting from needing to conditionally construct a mapper.DecodeError).
* Rename the diffs argument to just diff, matching the existing convention.
* Automatically detect changes to "replaces" properties in the PreviewUpdate
function. This eliminates tons of boilerplate in the providers and handles the
90% common case for resource recreation. It's still possible to override the
PreviewUpdate logic, of course, in case there is more sophisticated recreation
logic necessary than just whether a property changed or not.
* Add some comments on some generated types.
* Generate property constants for the names as they will appear in weakly typed
property bags. Although the new RPC interfaces are almost entirely strongly
typed, in the event that diffs must be inspected, this often devolves into using
maps and so on. It's much nicer to say `if diff.Changed(SecurityGroup_Description)`
than `if diff.Changed("description")` (and catches more errors at compile-time).
* Fix resource ID generation logic to properly fetch the Underlying() type on
named types (this would sometimes miss resources during property analysis, emitting
for example `*VPC` instead of `*resource.ID`).
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.
I've been kicking the tires with this locally enough to checkpoint the
current version. There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
This change introduces TryLoadDynamicExpression. This is similar to
the existing LoadDynamicExpression opcode, except that it will return
null in response to a missing member (versus the default of raising
an exception). This is to enable languages like JavaScript to encode
operations properly (which always yields undefined/nulls), while still
catering to languages like Python (which throw exceptions).
This change introduces decorator support for CocoJS and the corresponding
IL/AST changes to store them on definition nodes. Nothing consumes these
at the moment, however, I am looking at leveraging this to indicate that
certain program fragments are "code" and should be serialized specially
(in support of Functions-as-lambdas).
This uses %q to quote property values when printing them. This ensures
that control characters are escaped (like \n), in addition to replacing any
unprintable characters with the appropriate escape sequence. Both show up
nicer in the output for planning commands, etc.
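A quick illustration of the difference:
```
package main

import "fmt"

// %q escapes control characters and replaces unprintable characters
// with the appropriate escape sequences.
func main() {
	v := "line one\nline two\x07"
	fmt.Printf("%v\n", v) // raw: embeds a real newline and a bell
	fmt.Printf("%q\n", v) // quoted: "line one\nline two\a"
}
```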
This change introduces the scaffolding for a new cmd/cocogo tool,
as part of pulumi/coconut#133. The idea here is to do some very
rudimentary code-gen on a subset of Go, to ease the task of writing
providers. The README describes this in more detail. Eventually
this will presumably expand to being a peer language to CocoPy,
etc., in that real code can be written, but for now it's mostly IDL.
At the moment, the tool really doesn't do anything useful, other
than loading up, parsing, semantically validating, and spewing
some information about the Go packages passed at the command line.
This change restructures the overall structure for commands so that
all top-level tools are in the cmd/ directory, alongside the primary
coco command. This is more "idiomatic Go" in its layout, and makes
room for additional command line tools (like cocogo for IDL).
The old model for imports was to use top-level declarations on the
enclosing module itself. This was a laudable attempt to simplify
matters, but just doesn't work.
For one, the order of initialization doesn't precisely correspond
to the imports as they appear in the source code. This could incur
some weird module initialization problems that lead to differing
behavior between a language and its Coconut variant.
But more pressing as we work on CocoPy support, it doesn't give
us an opportunity to dynamically bind names in a correct way. For
example, "import aws" now needs to actually translate into a variable
declaration and assignment of sorts. Furthermore, that variable name
should be visible in the environment block in which it occurs.
This change switches imports to act like statements. For the most
part this doesn't change much compared to the old model. The common
pattern of declaring imports at the top of a file will translate to
the imports happening at the top of the module's initializer. This
has the effect of initializing the transitive closure just as it
happened previously. But it enables alternative models, like imports
inside of functions, and -- per the above -- dynamic name binding.
This rearranges the way dynamic loads work a bit. Previously, they
required an object, and did a dynamic lookup in the object's property
map. For real dynamic loads -- of the kind Python uses, obviously,
but also ECMAScript -- we need to search the "environment".
This change searches the environment by looking first in the lexical
scope in the current function. If a variable exists, we will use it.
If that misses, we then look in the module scope. If a variable exists
there, we will use it. Otherwise, if the variable is used in a non-lval
position, a dynamic error will be raised ("name not declared"). If
an lval, however, we will lazily allocate a slot for it.
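A minimal sketch of that search order, with illustrative stand-ins for the evaluator's real structures:
```
package eval

import "errors"

// scope is an illustrative stand-in for a lexical or module scope.
type scope map[string]interface{}

// loadDynamic searches the lexical scope, then the module scope; on a
// miss, it fails for reads but lazily allocates a slot for lvals.
func loadDynamic(lexical, module scope, name string, lval bool) (interface{}, error) {
	if v, ok := lexical[name]; ok {
		return v, nil // found in the current function's lexical scope
	}
	if v, ok := module[name]; ok {
		return v, nil // found in the module scope
	}
	if !lval {
		return nil, errors.New("name not declared: " + name)
	}
	lexical[name] = nil // lval position: lazily allocate a slot
	return nil, nil
}
```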
Note that Python doesn't use block scoping in the same way that most
languages do. This behavior is simply achieved by Python not emitting
any lexically scoped blocks other than at the function level.
This doesn't perfectly achieve the scoping behavior, because we don't
yet bind every name in a way that they can be dynamically discovered.
The two obvious cases are class names and import names. Those will be
covered in a subsequent commit.
Also note that we are getting lucky here that class static/instance
variables aren't accessible in Python or ECMAScript "ambiently" like
they are in some languages (e.g., C#, Java); as a result, we don't need
to introduce a class scope in the dynamic lookup. Some day, when we
want to support such languages, we'll need to think about how to let
languages control the environment probe order; for instance, perhaps
the LoadDynamicExpression node can have an "environment" property.
We already had the ability to manually execute a CocoPack and generate
a DOT from its object graph. However, for demo purposes we also want
to be able to generate one from the plan. This adds a --dot flag to plan.
This change eliminates the need to constantly type in the environment
name when performing major commands like configuration, planning, and
deployment. It's probably due to my age, however, I keep fat-fingering
simple commands in front of investors and I am embarrassed!
In the new model, there is a notion of a "current environment", and
I have modeled it kinda sorta just like Git's notion of "current branch."
By default, the current environment is set when you `init` something.
Otherwise, there is the `coco env select <env>` command to change it.
(Running this command w/out a new <env> will show you the current one.)
The major commands `config`, `plan`, `deploy`, and `destroy` will prefer
to use the current environment, unless it is overridden by using the
--env flag. All of the `coco env <cmd> <env>` commands still require the
explicit passing of an environment which seems reasonable since they are,
after all, about manipulating environments.
As part of this, I've overhauled the aging workspace settings cruft,
which had fallen into disrepair since the initial prototype.
This change adds a `coco plan` command which is simply a shortcut
to the more verbose `coco deploy --dry-run`. This will make demos
flow nicer and elevates planning, an important activity, to a more
prominent position. The `--dry-run` (aka `-n`) flag is still there.
This change also renames `coco env destroy` to just `coco destroy`.
This is consistent with deploy and plan being at the top-level. We
now use `coco env` purely for environment management commands (init,
config, rm, etc).
The word "fatal" makes it look like Coconut did something wrong, when in fact,
these messages are used to convey misuse of the command/argument/etc.
This change adds the ability to specify analyzers in two ways:
1) By listing them in the project file, for example:
analyzers:
- acmecorp/security
- acmecorp/gitflow
2) By explicitly listing them on the CLI, as a "one off":
$ coco deploy <env> \
--analyzer=acmecorp/security \
--analyzer=acmecorp/gitflow
This closes out pulumi/coconut#119.
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119. In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource. The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure properties of them are correct, such as checking style,
security, etc. rules). These run simultaneously with overall checking.
Analyzers are loaded as plugins just like providers are. The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.
This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
Deployments are central to the entire system; although technically
a deployment is indeed associated with an environment, the deployment
is the focus, not the environment, so it makes sense to put the
deployment command at the top-level.
Before, you'd say:
$ coco env deploy production
And now, you will say:
$ coco deploy production
This change organizes all package management commands underneath
the top-level subcommand `nut`; so, for example:
$ nut get ...
$ nut eval ...
and so on
This change implements a very basic `coco env config` command, that
lets you read, set, or unset configuration values for an environment.
For a single environment, these four usage styles are supported:
# query all values in a given environment <env>:
$ coco env config <env>
# query a single value with key <key> in a given environment <env>:
$ coco env config <env> <key>
# set a single value with key <key> and value <value> in <env>:
$ coco env config <env> <key> <value>
# unset a single value with key <key> in the environment <env>:
$ coco env config <env> <key> --unset
This is a vast subset of pulumi/coconut#113.
This changes a few naming things:
* Rename "husk" to "environment" (`coco env` for short).
* Rename NutPack/NutIL to CocoPack/CocoIL.
* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.
* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.
* Rename the package asset directory from nutpack/ to .coconut/.
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties. This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caught early
before it's too late). My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.
This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet. I'll add a work item now for that...
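A sketch of the hook's shape; these names are illustrative rather than the exact RPC definitions:
```
package plugin

// CheckFailure describes one validation failure found by Check.
type CheckFailure struct {
	Property string // the property at fault, if known
	Reason   string // a diagnostic explaining the failure
}

// Provider sketches the new hook: Check runs before any modifications,
// so failures are caught before it's too late to back out.
type Provider interface {
	// Check validates property values, e.g. that an AWS EC2 instance's
	// AMI is actually available within the target region.
	Check(resourceType string, props map[string]interface{}) ([]CheckFailure, error)
}
```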
This change is mostly just a rename of Moniker to URN. It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn🥥<name>", where <name> is the same as the prior Moniker.
This is a minor step that helps to prepare us for pulumi/coconut#109.
This just orders the output more nicely; previously, "step #n failed"
would come *before* the error detailing the reason. This was a bit
confusing. This change reorders them so the error reads more naturally.
This change tracks which updates triggered a replacement. This enables
better output and diagnostics. For example, we now colorize those
properties differently in the output. This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.