In our existing code, we only use the input state for old and new
properties. This is incorrect and I'm astonished we've been flying
blind for so long here. Some resources require the output properties
from the prior operation in order to perform updates. Interestingly,
we did correctly use the full synthesized state during deletes.
I ran into this with the AWS CloudFront Distribution resource,
which requires the ETag from the prior operation in order to
successfully apply any subsequent operations.
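As a rough sketch (illustrative types and names only, not the actual engine or provider interfaces), the update path should hand the provider the full prior state, outputs included, rather than just the old inputs:

```go
package engine

// PropertyMap stands in for the engine's property bag (illustrative only).
type PropertyMap map[string]interface{}

// priorState merges the last operation's inputs and outputs so an update sees
// the full synthesized state (e.g. CloudFront's ETag), the same way the
// delete path already does.
func priorState(oldInputs, oldOutputs PropertyMap) PropertyMap {
	olds := make(PropertyMap, len(oldInputs)+len(oldOutputs))
	for k, v := range oldInputs {
		olds[k] = v
	}
	for k, v := range oldOutputs {
		olds[k] = v // output-only properties like "etag" come from here
	}
	return olds
}
```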
We were previously calling configure on each package once per time it was mentioned in the config. We only need to call it once ever, as we pass the full bag of relevant config through on that one call.
It's legal and possible for undefined properties to show up in
objects, since that's an idiomatic JavaScript way of initializing
missing properties. Instead of failing for these during deployment,
we should simply skip marshaling them to Terraform and let it do
its thing as usual. This came up during our customer workload.
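A simplified sketch of the idea (not the actual bridge code): drop nil entries during marshaling instead of failing on them:

```go
package tfbridge

// marshalProperties converts a resource's property bag into the map handed to
// Terraform, skipping entries that arrived as undefined from JavaScript
// (which surface here as nil) instead of treating them as an error.
// Sketch only; the real marshaler handles many more cases.
func marshalProperties(props map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(props))
	for k, v := range props {
		if v == nil {
			continue // undefined property: leave it out entirely
		}
		out[k] = v
	}
	return out
}
```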
Previously, we stored configuration information in the Pulumi.yaml
file. This was a change from the old model where configuration was
stored in a special section of the checkpoint file.
While doing things this way has some upsides, like being able to flow
configuration changes with your source code (e.g. fixed values for a
production stack that version with the code), it caused some friction
for the local development scenario. In this case, setting
configuration values would leave pending changes in Pulumi.yaml, and if you
didn't want to publish these changes, you'd have to remember to remove
them before committing. It was also problematic for our examples, where
it was not clear if we wanted to actually include values like
`aws:config:region` in our samples. Finally, we found that for our
own Pulumi service, we'd have values that would differ across each
individual dev stack, and publishing these values to a global
Pulumi.yaml file would just be adding noise to things.
We now adopt a hybrid model, where by default configuration is stored
locally, in the workspace's settings per project. A new flag, `--save`,
tells commands to instead operate on the configuration information
stored in Pulumi.yaml.
With this change, we have four "slots" configuration
values can end up in:
1. In the Pulumi.yaml file, applied to all stacks
2. In the Pulumi.yaml file, applied to a specific stack
3. In the local workspace.json file, applied to all stacks
4. In the local workspace.json file, applied to a specific stack
When computing the configuration information for a stack, we apply
configuration in the above order, overriding values as we go
along.
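Conceptually, the merge works like this (a sketch with made-up names, not the actual CLI code):

```go
package config

// mergeConfig applies the four "slots" in order; a value set in a later slot
// overrides the same key from an earlier one. Names are illustrative.
func mergeConfig(projectAll, projectStack, workspaceAll, workspaceStack map[string]string) map[string]string {
	merged := map[string]string{}
	for _, slot := range []map[string]string{projectAll, projectStack, workspaceAll, workspaceStack} {
		for k, v := range slot {
			merged[k] = v
		}
	}
	return merged
}
```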
We also invert the default behavior of the `pulumi config` commands so
they operate on a specific stack (i.e. how they did before
e3610989). If you want to apply configuration to all stacks, `--all`
can be passed to any configuration command.
This change introduces an abstraction for a `backend` which manages
the implementation of some CLI commands. As part of defining the
interface, we introduce a new local backend implementation that just
uses data local to the machine.
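The shape of the interface is roughly as follows (method names and signatures here are a sketch, not necessarily what the change actually defines):

```go
package backend

// Backend abstracts the implementation of the CLI commands that operate on a
// stack; a local implementation and a pulumi.com implementation both satisfy
// it. Sketch only: the actual interface may differ.
type Backend interface {
	CreateStack(stackName string) error
	RemoveStack(stackName string, force bool) (bool, error)
	ListStacks() ([]string, error)
	Preview(stackName string, debug bool) error
	Update(stackName string, debug bool) error
	Destroy(stackName string, debug bool) error
}
```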
This will let us share argument parsing and some display information
between the local case and the pulumi.com case in the CLI. We can
continue to refine this interface over time (e.g. today the
Destroy/Update/Preview implementations write output directly; instead
they should return strongly typed data that the CLI knows how to
display, unified across Pulumi.com deploys and local deploys).
But this is a good first step.
We have some lint debt (around exported functions being
underdocumented) so we have a custom target that ignores some lint
warnings. However, that target (which is the only lint target that
needs to be clean to check in) is called `lint_quiet` instead of
`lint`, which is our normal linting target.
Rename the target so your fingers can learn that `make lint` is always
the right way to run linting before checkin.
Add `Name` (Pulumi project name) and `Runtime` (Pulumi runtime, e.g. "nodejs") properties to `UpdateProgramRequest`, as they are now required.
The long story is that as part of the PPC enabling destroy operations, data that was previously obtained from `Pulumi.yaml` is now required as part of the update request. This PR simply provides that data from the CLI.
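For reference, the shape of the request is along these lines (only `Name` and `Runtime` are the fields this change adds; everything else, including the JSON tags, is illustrative):

```go
package apitype

// UpdateProgramRequest sketch: the two new required fields, with the other
// existing fields elided. JSON tags are illustrative.
type UpdateProgramRequest struct {
	// Name is the Pulumi project name, taken from Pulumi.yaml.
	Name string `json:"name"`
	// Runtime is the Pulumi runtime, e.g. "nodejs".
	Runtime string `json:"runtime"`
	// ... existing fields elided ...
}
```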
This is the final step of a line of breaking changes.
pulumi-ppc 8ddce15b29
pulumi-service 8941431d57 (diff-05a07bc54b30a35b10afe9674747fe53)
PR #501 regressed this test due to a change in its error message. We
should consider updating the rest of the `stack` commands to use a
similar message (rather than the "file not found" they currently emit).
- `go build` handles appending .exe to the built binary, so we need not do
it ourselves. In fact, when we did, we generated a binary called
`pulumi.exe.exe`, which is not what we wanted.
- Remove the development versions of the langhost and dynamic provider
from the `<root>/node_modules/pulumi` folder. The `dist` version gets
copied into bin.
- Add the dummy_argument workaround to the dist version of the langhost.
On Windows, we have to indirect through a batch file to launch plugins,
which means when we go to close a plugin, we only kill the cmd.exe that is
running the batch file and not the underlying node process. This
prevents `pulumi` from exiting cleanly. So on Windows, we also kill any
direct children of the plugin process.
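One way to picture it (a sketch, not the exact implementation; the real change enumerates the plugin's direct children) is to let `taskkill` take down the whole tree rooted at the cmd.exe wrapper:

```go
//go:build windows

package plugin

import (
	"os/exec"
	"strconv"
)

// killPluginTree terminates the cmd.exe wrapper and everything under it, so
// the node.exe launched by the batch file dies too and `pulumi` can exit
// cleanly. Sketch only: /T kills the tree, /F forces termination.
func killPluginTree(pid int) error {
	return exec.Command("taskkill", "/T", "/F", "/PID", strconv.Itoa(pid)).Run()
}
```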
Fixes #504
This PR removes three command line parameters from Cloud-enabled Pulumi commands (`update` and `stack init`). Previously we required users to pass in `--organization`, `--repository`, and `--project`. But with the recent "Pulumi repository" changes, we can now get the organization and repository from the Pulumi workspace, and the project name from `Pulumi.yaml`.
This PR also fixes a bug that blocked the Cloud-enabled CLI path: `update` was getting the stack name via `explicitOrCurrent`, but that fails if the current stack (e.g. the one just initialized in the cloud) doesn't exist on the local disk.
As for better handling of "current stack" and Cloud-enabled commands, https://github.com/pulumi/pulumi/pull/493 and the PR to enable `stack select`, `stack rm`, and `stack ls` do a better job of handling situations like this.
Previously, we would CD into the directory of the launch script and
invoke node.exe from there. We did this because the require statement
was a relative path and so we needed to be in the langhost directory for
things to work.
This behavior differs from how we launch things on *nix and was causing
some issues with relative paths, since the CWD would now differ between
Windows and *nix. So instead we construct a full path for our require
statements and don't cd anymore. The only tricky thing is to change path
separators from \ to / when computing the path to the root folder we
should do our require from.
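For illustration (a sketch; the actual change may build the path differently), converting separators is just a matter of normalizing the absolute path before it goes into the require statement:

```go
package nodehost

import "path/filepath"

// requirePath returns an absolute path to the langhost's root folder with
// forward slashes, suitable for use in a require() statement on Windows.
// Sketch only; names are illustrative.
func requirePath(root string) (string, error) {
	abs, err := filepath.Abs(root)
	if err != nil {
		return "", err
	}
	return filepath.ToSlash(abs), nil
}
```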
This PR adds integration tests for exercising `pulumi init` and the `pulumi stack *` commands. The only functional change is merging in https://github.com/pulumi/pulumi/pull/492 , which I found while writing the tests and (of course 😁 ) wrote a regression for.
To do this I introduce a new test driver called `PulumiProgram`. This is different from the one found in the `testing/integration` package in that it doesn't try to prescribe a workflow. It really just deals in executing commands and confirming strings are in the output.
While it doesn't hurt to have more tests for `pulumi`, my motivation here was so that I could reuse these to ensure I keep the same behavior for my pending PR that implements Cloud-enabled variants of some of these commands.
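In spirit, the driver boils down to something like this (a hypothetical helper, simplified from whatever `PulumiProgram` actually exposes):

```go
package integration

import (
	"os/exec"
	"strings"
	"testing"
)

// runAndCheck runs a pulumi command in dir and confirms the expected string
// appears in its combined output. Sketch only; the real PulumiProgram driver
// has more plumbing (environment setup, cleanup, etc.).
func runAndCheck(t *testing.T, dir, expected string, args ...string) {
	cmd := exec.Command("pulumi", args...)
	cmd.Dir = dir
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("pulumi %v failed: %v\n%s", args, err, out)
	}
	if !strings.Contains(string(out), expected) {
		t.Fatalf("expected output to contain %q, got:\n%s", expected, out)
	}
}
```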
This change remembers that we failed due to an uncaught exception,
and defers the process.exit(1) until we actually reach the process's
exit event. This ensures that we drain the message queue before
exiting, which ensures that outbound messages actually reach their
destination.
- When looking for a `.pulumi` folder to reuse, only consider ones
that have a `settings.json` in them.
- Make the error message when there is no repository a little more
informative by telling someone to run `pulumi init`.