Since resource registration is an async operation, we had been using
Task<T> to represent output properties. Awaiting the output property
would give you the ground value of the output once the registration
completed.
However, there are cases where we may not have a known value. When
previewing, the resource provider may not be able to give back known
values for all output properties (for example, when creating a new
resource, the Id property is unknown during a preview because the
resource has not actually been created yet). To handle this case, we
introduce Output&lt;T&gt;. Unlike Task&lt;T&gt;, Output&lt;T&gt; tracks whether the
value it represents is "known". In addition, it does not allow you to
observe the underlying value directly. Instead, there is a function
Apply which allows accessing and transforming the value. However, if
the underlying value is unknown (e.g. during preview), Apply simply
does not run your callback, so you can never observe the unknown state.
Note that during previews, some output properties may be present, if
the resource provider knows the update would not change them. For
example, if you run an update and then preview without changing your
code, the cloud provider knows many properties are actually stable.
This means applies may run during previews.
Instead of using .ContinueWith all over the place, register a single
ContinueWith call when we begin resource registration. When it
completes, we then call a callback on each resource that uses a
TaskCompletionSource to complete a task that represents the output
property.
This reduces the number of continuations we'll have floating around
and makes the code a little easier to grok.
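A rough TypeScript analogue of that pattern, with a deferred promise
standing in for TaskCompletionSource and .then standing in for
.ContinueWith (all names here are illustrative, not the actual SDK code):

```typescript
// A deferred: a promise plus the function that completes it, playing the
// role TaskCompletionSource plays in the C# code.
interface Deferred<T> {
    promise: Promise<T>;
    resolve: (value: T) => void;
}

function deferred<T>(): Deferred<T> {
    let resolve!: (value: T) => void;
    const promise = new Promise<T>(r => { resolve = r; });
    return { promise, resolve };
}

class Resource {
    private readonly outputs = new Map<string, Deferred<unknown>>();

    private getOrCreate(name: string): Deferred<unknown> {
        let d = this.outputs.get(name);
        if (d === undefined) {
            d = deferred<unknown>();
            this.outputs.set(name, d);
        }
        return d;
    }

    // Each output property is backed by a deferred, completed later.
    output(name: string): Promise<unknown> {
        return this.getOrCreate(name).promise;
    }

    // The callback invoked when registration completes: it completes the
    // task behind every output property in one go.
    complete(values: Record<string, unknown>): void {
        for (const [name, value] of Object.entries(values)) {
            this.getOrCreate(name).resolve(value);
        }
    }
}

// The single continuation registered when resource registration begins,
// instead of one per output property.
function registerResource(res: Resource,
                          registration: Promise<Record<string, unknown>>): void {
    registration.then(values => res.complete(values));
}
```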
This feels more natural, even if the code you end up writing to
describe your application has a little more ceremony. We also don't
have to worry about all the crazy things we likely would have had to
worry about if we continued down the CSI path, or some path where we
had a middle stage that reflection-loaded an actual binary and invoked
into it (I had a fear in the back of my mind that at some point we'd
actually have to start using AssemblyLoadContext).
This model is pretty easy to internalize, as well.
The major change from the language plugin point of view is that
instead of passing command line arguments to the executor, we just set
a bunch of `PULUMI_XXX` env-vars, which parts of our system know how
to use.
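Sketched in TypeScript, the executor side of this looks something like
the following. Only the `PULUMI_` prefix comes from the change itself; the
specific variable names below are made up for illustration.

```typescript
// The executor picks up its settings from PULUMI_XXX environment variables
// instead of command line arguments. (Variable names are illustrative.)
function readSettings(env: Record<string, string | undefined>) {
    return {
        project: env["PULUMI_PROJECT"],
        stack: env["PULUMI_STACK"],
        monitor: env["PULUMI_MONITOR"],
    };
}
```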
Next up, Output tracking.
Introduce Pulumi.Input&lt;T&gt;, which is either a T or a Task&lt;T&gt; (in the
future, it could also be an Output&lt;T&gt;).
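In TypeScript terms the shape is a simple union (the C# version would rely
on implicit conversions instead; `toTask` here is a hypothetical helper,
not part of the SDK):

```typescript
// Input<T> is either a plain value or an eventual one. In the future it
// could also admit an Output<T>.
type Input<T> = T | Promise<T>;

// Normalizing to a promise lets the SDK treat both cases uniformly.
function toTask<T>(input: Input<T>): Promise<T> {
    return Promise.resolve(input);
}
```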
Code up (by hand) something that looks like what we might end up using
for `tfgen`'d projections of Bucket and Bucket content. A bunch of
stuff is missing there, but it is sufficient to change the example code
to look a little more in line with what you'd actually end up writing.
The next thing I plan on doing is moving away from CSI, in favor of an
actual .NET binary, so I have nice IntelliSense in the example.
Small changes to deal with some changes in Pulumi itself, as well as
writing down all the stuff I had forgotten between now and March, and
adding a small example.
Stop cloning pulumi/home. This doesn't work in Travis because public
repositories cannot have private SSH keys, which we'd need in order to
clone this repository. All the scripts we consume from there are now in
pulumi/scripts, so we'll just consume them from there.
This change includes the Python and Golang language hosts in the Windows
SDK. As part of this change, I had to adjust how we launched the second
stage of the language host, since we can't depend on the shebang, so now
we invoke `python` passing the executor and then the arguments.
Fixes #1509
This allows us to delete the one-off export/import test, which is nice
because it failed to run when PULUMI_ACCESS_TOKEN was not set in the
environment (due to an interaction between `pulumi login` and the
bare-bones integration test framework).
Because we run our golang integration tests "in tree" (due to
the need to be under $GOPATH and have a vendor folder around), the
"command-output" folder was getting left behind, dirtying the worktree
after building.
This change does two things:
1. On a successful run, remove the folder.
2. Ignore the folder via .gitignore (this way, if a test fails and you
do `git add .`, you don't end up committing this folder).
Previously, we published builds to rel.pulumi.com and only put actual
released builds (e.g. RCs and final builds) on get.pulumi.com. We
should just publish all of the SDK builds to get.pulumi.com.
This also makes a slight tweak to the filename of the package we
upload (we took the switch-over to get.pulumi.com as an opportunity to
make this change, and now that we are uploading automatically, we need
to encode this change instead of doing it by hand).
All scripts that are generally useful across all builds have been moved
into `pulumi/scripts`. These changes clone that repository and retarget
the various scripts to their new location.
This change makes it a little easier to do the style of highlighting
we are doing now with the login prompt, cleaning up some of the
padding calculations that were otherwise complicated due to ANSI
escape sequences.
This change makes our login prompt a little "friendlier", especially
important since this will be the first thing a new user sees.
The new message is:
    $ pulumi new
    We need your Pulumi account to identify you.
    Enter your access token from https://app.pulumi.com/account
    or hit <ENTER> to log in using your browser:
The ZIP format started with MS-DOS dates, which start in 1980. Other dates
have been layered on, but the ZIP file handler used by Azure websites still
relies on the MS-DOS dates.
Using the Unix epoch here (1970) results in ZIP entries that (e.g.)
OSX `unzip` sees as 12-31-1969 (timezones) but Azure websites sees as
01/01/2098.
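For reference, the MS-DOS date in a ZIP entry packs the year into 7 bits
relative to 1980, which is why pre-1980 timestamps misbehave. A sketch of
the encoding, with the pre-1980 clamp one would want (this is an
illustration, not the code of any particular ZIP library):

```typescript
// Encode a calendar date into the 16-bit MS-DOS date used by ZIP entries:
// bits 15-9 are the year minus 1980, bits 8-5 the month, bits 4-0 the day.
// Years before 1980 are clamped, since the format cannot represent them.
function toDosDate(year: number, month: number, day: number): number {
    const y = Math.max(year, 1980);
    return ((y - 1980) << 9) | (month << 5) | day;
}
```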
Due to the way GOPATH and vendoring work, copying Go tests out to a
random temp directory simply will not work. This copying is largely a
holdover from the days when .pulumi/ directories would pollute the
source directory, but it is also done for languages like Node.js, whose
preparatory and build steps pollute the working directory (with bin/
and node_modules/ directories). Go does not have this problem, so we
can safely skip the copy.
Previously, we would include information about what git commit a build
came from in the "local" portion of the PEP-440 version. This was a
problem because PyPI does not allow packages to be uploaded to the
registry if their versions contain a local part.
So, for now, we'll just never put the git commit information in the
generated version. We'll continue to add a dirty tag in the local
part. This will prevent us from publishing dirty builds to PyPI,
but that's in line with what we want.
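The resulting versioning rule can be sketched as follows (a hypothetical
helper for illustration, not our actual build script):

```typescript
// Build the PEP-440 style version we publish: no git commit in the local
// part, but keep a "dirty" marker in the local part so dirty builds can
// never be uploaded to PyPI.
function pypiVersion(baseVersion: string, dirty: boolean): string {
    return dirty ? `${baseVersion}+dirty` : baseVersion;
}
```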
Today, our tests default to creating stacks in the `pulumi`
organization. We did this because our tests run as `pulumi-bot` and
we'd rather create the stacks in our shared organization, so any
Pulumi developer can see them.
Of course, as we prepare to have folks outside of the Pulumi
organization write and run tests, this has now become a bad default.
Remove the ability to explicitly set an owner in
ProgramTestOptions (since that would more or less only lead to pain
going forward) and default to just creating the stacks in whatever
account is currently logged in. In CI, we'll set a new environment
variable "PULUMI_TEST_OWNER" which controls the owner of the stacks,
which we'll set to `pulumi`.
The impact on day-to-day developers is that during local test runs
you'll see these stacks in your list of stacks. If any of the tests
fail to clean up, you'll see lingering stacks (but you can go clean
them up).