I've tripped over pulumi/coconut#141 a few times now, particularly with
the sort of dynamic payloads required when creating lambdas and API gateways.
This change implements support for computed property initializers.
Our initial implementation of assets was intentionally naive, because
they were limited to single-file assets. However, it turns out that for
real scenarios (like lambdas), we want to support multi-file assets.
In this change, we introduce the concept of an Archive. An archive is
what the term classically means: a collection of files, addressed as one.
For now, we support three kinds: tarfile archives (*.tar), gzip-compressed
tarfile archives (*.tgz, *.tar), and normal zipfile archives (*.zip).
There is a fair bit of library support for manipulating Archives as a
logical collection of Assets. I've gone to great lengths to avoid making
copies; however, sometimes it is unavoidable (for example, when sizes
are required in order to emit offsets). This is also complicated by the
fact that the AWS libraries often want seekable streams, if not actual
raw contiguous []byte slices.
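To make the archive kinds concrete, here is a minimal sketch of extension-based format detection. The `ArchiveFormat` enum and `detectArchiveFormat` names are illustrative assumptions, not the actual identifiers in the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// ArchiveFormat enumerates the supported archive kinds (illustrative names).
type ArchiveFormat int

const (
	NotArchive     ArchiveFormat = iota // not a recognized archive
	TarArchive                          // *.tar
	TarGZIPArchive                      // *.tgz, *.tar.gz
	ZIPArchive                          // *.zip
)

// detectArchiveFormat guesses an archive's format from its file extension.
func detectArchiveFormat(path string) ArchiveFormat {
	switch {
	case strings.HasSuffix(path, ".tar"):
		return TarArchive
	case strings.HasSuffix(path, ".tgz") || strings.HasSuffix(path, ".tar.gz"):
		return TarGZIPArchive
	case strings.HasSuffix(path, ".zip"):
		return ZIPArchive
	default:
		return NotArchive
	}
}

func main() {
	fmt.Println(detectArchiveFormat("payload.tar.gz") == TarGZIPArchive)
}
```

Note that `strings.HasSuffix(path, ".tar")` is checked first, so `*.tar.gz` still falls through to the gzip case correctly.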
This reverts back to the old style of having the resource name as its
first parameter in the generated package. Stylistically, this reads a
little nicer, and also ensures we don't need to rewrite all our existing
samples/test cases, etc.
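For illustration, the reverted style looks roughly like this; `SecurityGroup` and its argument bag are hypothetical stand-ins for a generated resource type:

```go
package main

import "fmt"

// SecurityGroupArgs is a hypothetical argument bag for a generated resource.
type SecurityGroupArgs struct {
	Description string
}

// SecurityGroup is a hypothetical generated resource type.
type SecurityGroup struct {
	Name string
	Args SecurityGroupArgs
}

// NewSecurityGroup shows the reverted style: the resource name comes first,
// which reads naturally at call sites.
func NewSecurityGroup(name string, args SecurityGroupArgs) *SecurityGroup {
	return &SecurityGroup{Name: name, Args: args}
}

func main() {
	sg := NewSecurityGroup("web-sg", SecurityGroupArgs{Description: "allow http"})
	fmt.Println(sg.Name)
}
```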
In a few places, an IDL type will be a pointer, but the resulting
RPC code would, ideally, be the naked type. Namely, in both resource
and asset cases, they are required to be pointers in the IDL (because
they are by-pointer by nature), but the marshaled representations need
not be pointers. This change depointerizes such types in the RPC code
unless, of course, they are optional, in which case pointers still make
sense. This avoids some annoying dereferencing and is the kind of thing
we want to do sooner before seeing widespread use.
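A small sketch of what depointerization means for the marshaled shape; the `Asset` and `FunctionRPC` types here are hypothetical, not the actual generated structs:

```go
package main

import "fmt"

// Asset is a stand-in for an IDL asset type, which is by-pointer by nature
// in the IDL itself.
type Asset struct {
	Path string
}

// FunctionRPC sketches a depointerized wire shape: Code is required, so it is
// the naked type (no nil checks at use sites); Description is optional, so it
// stays a pointer, letting absence be represented as nil.
type FunctionRPC struct {
	Code        Asset   // required: naked type, no dereferencing needed
	Description *string // optional: pointer so "not set" is representable
}

func main() {
	f := FunctionRPC{Code: Asset{Path: "handler.zip"}}
	fmt.Println(f.Code.Path, f.Description == nil)
}
```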
This change complicates local development, due to the way
that Glide vendors dependencies. After we move to distinct repos
for the various provider packages, we can go back to doing this.
Now that the IDL types encode named resources as a first class concept,
there is no need to do the dynamic overriding dance in the AWS provider.
I should have included this in my final "banking" of the IDL changes.
This change includes all of the CIDLC generated code for the AWS
package. Just as we vendor and version the generated gRPC code,
we will do so for these files also. This allows building of the
full packages without needing to run any special tools, in addition
to letting us keep track of generated code deltas over time.
This checkin contains all the non-generated package files. This includes
the package metadata, utility functions like utils/instanceMaps, and the
module layouts (these being a candidate for auto-generation down the road).
In addition, the install script has been updated to reflect the new layout.
The old TypeScript resource definitions may now go away in favor of
the new IDL and associated generated code. After this change, I will
check in the generated code, so that the repo is self-contained.
This change adds some conditional output that depends on whether a
named resource was contained in a file or not. This eliminates some
compiler errors in the generated code when using manually-named
resources.
This is just a minor stylistic change. Instead of letting the AWS
package imports take the good names, prefer to use the generated IDL
names (since they contain the strongly typed data structures).
A property whose type is `interface{}` in the IDL ought to be projected
as a "JSON-like" map, just like it is on the Coconut package side of things,
which means a `map[string]interface{}`.
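As a sketch of that projection: object-shaped `interface{}` values come through as `map[string]interface{}`, just as encoding/json decodes untyped JSON objects. The helper name below is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// asJSONMap illustrates the projection: an IDL property typed `interface{}`
// carries JSON-like data, so object values are map[string]interface{}.
func asJSONMap(v interface{}) (map[string]interface{}, bool) {
	m, ok := v.(map[string]interface{})
	return m, ok
}

func main() {
	var v interface{}
	_ = json.Unmarshal([]byte(`{"Version":"2012-10-17"}`), &v)
	if m, ok := asJSONMap(v); ok {
		fmt.Println(m["Version"])
	}
}
```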
This change correctly implements package/module resolution in CIDLC.
For now, this only works for intra-package imports, which is sufficient
for our current needs. Eventually we will need to support inter-package
imports (see pulumi/coconut#138).
Unfortunately, this wasn't a great name. The old one stunk, but the
new one was misleading at best. The thing is, this isn't about performing
an update -- it's about NOT doing an update, depending on its return value.
Further, it's not just previewing the changes, it is actively making a
decision on what to do in response to them. InspectUpdate seems to convey
this, and I've unified the InspectUpdate and Update routines to take a
ChangeRequest, instead of an UpdateRequest, to help imply the desired behavior.
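The unified shape looks roughly like the following; the field names and the replace-decision logic are hypothetical, shown only to convey the split between deciding on a change and applying it:

```go
package main

import "fmt"

// ChangeRequest is a hypothetical stand-in for the shared request type: it
// describes a pending change without implying an update will happen.
type ChangeRequest struct {
	Olds map[string]interface{} // old property values
	News map[string]interface{} // new property values
}

// Provider sketches the unified API: InspectUpdate decides how to respond to
// a change (e.g., whether replacement is required), and Update applies it.
type Provider interface {
	InspectUpdate(req ChangeRequest) (replace bool, err error)
	Update(req ChangeRequest) error
}

type demoProvider struct{}

// InspectUpdate flags replacement when an (assumed) immutable property changed.
func (demoProvider) InspectUpdate(req ChangeRequest) (bool, error) {
	return req.Olds["name"] != req.News["name"], nil
}

func (demoProvider) Update(req ChangeRequest) error { return nil }

func main() {
	var p Provider = demoProvider{}
	replace, _ := p.InspectUpdate(ChangeRequest{
		Olds: map[string]interface{}{"name": "a"},
		News: map[string]interface{}{"name": "b"},
	})
	fmt.Println(replace)
}
```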