The scope chain currently does not include module-scope
variables, which are instead stored on a module object. For
now, we are capturing this module object along with the
scope chain as part of a Lambda object so that we can use
it when evaluating variable references within a lambda
expression.
Fixes #175.
Introduces a free variable AST visitor, and uses this to limit
the environment exposed by the `serializeClosure` intrinsic
to only those variables that are referenced by the lambda body.
Fixes #177.
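The free-variable computation that such a visitor performs can be sketched over a toy lambda AST. This is only an illustration of the technique, using hypothetical node types rather than Lumi's actual AST:

```go
package main

import "fmt"

type Expr interface{}
type Var struct{ Name string }
type Lambda struct {
	Param string
	Body  Expr
}
type Call struct{ Fn, Arg Expr }

// freeVars records in out every variable referenced but not bound;
// only these need to be captured when serializing a closure.
func freeVars(e Expr, bound map[string]bool, out map[string]bool) {
	switch t := e.(type) {
	case Var:
		if !bound[t.Name] {
			out[t.Name] = true
		}
	case Lambda:
		inner := map[string]bool{t.Param: true} // the parameter shadows outer names
		for k := range bound {
			inner[k] = true
		}
		freeVars(t.Body, inner, out)
	case Call:
		freeVars(t.Fn, bound, out)
		freeVars(t.Arg, bound, out)
	}
}

func main() {
	// (x) => x(y): x is bound by the lambda, so only y is free.
	e := Lambda{Param: "x", Body: Call{Var{"x"}, Var{"y"}}}
	out := map[string]bool{}
	freeVars(e, map[string]bool{}, out)
	fmt.Println(out["y"], out["x"]) // true false
}
```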
This change slightly refactors the way resources are created and
implemented. We now have two implementations of the Resource interface:
* `resource` (in resource_value.go), which is a snapshot of a resource's
state. All values are resolved and there is no live reference to any
heap state or objects. This will be used when serializing and/or
deserializing snapshots of deployments.
* `objectResource` (in resource_object.go), which is an implementation
of the Resource interface that wraps an underlying, live runtime object.
This currently introduces no functional difference, as fetching Inputs()
amounts to taking a snapshot of the full state. But this at least
gives us a leg to stand on in making sure that output properties are
read at the right times during evaluation.
This is a fundamental part of pulumi/lumi#90.
This change begins to track objects that are implicated in the
creation of computed values. This ultimately translates into the
resource URNs which are used during dependency analysis and
serialization. This is part of pulumi/lumi#90.
Adds Check implementation for aws.lambda.Permission
resources using AWS-defined regexps.
Fixes a bug in the lumidl Check wrapper that was dropping
reported check failures (and regenerates all RPC files).
Add calls to Get into the AWS provider test framework.
This change fixes the serialization of resource properties during
deployment checkpoints. We erroneously serialized empty arrays and
empty maps as though they were nil; instead, we want to keep them
serialized as non-nil empty collections, since the presence of a
value might be semantically meaningful. (We still skip nils.)
Also added some test cases.
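The nil-versus-empty distinction can be sketched like so. This is a hypothetical illustration of the rule, not the engine's actual serializer: only true nils are skipped, while non-nil empty collections survive into the checkpoint.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// skipNil reports whether a property value should be omitted from a
// serialized checkpoint: only nils are skipped; non-nil empty slices
// and maps are kept, since their presence can be semantically meaningful.
func skipNil(v interface{}) bool {
	switch t := v.(type) {
	case nil:
		return true
	case []interface{}:
		return t == nil
	case map[string]interface{}:
		return t == nil
	default:
		return false
	}
}

func main() {
	props := map[string]interface{}{
		"a": nil,                      // skipped
		"b": []interface{}{},          // kept: empty, but present
		"c": map[string]interface{}{}, // kept: empty, but present
	}
	out := map[string]interface{}{}
	for k, v := range props {
		if !skipNil(v) {
			out[k] = v
		}
	}
	b, _ := json.Marshal(out)
	fmt.Println(string(b)) // {"b":[],"c":{}}
}
```

Note that `json`'s own `omitempty` tag option would be wrong here, since it drops empty collections and nils alike.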
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment). It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
During this, I took the opportunity to also clean up a few other things
that had been bugging me. Namely, the presence of `mapper.Object` was
always error-prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI. Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type. Even better, we
no longer require resource providers to deal with the mapper
infrastructure. Instead, the `Check` function can simply return an
array of errors. It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.
As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.
This completes pulumi/lumi#183.
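The core of the reflection trickery can be sketched in a few lines. The `encode` function and `Bucket` type below are illustrative stand-ins, not the real `mapper.Encode`, which handles nesting, optionality, and errors:

```go
package main

import (
	"fmt"
	"reflect"
)

// encode walks a struct's exported fields via reflection and builds a
// JSON-like map[string]interface{} keyed by each field's `lumi` tag
// (falling back to the Go field name when no tag is present).
func encode(v interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	rv := reflect.ValueOf(v)
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		f := rt.Field(i)
		key := f.Tag.Get("lumi")
		if key == "" {
			key = f.Name
		}
		out[key] = rv.Field(i).Interface()
	}
	return out
}

type Bucket struct {
	Name   string `lumi:"name"`
	Region string `lumi:"region"`
}

func main() {
	m := encode(Bucket{Name: "images", Region: "us-west-2"})
	fmt.Println(m["name"], m["region"]) // images us-west-2
}
```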
This changes the resource model to persist input and output properties
distinctly, so that when we diff changes, we only do so on the programmer-
specified input properties. This eliminates problems when the outputs
differ slightly; e.g., when the provider normalizes inputs, adds its own
values, or fails to produce new values that match the inputs.
This change simultaneously makes progress on pulumi/lumi#90, by beginning
tracking the resource objects implicated in a computed property's value.
I believe this fixes both #189 and #198.
This reverts commit 048c35d428.
I have a pending change that fixes this along with a number of
other issues (including pulumi/lumi#198 and pulumi/lumi#187),
however, it's going to take a little longer and I want to unblock.
This fixes pulumi/lumi#200.
The @lumi.out decorator ended up not being required after all.
The engine essentially treats all resource properties as potentially
output properties now, given the way we read back the state from
the provider after essential operations.
This is a good thing, because the evaluator currently doesn't
perform dynamic decoration in a way that would be amenable to
a faithful ECMAScript implementation (see pulumi/lumi#128).
This change alters diag.Message to not format strings and, instead,
encourages developers to use the Infof, Errorf, and Warningf varargs
functions. It also tests that arguments are never interpreted as
format strings.
There are a few things that annoyed me about the way our CLI works with
directories when loading packages. For example, `lumi pack info some/pack/dir/`
never worked correctly. This is unfortunate when scripting commands.
This change fixes the workspace detection logic to handle these cases.
The AssumeRolePolicyDocument property returned by the AWS IAM GetRole API is
a URL-encoded JSON string, so we need to decode it before JSON unmarshalling.
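The decoding step amounts to an unescape followed by an unmarshal. A minimal sketch (the `decodePolicy` name is illustrative, not the provider's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

// decodePolicy unescapes the URL-encoded JSON that GetRole returns for
// AssumeRolePolicyDocument, then unmarshals it into a JSON-like map.
func decodePolicy(raw string) (map[string]interface{}, error) {
	decoded, err := url.QueryUnescape(raw)
	if err != nil {
		return nil, err
	}
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(decoded), &doc); err != nil {
		return nil, err
	}
	return doc, nil
}

func main() {
	// `%7B%22Version%22%3A%222012-10-17%22%7D` is {"Version":"2012-10-17"} URL-encoded.
	doc, err := decodePolicy("%7B%22Version%22%3A%222012-10-17%22%7D")
	if err != nil {
		panic(err)
	}
	fmt.Println(doc["Version"]) // 2012-10-17
}
```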
The Code property returned by AWS Lambda GetFunction provides a pre-signed S3 URL,
which changes on each call and is in a different format from what the user provides.
For now, we'll not store this back into the Function object.
Add additional output properties to AWS Lambda Function that are stable values returned
from GetFunction.
Also corrects a gap where some property delete operations were not being correctly reported.
This change enables parallelism for our tests.
It also introduces a `test_core` Makefile target to just run the
core engine tests, and not the providers, since they take a long time.
This is intended only as part of the inner developer loop.
This is a minor refactoring to introduce a ProviderHost interface
that is associated with the context and can be swapped in and out for
custom plugin behavior. This is required to write tests that mock
certain aspects, like loading packages from the filesystem.
In theory, this change incurs zero behavioral changes.
This change fixes up a few things so that updates correctly deal
with output properties. This involves a few things:
1) All outputs stored on the pre snapshot need to get propagated
to the post snapshot during planning at various points. This
ensures that the diffing logic doesn't need to be special cased
everywhere, including both the Lumi and the provider sides.
2) Names are changed to "input" properties (using a new `lumi` tag
option, `in`). These are properties that providers are expected
to know nothing about, which we must treat with care during diffs.
3) We read back properties, via Get, after doing an Update just like
we do after performing a Create. This ensures that if an update
has a cascading impact on other properties, it will be detected.
4) Inspecting a change, prior to updating, must be done using the
computed property set instead of the real one. This is to avoid
mutating the resource objects ahead of actually applying a plan,
which would be wrong and misleading.
This change remembers which properties were computed as outputs,
or even just read back as default values, during a deployment. This
information is required in the before/after comparison in order to
perform an intelligent diff that doesn't flag, for example, the absence
of "default" values in the after image as deletions (among other things).
As I was in here, I also cleaned up the way the provider interface
works, dealing with concrete resource types, making it feel a little
richer and less like we're doing in-memory RPC.
This change makes progress on a few things with respect to properly
receiving properties on the engine side, coming from the provider side,
of the RPC boundary. The issues here are twofold:
1. Properties need to get unmapped using a JSON-tag-sensitive
marshaler, so that they are cased properly, etc. For that, we
have a new mapper.Unmap function (which is ultra lame -- see
pulumi/lumi#138).
2. We have the reverse problem with respect to resource IDs: on
the send side, we must translate from URNs (which the engine
knows about) and provider IDs (which the provider knows about);
similarly, then, on the receive side, we must translate from
provider IDs back into URNs.
As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
The change to flow logging to plugins is nice, however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors. After this change, we will only flow if
--logflow is passed, e.g. as in
$ lumi --logtostderr --logflow -v=9 deploy ...
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so. As a result, this change takes an initial
whack at implementing Get on all existing AWS resources. The Get API needs
to return a fully populated structure containing all inputs and outputs.
Believe it or not, this is actually part of pulumi/lumi#90.
This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.
This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties. For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
This change introduces the notion of a computed versus an output
property on resources. Technically, output is a subset of computed,
however it is a special kind that we want to treat differently during
the evaluation of a deployment plan. Specifically:
* An output property is any property that is populated by the resource
provider, not code running in the Lumi type system. Because these
values aren't available during planning -- since we have not yet
performed the deployment operations -- they will be latent values in
our runtime and generally missing at the time of a plan. This is no
problem and we just want to avoid marshaling them in inopportune places.
* A computed property, on the other hand, is a different beast altogether.
While it is true that one of these is missing a value -- by virtue of the fact
that they too are latent values, bottoming out in some manner on an
output property -- they will appear in serializable input positions.
Not only must we treat them differently during the RPC handshake and
in the resource providers, but we also want to guarantee they are gone
by the time we perform any CRUD operations on a resource. They are
purely a planning-time-only construct.
This change adds a @lumi.out decorator and modifies LumIDL to emit it on
output properties of resource types. There still isn't any runtime
awareness, however, this is an isolated change that will facilitate it.
This change includes approximately 1/3rd of the change necessary
to support output properties, as per pulumi/lumi#90.
In short, the runtime now has a new hidden type, Latent<T>, which
represents a "speculative" value, whose eventual type will be T,
that we can use during evaluation in various ways. Namely,
operations against Latent<T>s generally produce new Latent<U>s.
During planning, any Latent<T>s that end up in resource properties
are transformed into "unknown" property values. An unknown property
value is legal only during planning-time activities, such as Check,
Name, and InspectChange. As a result, those RPC interfaces have
been updated to include lookaside maps indicating which properties
have unknown values. My intent is to add some helper functions to
make dealing with this circumstance more correct-by-construction.
For now, using an unresolved Latent<T> in a conditional will lead
to an error. See pulumi/lumi#67. Speculating beyond these -- by
supporting iterative planning and application -- is something we
want to support eventually, but it makes sense to do that as an
additive change beyond this initial support. That is a missing 1/3.
Finally, the other missing 1/3rd which will happen much sooner
than the rest is restructuring plan application so that it will
correctly observe resolution of Latent<T> values. Right now, the
evaluation happens in one single pass, prior to the application, and
so Latent<T>s never actually get witnessed in a resolved state.
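The shape of the Latent<T> type can be sketched with Go generics. All names here are hypothetical (the real runtime type is richer and tracks resource dependencies), but the sketch shows the key behavior: applying a function to a latent value yields a new latent value, and unknown-ness propagates:

```go
package main

import "fmt"

// Latent is a speculative value: possibly unresolved during planning,
// resolved to a concrete value during application.
type Latent[T any] struct {
	resolved bool
	value    T
}

func Unknown[T any]() Latent[T]  { return Latent[T]{} }
func Known[T any](v T) Latent[T] { return Latent[T]{resolved: true, value: v} }

// Apply maps a Latent[T] to a Latent[U]; an unknown input produces an
// unknown output, mirroring how operations on Latent<T>s yield Latent<U>s.
func Apply[T, U any](l Latent[T], f func(T) U) Latent[U] {
	if !l.resolved {
		return Unknown[U]()
	}
	return Known(f(l.value))
}

func main() {
	arn := Unknown[string]() // e.g. an output property, unknown at plan time
	name := Apply(arn, func(s string) string { return s + "-suffix" })
	fmt.Println(name.resolved) // false: unknown-ness propagated

	n := Apply(Known("abc"), func(s string) int { return len(s) })
	fmt.Println(n.resolved, n.value) // true 3
}
```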
This new package is similar to the AWS Serverless Application Model, offering
higher-level interfaces to manage serverless resources. This will be a candidate
for moving into its own package in the future.
The FunctionX class has been moved into this module, and a new API class has
been added.
The API class manages a collection of an API Gateway RestAPI, Stage and Deployment,
based on a collection of routes linked to Functions. On changes to the API specification,
it updates the RestAPI, replaces the Deployment and updates the Stage to point to
the updated Deployment.
This change also reorganizes some of the intrinsics.
When serializing a closure, we serialize the lambda environment into the
aws.lambda.Function Environment property. We need to serialize the lambda
environment in a stable order to ensure that we don't cause Lumi to require
updates to the aws.lambda.Function resource.
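Stable ordering here means sorting keys before emitting, since Go map iteration order is deliberately randomized and a naive range would produce spurious diffs. A minimal sketch (the function name is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// serializeEnv emits key=value pairs in sorted key order so that repeated
// serializations of the same environment are byte-for-byte identical.
func serializeEnv(env map[string]string) []string {
	keys := make([]string, 0, len(env))
	for k := range env {
		keys = append(keys, k)
	}
	sort.Strings(keys) // fixed order, independent of map iteration order
	out := make([]string, 0, len(keys))
	for _, k := range keys {
		out = append(out, k+"="+env[k])
	}
	return out
}

func main() {
	fmt.Println(serializeEnv(map[string]string{"b": "2", "a": "1"})) // [a=1 b=2]
}
```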
This change flows --logtostderr and -v=x settings to any dynamically
loaded plugins so that running Lumi's command line with these flags
will also result in the plugins logging at the requested levels. I've
found this handy for debugging purposes.
Resolves #137.
This is an initial pass for supporting JavaScript lambda syntax for defining an AWS Lambda Function.
A higher level API for defining AWS Lambda Function objects `aws.lambda.FunctionX` is added which accepts a Lumi lambda as an argument, and uses that lambda to generate the AWS Lambda Function code package.
LumiJS lambdas are serialized as the JavaScript text of the lambda body, along with a serialized version of the environment that is deserialized at runtime and used as the context for the body of the lambda.
Remaining work to further improve support for lambdas is being tracked in #173, #174, #175, and #177.
Unifies the notion of BuiltinFunctions with the existing Intrinsic.
Intrinsic is now only a wrapper type, used to indicate the need to lookup the symbol in the
eval package's table of registered intrinsics. It does not carry the invoker function used
to eval the intrinsic.
The eval test in its current form depends on the Lumi standard library
as an external dependency. This means that, in order to run the test,
you must first install the standard library. Not only is this poor
practice, it is also interfering with our ability to get our new CI/CD
system up and running. This change fixes all of that by mocking the one
standard library runtime function that we need in order to hook the intrinsic.
This change adds some machinery to make it easier to write evaluator tests,
and also implements some tests for the lumi:runtime/dynamic:isFunction intrinsic.
During various refactorings pertaining to dynamic vs static invoke, the
intrinsics machinery broke in a few ways:
* MaybeIntrinsic needs to happen elsewhere. Rather than doing it at binding
time, we can do it when populating the properties in the first place,
reusing the same property symbol from one access to the next.
* Last week, I refactored the intrinsics module to also have a dynamic sub-
module. The tokens in the Intrinsics map needed to also get updated.
* As a result of the tokens now containing member parts, we can't use
tokens.Token.Name, since it is assumed to be a simple name; instead, we
need to convert to tokens.ModuleMember, and then fetch Name. To be honest,
this is probably worth revisiting, since I think most people would expect
Name to just work regardless of the Token kind. The assert that it be
Simple might be a little overly aggressive...
This checkin fixes these issues. I'm not pushing to master just yet,
however, until there are some solid tests in here to prevent future breakage.
If a resource has no required properties, there's no need for an
argument object. (In the extreme case, perhaps the resource has
*no* properties.) This is a minor usability thing, but it's far
nicer to write code like
let buck = new Bucket("images");
than it is to write code like
let buck = new Bucket("images", {});
The storing of output properties won't work correctly until pulumi/lumi#90
is completed. The update logic sees properties that weren't supplied by the
developer and thinks this means an update is required; this is easy to fix
but better to just roll into the overall pending change that will land soon.
This change keeps the lumi prefix on our CLI tools.
As @lukehoban pointed out in person, as soon as we do pulumi/coconut#98,
most people (other than compiler authors themselves) won't actually be
typing the commands. And, furthermore, the commands aren't all that bad.
Eventually I assume we'll want something like `lumi-js`, or
`lumi-js-compiler`, so that binaries are discovered dynamically in a way
that is extensible for future languages. We can tackle this during #98.
This change implements property accessors (getters and setters).
The approach is fairly basic, but is heavily inspired by the ECMAScript5
approach of attaching a getter/setter to any property slot (even if we don't
yet fully exploit this capability). The evaluator then needs to track and
utilize the appropriate accessor functions when loading locations.
This change includes CocoJS support and makes a dent in pulumi/coconut#66.
We need a stable object key enumeration order and we might as well leverage
ECMAScript's definition for this. As of ES6, key ordering is specified; see
https://tc39.github.io/ecma262/#sec-ordinaryownpropertykeys.
I haven't fully implemented the "numbers come first part" (we can do this as
soon as we have support for Object.keys()), but the chronological part works.
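The ES6 rule for string keys can be sketched as follows: array-index-like keys come first in ascending numeric order, then all remaining keys in insertion (chronological) order. This is an illustrative implementation, not the evaluator's actual code:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// orderKeys applies the ES6 OrdinaryOwnPropertyKeys ordering to string keys:
// canonical non-negative integer keys first, ascending, then everything
// else in the order it was inserted.
func orderKeys(keys []string) []string {
	var ints []int
	var rest []string
	for _, k := range keys {
		// A key counts as an array index only if it round-trips: "10" does, "010" doesn't.
		if n, err := strconv.Atoi(k); err == nil && n >= 0 && strconv.Itoa(n) == k {
			ints = append(ints, n)
		} else {
			rest = append(rest, k)
		}
	}
	sort.Ints(ints)
	out := make([]string, 0, len(keys))
	for _, n := range ints {
		out = append(out, strconv.Itoa(n))
	}
	return append(out, rest...)
}

func main() {
	fmt.Println(orderKeys([]string{"b", "2", "a", "10"})) // [2 10 b a]
}
```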
This addresses a bug where we did not reconstruct the correct lexical
environment when restoring a lambda's captured context. Namely, the local
variables scope "drifted" with respect to the evaluation scope slots.
This is an example program that triggered it:
function mkeighty() {
let eighty = 80;
return () => eighty;
}
let group = new ec2.SecurityGroup(..., {
ingress: [ ..., fromPort: mkeighty()(), ... ],
});
I am going to work on turning this into a regression test with my next
checkin; there's a fair bit of test infrastructure "machinery" I need
to put in place, but the time has come to lay the foundation.
This change completes implementing lambdas in the runtime, closing
out pulumi/coconut#62. The change is mostly straightforward, with
most changes coming down to the fact that functions may now exist
that themselves aren't definitions (like class/module members).
The function stub machinery has also been updated to retain the
environment in which a lambda was created, effectively "capturing"
the lexically available variables. Note that this is *not* dynamic
scoping, which will be a problem down the road when/if we want to
support Ruby. My guess is we'll just have a completely different
DynamicallyScopedLambdaExpression opcode.
The previous shape of SequenceExpression only permitted expressions
in the sequence. This is pretty common in most ILs, however, it usually
leads to complicated manual spilling in the event that a statement is needed.
This is often necessary when, for example, a compiler is deeply nested in some
expression production, and then realizes the code expansion requires a
statement (e.g., maybe a new local variable must be declared, etc).
Instead of requiring complicated code-gen, this change permits SequenceExpression
to contain an arbitrary mixture of expression/statement prelude nodes, terminating
with a single, final Expression which yields the actual expression value. The
runtime bears the burden of implementing this which, frankly, is pretty trivial.
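The runtime's job can be sketched in a few lines: evaluate each prelude node purely for its effects, then evaluate the final expression for the sequence's value. The types below are a toy stand-in for the real IL nodes:

```go
package main

import "fmt"

type Env = map[string]int

// SequenceExpression holds an arbitrary prelude of statements and
// discarded expressions, terminated by a single value-producing expression.
type SequenceExpression struct {
	Prelude []func(Env)   // evaluated for side effects only
	Value   func(Env) int // yields the sequence's actual value
}

func (s SequenceExpression) Eval(env Env) int {
	for _, n := range s.Prelude {
		n(env)
	}
	return s.Value(env)
}

func main() {
	// Equivalent to spilling `let tmp = 41; tmp + 1` into a sequence:
	// the compiler needed a local mid-expression and no longer has to
	// restructure the surrounding code to declare it.
	seq := SequenceExpression{
		Prelude: []func(Env){func(e Env) { e["tmp"] = 41 }},
		Value:   func(e Env) int { return e["tmp"] + 1 },
	}
	fmt.Println(seq.Eval(Env{})) // 42
}
```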
I've tripped over pulumi/coconut#141 a few times now, particularly with
the sort of dynamic payloads required when creating lambdas and API gateways.
This change implements support for computed property initializers.
Our initial implementation of assets was intentionally naive, because
they were limited to single-file assets. However, it turns out that for
real scenarios (like lambdas), we want to support multi-file assets.
In this change, we introduce the concept of an Archive. An archive is
what the term classically means: a collection of files, addressed as one.
For now, we support three kinds: tarfile archives (*.tar), gzip-compressed
tarfile archives (*.tgz, *.tar.gz), and normal zipfile archives (*.zip).
There is a fair bit of library support for manipulating Archives as a
logical collection of Assets. I've gone to great length to avoid making
copies, however, sometimes it is unavoidable (for example, when sizes
are required in order to emit offsets). This is also complicated by the
fact that the AWS libraries often want seekable streams, if not actual
raw contiguous []byte slices.
This reverts back to the old style of having the resource name as its
first parameter in the generated package. Stylistically, this reads a
little nicer, and also ensures we don't need to rewrite all our existing
samples/test cases, etc.
In a few places, an IDL type will be a pointer, but the resulting
RPC code would, ideally, be the naked type. Namely, in both resource
and asset cases, they are required to be pointers in the IDL (because
they are by-pointer by nature), but the marshaled representations need
not be pointers. This change depointerizes such types in the RPC
unless, of course, they are optional in which case pointers still make
sense. This avoids some annoying dereferencing and is the kind of thing
we want to do sooner before seeing widespread use.
This change adds some conditional output that depends on whether a
named resource was contained in a file or not. This eliminates some
compiler errors in the generated code when using manually-named
resources.
A property whose type is `interface{}` in the IDL ought to be projected
as a "JSON-like" map, just like it is on the Coconut package side of things,
which means a `map[string]interface{}`.
This change correctly implements package/module resolution in CIDLC.
For now, this only works for intra-package imports, which is sufficient.
Eventually we will need to support cross-package imports (see pulumi/coconut#138).
Unfortunately, this wasn't a great name. The old one stunk, but the
new one was misleading at best. The thing is, this isn't about performing
an update -- it's about NOT doing an update, depending on its return value.
Further, it's not just previewing the changes, it is actively making a
decision on what to do in response to them. InspectUpdate seems to convey
this and I've unified the InspectUpdate and Update routines to take a
ChangeRequest, instead of UpdateRequest, to help imply the desired behavior.
This change rejects non-pointer optional fields in the IDL. Although
there is no reason this couldn't work, technically speaking, it's almost
certainly a mistake. Better to issue an error about it; in the
future, we could consider making this a warning.
* Use --out-rpc, rather than --out-provider, since rpc/ is a peer to provider/.
* Use strongly typed tokens in more places.
* Append "rpc" to the generated RPC package names to avoid conflicts.
* Change the Check function to return []mapper.FieldError, rather than
mapper.DecodeError, to make the common "no errors" case easier (and to eliminate
boilerplate resulting in needing to conditionally construct a mapper.DecodeError).
* Rename the diffs argument to just diff, matching the existing convention.
* Automatically detect changes to "replaces" properties in the PreviewUpdate
function. This eliminates tons of boilerplate in the providers and handles the
90% common case for resource recreation. It's still possible to override the
PreviewUpdate logic, of course, in case there is more sophisticated recreation
logic necessary than just whether a property changed or not.
* Add some comments on some generated types.
* Generate property constants for the names as they will appear in weakly typed
property bags. Although the new RPC interfaces are almost entirely strongly
typed, in the event that diffs must be inspected, this often devolves into using
maps and so on. It's much nicer to say `if diff.Changed(SecurityGroup_Description)`
than `if diff.Changed("description")` (and catches more errors at compile-time).
* Fix resource ID generation logic to properly fetch the Underlying() type on
named types (this would sometimes miss resources during property analysis, emitting
for example `*VPC` instead of `*resource.ID`).
This change implements the boilerplate RPC stub generation for CIDLC
resource providers. This includes a lot of the marshaling goo required
to bridge between the gRPC plugin interfaces and the strongly typed
resource types and supporting marshalable structures.
This completes the major tasks of implementing CIDLC (pulumi/coconut#133),
and I have most of the AWS package running locally on it. There are
undoubtedly bugs left to shake out, but no planned work items remain.
This is an initial implementation of the Coconut IDL Compiler (CIDLC).
This is described further in
https://github.com/pulumi/coconut/blob/master/docs/design/idl.md,
and the work is tracked by pulumi/coconut#133.
I've been kicking the tires with this locally enough to checkpoint the
current version. There are quite a few loose ends not yet implemented,
most of them minor, with the exception of the RPC stub generation which
I need to flesh out more before committing.
In order to support output properties (pulumi/coconut#90), we need to
modify the Create gRPC interface for resource providers slightly. In
addition to returning the ID, we need to also return any properties
computed by the AWS provider itself. For instance, this includes ARNs
and IDs of various kinds. This change simply propagates the resources
but we don't actually support reading the outputs just yet.
This change renames two provider methods:
* Read becomes Get.
* UpdateImpact becomes PreviewUpdate.
These just read a whole lot nicer than the old names.
This change introduces TryLoadDynamicExpression. This is similar to
the existing LoadDynamicExpression opcode, except that it will return
null in response to a missing member (versus the default of raising
an exception). This is to enable languages like JavaScript to encode
operations properly (which always yields undefined/nulls), while still
catering to languages like Python (which throw exceptions).
The order of operations for stderr/stdout monitoring with plugins
managed to hide some important errors. For example, if something
was written to stderr *before* the port was parsed from stdout, a
very possible scenario if the plugin fails before it has even
properly started, then we would silently drop the stderr on the floor
leaving behind no indication of what went wrong. The new ordering
ensures that stderr is never ignored with some minor improvements
in the case that part of the port is parsed from stdout but it
ultimately ends in an error (e.g., if an EOF occurs prematurely).
Right now, we reject dashes in package names. I've hit this a few times and it annoys
me each time. (It would seem to make sense to permit hyphens in package names, given
that [almost?] every other package manager on Earth does...)
No more! Hyphens welcome!
This change introduces decorator support for CocoJS and the corresponding
IL/AST changes to store them on definition nodes. Nothing consumes these
at the moment, however, I am looking at leveraging this to indicate that
certain program fragments are "code" and should be serialized specially
(in support of Functions-as-lambdas).
This change includes a first basic whack at implementing S3 bucket
objects. It leverages the assets infrastructure put in place in the
last commit, supporting uploads from text, files, or arbitrary URIs.
Most of the interesting object properties remain unsupported for now,
but with this we can upload and delete basic S3 objects, sufficient
for a lot of the lambda functions management we need to implement.
This change introduces the basic concept of assets. It is far from
fully featured, however, it is enough to start adding support for various
storage kinds that require access to I/O-backed data (files, etc).
The challenge is that Coconut is deterministic by design, and so you
cannot simply read a file in an ad-hoc manner and present the bytes to
a resource provider. Instead, we will model "assets" as first class
entities whose data source is described to the system in a more declarative
manner, so that the system and resource providers can manage them.
There are three ways to create an asset at the moment:
1. A constant, in-memory string.
2. A path to a file on the local filesystem.
3. A URI, whose scheme is extensible.
Eventually, we want to support byte blobs, but due to our use of a
"JSON-like" type system, this isn't easily expressible just yet.
The URI scheme is extensible in that file://, http://, and https://
are supported "out of the box", but individual providers are free to
recognize their own schemes and support them. For instance, copying
one S3 object to another will be supported simply by passing a URI
with the s3:// protocol in the usual way.
Many utility functions are yet to be written, but this is a start.
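The three asset kinds enumerated above can be sketched as a small interface with one implementation per data source. These types are illustrative, not the actual library's API:

```go
package main

import "fmt"

// Asset is a declaratively described data source, so that the system and
// resource providers, rather than ad-hoc I/O, manage the underlying bytes.
type Asset interface{ Kind() string }

type StringAsset struct{ Text string } // a constant, in-memory string
type FileAsset struct{ Path string }   // a path on the local filesystem
type URIAsset struct{ URI string }     // a URI with an extensible scheme

func (StringAsset) Kind() string { return "string" }
func (FileAsset) Kind() string   { return "file" }
func (URIAsset) Kind() string    { return "uri" }

func main() {
	assets := []Asset{
		StringAsset{Text: "hello, world"},
		FileAsset{Path: "index.js"},
		URIAsset{URI: "s3://bucket/key"}, // provider-recognized scheme
	}
	for _, a := range assets {
		fmt.Println(a.Kind())
	}
}
```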
This change implements simple index-based CocoPy subscripts (and
not the more fully featured slicing ones).
Alongside this, we relax a binder-time check that all dynamic
access types must be strings. The eval code already handles
numeric (array) accesses, so we will permit these to flow through.
This permits non-nil `this` objects for dynamically loaded properties
of classes and modules, provided they are the prototype or module
object for the target function, respectively.
This change emits all CocoJS interfaces as records. This allows us to
safely construct instances of them from Python using anonymous properties,
essentially emulating object literals.
For example, given a CocoJs interface defined as such:
interface Foo {
x;
y?;
z;
}
we can easily construct fresh instances using normal JS literals:
let f = { x: 42, z: "bar" };
But, Python doesn't have the equivalent literal syntax, wedging us.
It's now possible to initialize an instance from CocoPy as follows:
let f = Foo(x=42, z="bar")
This leverages the notion of records and primary properties, as
described in our CocoPack/CocoIL design documents.
* Pushing a class scope should permit subclasses (assertion).
* Returning nothing is legal for voids *and* dynamic (the latter was missing).
* The dynamic load readonly lval check is redundant; we check at the site of
assignment and doing it here as well led to two errors per usage.
Due to Python's ... interesting ... scoping rules, we want to avoid
forcing block scopes in certain places. Instead, we will let arbitrary
statements take their place. Of course, this could be a block, but it
very well could be a multi-statement (essentially a block that doesn't
imply a new lexical scope), or anything, really.
* Move checking the validity of a derived class with no constructor
from evaluation to binding (verification). Furthermore, make it
a verification error instead of an assert, and make the checking
complete (i.e., don't just check for the existence of a base class,
also check for the existence of a ctor, recursively).
* Refactor the constructor accessor out of the evaluator and make it
a method, Ctor, on the Function abstraction.
* Add a recursive Module population case to property initialization.
* Only treat the top-most frame in a module's initializer as belonging
to the module scope. This avoids interpreting block scopes within
that initializer as belonging to the module scope. Ultimately, it's
up to the language compiler to decide where to place scopes, but this
gives it more precise control over module scoping.
* Assert against unsupported named/star/keyword args in CocoPy.
If we encounter a dynamic invocation of a prototype object, we will
interpret it as an object allocation. This corresponds to code like
import ec2 from aws
instance = ec2.Instance(...)
where the second line dynamically loads the prototype object for the
Instance class from the module object for the aws/ec2 module, and
invokes it.
This change adds support for naming imports. At the moment, this simply
makes the names dynamically accessible for languages that do dynamic loads
against module objects, versus strongly typed tokens. The basic scheme
is to keep two objects per module: one that contains the globals and its
prototype parent that contains just the exports. This ensures we can
share the same slots while attaining the desired information hiding
(e.g., when handing out an object for dynamic access, we give out the
parent object, while when loading globals from within a module, we use
the childmost one that contains all private and exported variables).
The old model for imports was to use top-level declarations on the
enclosing module itself. This was a laudable attempt to simplify
matters, but just doesn't work.
For one, the order of initialization doesn't precisely correspond
to the imports as they appear in the source code. This could incur
some weird module initialization problems that lead to differing
behavior between a language and its Coconut variant.
But more pressingly, as we work on CocoPy support, it doesn't give
us an opportunity to dynamically bind names in a correct way. For
example, "import aws" now needs to actually translate into a variable
declaration and assignment of sorts. Furthermore, that variable name
should be visible in the environment block in which it occurs.
This change switches imports to act like statements. For the most
part this doesn't change much compared to the old model. The common
pattern of declaring imports at the top of a file will translate to
the imports happening at the top of the module's initializer. This
has the effect of initializing the transitive closure just as it
happened previously. But it enables alternative models, like imports
inside of functions, and -- per the above -- dynamic name binding.
Previously, it was an error to look up a local that didn't exist.
Now it is common, thanks to dynamic lookups. A few code-paths didn't
previously handle this adequately; now they do.
This rearranges the way dynamic loads work a bit. Previously, they
required an object, and did a dynamic lookup in the object's property
map. For real dynamic loads -- of the kind Python uses, obviously,
but also ECMAScript -- we need to search the "environment".
This change searches the environment by looking first in the lexical
scope in the current function. If a variable exists, we will use it.
If that misses, we then look in the module scope. If a variable exists
there, we will use it. Otherwise, if the variable is used in a non-lval
position, a dynamic error will be raised ("name not declared"). If
an lval, however, we will lazily allocate a slot for it.
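The probe order described above might look roughly like this in Go. The `scope` type and `lookupDynamic` are invented names for illustration, not the actual evaluator API.

```go
package main

import "fmt"

// scope is a sketch of one link in the lexical scope chain.
type scope struct {
	vars   map[string]interface{}
	parent *scope
}

// lookupDynamic probes the lexical scope chain first, then the module
// scope; for an lval position it lazily allocates a slot instead of
// raising a "name not declared" error.
func lookupDynamic(lexical *scope, module map[string]interface{}, name string, lval bool) (interface{}, error) {
	for s := lexical; s != nil; s = s.parent {
		if v, ok := s.vars[name]; ok {
			return v, nil
		}
	}
	if v, ok := module[name]; ok {
		return v, nil
	}
	if !lval {
		return nil, fmt.Errorf("name '%s' is not declared", name)
	}
	// Lval position: lazily allocate a slot in the innermost scope.
	lexical.vars[name] = nil
	return nil, nil
}

func main() {
	module := map[string]interface{}{"region": "us-west-1"}
	fn := &scope{vars: map[string]interface{}{"x": 42}}
	v, _ := lookupDynamic(fn, module, "region", false)
	fmt.Println(v) // misses the lexical chain, hits the module scope
}
```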
Note that Python doesn't use block scoping in the same way that most
languages do. This behavior is simply achieved by Python not emitting
any lexically scoped blocks other than at the function level.
This doesn't perfectly achieve the scoping behavior, because we don't
yet bind every name in a way that they can be dynamically discovered.
The two obvious cases are class names and import names. Those will be
covered in a subsequent commit.
Also note that we are getting lucky here that class static/instance
variables aren't accessible in Python or ECMAScript "ambiently" like
they are in some languages (e.g., C#, Java); as a result, we don't need
to introduce a class scope in the dynamic lookup. Some day, when we
want to support such languages, we'll need to think about how to let
languages control the environment probe order; for instance, perhaps
the LoadDynamicExpression node can have an "environment" property.
This changes the CocoPy default load type from static to dynamic,
since we don't have enough information at compile-time to emit
fully qualified tokens. Previously, Coconut only supported dynamic
loads with object targets; however, we will need to support the full
scope search (class, module, global, etc.).
This change makes node optional in the lookupBasicType function, which is
necessary in cases where diagnostics information isn't available (such as
with configuration application). This eliminates an assert when you
fat-finger a configuration key. The associated message was missing apostrophes.
This change eliminates the need to constantly type in the environment
name when performing major commands like configuration, planning, and
deployment. It's probably due to my age; however, I keep fat-fingering
simple commands in front of investors and I am embarrassed!
In the new model, there is a notion of a "current environment", and
I have modeled it kinda sorta just like Git's notion of "current branch."
By default, the current environment is set when you `init` something.
Otherwise, there is the `coco env select <env>` command to change it.
(Running this command w/out a new <env> will show you the current one.)
The major commands `config`, `plan`, `deploy`, and `destroy` will prefer
to use the current environment, unless it is overridden by using the
--env flag. All of the `coco env <cmd> <env>` commands still require the
explicit passing of an environment which seems reasonable since they are,
after all, about manipulating environments.
As part of this, I've overhauled the aging workspace settings cruft,
which had fallen into disrepair since the initial prototype.
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one. As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
This change adds the ability to specify analyzers in two ways:
1) By listing them in the project file, for example:
analyzers:
- acmecorp/security
- acmecorp/gitflow
2) By explicitly listing them on the CLI, as a "one off":
$ coco deploy <env> \
--analyzer=acmecorp/security \
--analyzer=acmecorp/gitflow
This closes out pulumi/coconut#119.
This change introduces the basic requirements for analyzers, as per
pulumi/coconut#119. In particular, an analyzer can implement either,
or both, of the RPC methods, Analyze and AnalyzeResource. The former
is meant to check an overall deployment (e.g., to ensure it has been
signed off on) and the latter is to check individual resources (e.g.,
to ensure their properties are correct, such as enforcing style,
security, etc. rules). These run concurrently with the overall checking.
Analyzers are loaded as plugins just like providers are. The difference
is mainly in their naming ("analyzer-" prefix, rather than "resource-"),
and the RPC methods that they support.
This isn't 100% functional since we need a way to specify at the CLI
that a particular analyzer should be run, in addition to a way of
recording which analyzers certain projects should use in their manifests.
This changes a few naming things:
* Rename "husk" to "environment" (`coco env` for short).
* Rename NutPack/NutIL to CocoPack/CocoIL.
* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.
* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.
* Rename the package asset directory from nutpack/ to .coconut/.
This change adds a new Check RPC method on the provider interface,
permitting resource providers to perform arbitrary verification on
the values of properties. This is useful for validating things
that might be difficult to express in the type system, and it runs
before *any* modifications are run (so failures can be caught early
before it's too late). My favorite motivating example is verifying
that an AWS EC2 instance's AMI is available within the target region.
This resolves pulumi/coconut#107, although we aren't using this
in any resource providers just yet. I'll add a work item now for that...
This change is mostly just a rename of Moniker to URN. It does also
prefix resource URNs to have a standard URN namespace; in other words,
"urn🥥<name>", where <name> is the same as the prior Moniker.
This is a minor step that helps to prepare us for pulumi/coconut#109.
This change adds a new resource.NewUniqueHex API, that simply generates
a unique hex string with the given prefix, with a specific count of
random bytes, and optionally capped to a maximum length.
This is used in the AWS SecurityGroup resource provider to avoid name
collisions, which is especially important during replacements (otherwise
we cannot possibly create a new instance before deleting the old one).
This resolves pulumi/coconut#108.
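A minimal sketch of such a helper is shown below. The signature is an assumption for illustration; the real `resource.NewUniqueHex` may differ.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newUniqueHex appends randBytes bytes of randomness (hex-encoded) to
// prefix, and caps the result at maxLen characters when maxLen > 0.
// This is an illustrative sketch, not the actual library function.
func newUniqueHex(prefix string, randBytes, maxLen int) (string, error) {
	b := make([]byte, randBytes)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	s := prefix + hex.EncodeToString(b)
	if maxLen > 0 && len(s) > maxLen {
		s = s[:maxLen]
	}
	return s, nil
}

func main() {
	name, err := newUniqueHex("group-", 4, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // e.g. "group-" followed by 8 hex characters
}
```

Uniqueness here is probabilistic: with enough random bytes, collisions are vanishingly unlikely, which is what makes create-before-delete replacements safe.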
This change tracks which updates triggered a replacement. This enables
better output and diagnostics. For example, we now colorize those
properties differently in the output. This makes it easier to diagnose
why an unexpected resource might be getting deleted and recreated.
This change, part of pulumi/coconut#105, rearranges support for
resource replacement. The old model didn't properly account for
the cascading updates and possible replacement of dependencies.
Namely, we need to model a replacement as a creation followed by
a deletion, inserted into the overall DAG correctly so that any
resources that must be updated are updated after the creation but
prior to the deletion. This is done by inserting *three* nodes
into the graph per replacement: a physical creation step, a
physical deletion step, and a logical replacement step. The logical
step simply makes it nicer in the output (the plan output shows
a single "replacement" rather than the fine-grained outputs, unless
they are requested with --show-replace-steps). It also makes it
easier to fold all of the edges into a single linchpin node.
As part of this, the update step no longer gets to choose whether
to recreate the resource. Instead, the engine takes care of
orchestrating the replacement through actual create and delete calls.
This change introduces a new RPC function to the provider interface;
in pseudo-code:
UpdateImpact(id ID, t Type, olds PropertyMap, news PropertyMap)
(bool, PropertyMap, error)
Essentially, during the planning phase, we will consult each provider
about the nature of a proposed update. This update includes a set of
old properties and the new ones and, if the resource provider will need
to replace the property as a result of the update, it will return true;
in general, the PropertyMap will eventually contain a list of all
properties that will be modified as a result of the operation (see below).
The planning phase reacts to this by propagating the change to dependent
resources, so that they know that the ID will change (and so that they
can recalculate their own state accordingly, possibly leading to a ripple
effect). This ensures the overall DAG / schedule is ordered correctly.
This change is most of pulumi/coconut#105. The only missing piece
is to generalize replacing the "ID" property with replacing arbitrary
properties; there are hooks in here for this, but until pulumi/coconut#90
is addressed, it doesn't make sense to make much progress on this.
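In Go, the provider-side shape of this RPC might look roughly like the following. This is an approximation: the interface, the `PropertyMap` alias, and the EC2 `ami` replacement rule are illustrative assumptions, not the actual generated gRPC bindings.

```go
package main

import "fmt"

// PropertyMap is a sketch of a bag of resource property values.
type PropertyMap map[string]interface{}

// Provider is a sketch of the provider interface with the new method.
type Provider interface {
	// UpdateImpact reports whether applying news over olds forces a
	// replacement, and (eventually) which properties would change.
	UpdateImpact(id, t string, olds, news PropertyMap) (bool, PropertyMap, error)
}

// exampleProvider treats any change to the "ami" property as requiring
// replacement, mirroring the kind of rule an EC2 provider might have.
type exampleProvider struct{}

func (exampleProvider) UpdateImpact(id, t string, olds, news PropertyMap) (bool, PropertyMap, error) {
	if olds["ami"] != news["ami"] {
		return true, PropertyMap{"ami": news["ami"]}, nil
	}
	return false, nil, nil
}

func main() {
	var p Provider = exampleProvider{}
	replace, changed, _ := p.UpdateImpact("i-123", "aws:ec2/instance:Instance",
		PropertyMap{"ami": "ami-old"}, PropertyMap{"ami": "ami-new"})
	fmt.Println(replace, changed["ami"])
}
```

During planning, a `true` result is what triggers the cascading create/delete/replace node insertion described above.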
For certain update shapes, we will need to recover an ID of an already-deleted,
or soon-to-be-deleted resource; in those cases, we have a moniker but want to
serialize an ID. This change implements support for remembering/recovering them.
This change fixes a couple issues that prevented restarting a
deployment after partial failure; this was due to the fact that
unchanged resources didn't propagate IDs from old to new. This
is remedied by making unchanged a map from new to old, and making
ID propagation the first thing plan application does.
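The ID-propagation step can be sketched as follows; the `res` type and `propagateIDs` are assumptions for illustration, not the engine's actual types.

```go
package main

import "fmt"

// res is a sketch of a resource record with a provider-assigned ID.
type res struct {
	ID string
}

// propagateIDs copies IDs from old resources onto their unchanged new
// counterparts; the map is keyed new -> old, as described above.
func propagateIDs(unchanged map[*res]*res) {
	for newRes, oldRes := range unchanged {
		newRes.ID = oldRes.ID
	}
}

func main() {
	oldRes := &res{ID: "i-abc123"}
	newRes := &res{}
	propagateIDs(map[*res]*res{newRes: oldRes})
	fmt.Println(newRes.ID)
}
```

Doing this first means a restarted deployment sees every unchanged resource with its real ID, so partial failures no longer leave dangling references.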
This change does a few things:
* First and foremost, it tracks configuration variables that are
initialized, and optionally prints them out as part of the
prelude/header (based on --show-config), both in a dry-run (plan)
and in an actual deployment (apply).
* It tidies up some of the colorization and messages, and includes
nice banners like "Deploying changes:", etc.
* Fix an assertion.
* Issue a new error
"One or more errors occurred while applying X's configuration"
just to make it easier to distinguish configuration-specific
failures from ordinary ones.
* Change config keys to tokens.Token, not tokens.ModuleMember,
since it is legal for keys to represent class members (statics).
This change adds support for configuration maps.
This is a new feature that permits initialization code to come from markup,
after compilation, but before evaluation. There is nothing special with this
code as it could have been authored by a user. But it offers a convenient
way to specialize configuration settings per target husk, without needing
to write code to specialize each of those husks (which is needlessly complex).
For example, let's say we want to have two husks, one in AWS's us-west-1
region, and the other in us-east-2. From the same source package, we can
just create two husks, let's say "prod-west" and "prod-east":
prod-west.json:
{
"husk": "prod-west",
"config": {
"aws:config:region": "us-west-1"
}
}
prod-east.json:
{
"husk": "prod-east",
"config": {
"aws:config:region": "us-east-2"
}
}
Now when we evaluate these packages, they will automatically poke the
right configuration variables in the AWS package *before* actually
evaluating the CocoJS package contents. As a result, the static variable
"region" in the "aws:config" package will have the desired value.
This is obviously fairly general purpose, but will allow us to experiment
with different schemes and patterns. Also, I need to whip up support
for secrets, but that is a task for another day (perhaps tomorrow).
* Delete husks if err == nil, not err != nil.
* Swizzle the formatting padding on array elements so that the
diff modifier + or - binds more tightly to the [N] part.
* Print the un-doubly-indented padding for array element headers.
* Add some additional logging to step application (it helped).
* Remember unchanged resources even when glogging is off.
This change adds a --show-sames flag to `coco husk deploy`. This is
useful as I'm working on updates, to show what resources haven't changed
during a deployment.
This change checkpoints deployments properly. That is, even in the
face of partial failure, we should keep the huskfile up to date. This
accomplishes that by tracking the state during plan application.
There are still ways in which this can go wrong, however. Please see
pulumi/coconut#101 for additional thoughts on what we might do here
in the future to make checkpointing more robust in the face of failure.
This command is handy for development, so I whipped up a quick implementation.
All it does is print all known husks with their associated deployment time
and resource count (if any, or "n/a" for initialized husks with no deployments).
This change recognizes TextUnmarshaler during object mapping, and
will defer to it when we have a string but are assigning to a
non-string target that implements the interface.
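The deferral can be sketched like so (illustrative types; the real object mapper is more general and reflection-based):

```go
package main

import (
	"encoding"
	"fmt"
)

// Region is a non-string target that knows how to parse itself from text.
type Region struct{ Name string }

func (r *Region) UnmarshalText(text []byte) error {
	r.Name = string(text)
	return nil
}

// assignString defers to encoding.TextUnmarshaler when the target
// implements it; otherwise the string cannot be assigned.
func assignString(target interface{}, s string) error {
	if u, ok := target.(encoding.TextUnmarshaler); ok {
		return u.UnmarshalText([]byte(s))
	}
	return fmt.Errorf("cannot assign string to %T", target)
}

func main() {
	var r Region
	if err := assignString(&r, "us-west-1"); err != nil {
		panic(err)
	}
	fmt.Println(r.Name)
}
```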
As part of pulumi/coconut#94 -- adding targeting capabilities -- I've
decided to (yet again) reorganize the deployment commands a bit. This
makes targets ("husks") more of a first class thing.
Namely, you must first initialize a husk before using it:
$ coco husk init staging
Coconut husk 'staging' initialized; ready for deployments
Eventually, this is when you will be given a choice to configure it.
Afterwards, you can perform deployments. The first one is like a create,
but subsequent ones just figure out the right thing to do and do it:
$ ... make some changes ...
$ coco husk deploy staging
... standard deployment progress spew ...
Finally, should you want to teardown an entire environment:
$ coco husk destroy staging
... standard deletion progress spew for all resources ...
Coconut husk 'staging' has been destroyed!
This change partially implements pulumi/coconut#94, by adding the
ability to name targets during creation and reuse those names during
deletion and update. This simplifies the management of deployment
records, checkpoints, and snapshots.
I've opted to call these things "husks" (perhaps going overboard with
joy after our recent renaming). The basic idea is that for any
executable Nut that will be deployed, you have a nutpack/ directory
whose layout looks roughly as follows:
nutpack/
bin/
Nutpack.json
... any other compiled artifacts ...
husks/
... one snapshot per husk ...
For example, if we had a stage and prod husk, we would have:
nutpack/
bin/...
husks/
prod.json
stage.json
In the prod.json and stage.json files, we'd have the most recent
deployment record for that environment. These would presumably get
checked in and versioned along with the overall Nut, so that we
can use Git history for rollbacks, etc.
The create, update, and delete commands look in the right place for
these files automatically, so you don't need to manually supply them.
* Eliminate some superfluous "\n"s.
* Remove the redundant properties stored on AWS resources.
* Compute array diff lengths properly (+1).
* Display object property changes from null to non-null as
adds; and from non-null to null as deletes.
* Fix a boolean expression from ||s to &&s. (Bone-headed).
* Print a pretty message if the plan has nothing to do:
"info: nothing to do -- resources are up to date"
* Add an extra validation step after reading in a snapshot,
so that we detect more errors sooner. For example, I've
fed in the wrong file several times, and it just chugs
along as though it were actually a snapshot.
* Skip printing nulls in most plan outputs. These just
clutter up the output.
This change detects duplicate object names (monikers) and issues a nice
error message with source context included. For example:
index.ts(260,22): error MU2006: Duplicate objects with the same name:
prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group
The prior code asserted and failed abruptly, whereas this actually points
us to the offending line of code:
let group1 = new aws.ec2.SecurityGroup("group", { ... });
let group2 = new aws.ec2.SecurityGroup("group", { ... });
^^^^^^^^^^^^^^^^^^^^^^^^^
This change overhauls the way we do object monikers. The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions. The new approach mixes some amount of "automatic scoping"
plus some "explicit naming." Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.
Each moniker has four parts:
<Namespace>::<AllocModule>::<Type>::<Name>
wherein each element is the following:
<Namespace> The namespace being deployed into
<AllocModule> The module in which the object was allocated
<Type> The type of the resource
<Name> The assigned name of the resource
The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.
The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created. In the future, we may wish to
extend this to also track the module under evaluation. (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)
The <Name> warrants more discussion. The resource provider is consulted
via a new gRPC method, Name, that fetches the name. How the provider
does this is entirely up to it. For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`); and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).
This should overall produce better results with respect to moniker
collisions, ability to match resources, and the usability of the system.
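Composing the four parts is mechanically simple; a minimal sketch (the engine's actual implementation may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// newMoniker joins the four moniker parts with the "::" delimiter:
// <Namespace>::<AllocModule>::<Type>::<Name>.
func newMoniker(ns, allocModule, typ, name string) string {
	return strings.Join([]string{ns, allocModule, typ, name}, "::")
}

func main() {
	m := newMoniker("prod", "ec2instance:index",
		"aws:ec2/securityGroup:SecurityGroup", "group")
	fmt.Println(m)
}
```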
This change is a first whack at implementing updates.
Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order. For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies). These are just special cases of the more
general idea of performing DAG operations in dependency order.
Updates must work in terms of this more general notion. For example:
* It is an error to delete a resource while another refers to it; thus,
resources are deleted after deleting dependents, or after updating
dependent properties that reference the resource to new values.
* It is an error to depend on a resource before it is created;
thus, resources must be created before dependents are created, and/or
before updates to existing resource properties that would cause them
to refer to the new resource.
Of course, all of this is tangled up in a graph of dependencies. As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.
To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
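The dependency-ordered scheduling described above can be illustrated with a standard Kahn's-algorithm topological sort. This is a simplified sketch: the real planGraph is internal and richer, and the node names here are invented.

```go
package main

import "fmt"

// topoSort returns nodes in dependency order (dependencies before
// dependents) using Kahn's algorithm; deps maps each node to the nodes
// it depends on. A deletion plan simply walks the result in reverse.
func topoSort(nodes []string, deps map[string][]string) ([]string, error) {
	indegree := map[string]int{}
	dependents := map[string][]string{}
	for _, n := range nodes {
		indegree[n] = len(deps[n])
		for _, d := range deps[n] {
			dependents[d] = append(dependents[d], n)
		}
	}
	var queue, order []string
	for _, n := range nodes {
		if indegree[n] == 0 {
			queue = append(queue, n)
		}
	}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, m := range dependents[n] {
			if indegree[m]--; indegree[m] == 0 {
				queue = append(queue, m)
			}
		}
	}
	if len(order) != len(nodes) {
		return nil, fmt.Errorf("dependency cycle detected")
	}
	return order, nil
}

func main() {
	nodes := []string{"instance", "securityGroup"}
	deps := map[string][]string{"instance": {"securityGroup"}}
	create, _ := topoSort(nodes, deps)
	fmt.Println(create) // creation order: dependencies first
}
```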