This includes a few changes:
* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.
* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.
* The CLI is renamed from lumi to pulumi.
This makes a few tweaks to get the integration tests passing:
* Add `runtime: nodejs` to the minimal example's `Lumi.yaml` file.
* Remove usage of `@lumi/lumirt { printf }` and just use `console.log`.
* Remove calls to `lumijs` in the integration test framework and
the minimal example's package.json. Instead, we just run
`yarn run build`, which itself internally just invokes `tsc`.
* Add package validation logic and eliminate the pkg/compiler/metadata
library, in favor of the simpler code in pkg/engine.
* Simplify the Node.js langhost plugin CLI: take a positional
argument rather than a mix of required and optional --flags.
* Use a default path of "." if the program path isn't provided. This
is a legal scenario if you've passed a pwd and just want to load
the package's default module ("./index.js" or whatever main says).
* Add an executable script, lumi-langhost-nodejs, that fires up the
`bin/cmd/langhost/index.js` file to serve the Node.js language plugin.
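The default-path behavior described above can be sketched roughly as follows. This is a hypothetical helper, not the actual langhost code; the function name and argument handling are assumptions:

```go
package main

import (
	"fmt"
	"os"
)

// programPath returns the program path from the CLI arguments,
// defaulting to "." when no argument was provided. With a default of
// ".", passing only a pwd loads the package's default module
// ("./index.js" or whatever main says).
func programPath(args []string) string {
	if len(args) < 1 || args[0] == "" {
		return "."
	}
	return args[0]
}

func main() {
	fmt.Println(programPath(os.Args[1:]))
}
```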
We are renaming Lumi to Pulumi Fabric. This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
This change enables parallelism for our tests.
It also introduces a `test_core` Makefile target to just run the
core engine tests, and not the providers, since they take a long time.
This is intended only as part of the inner developer loop.
This change adds the ability to specify analyzers in two ways:
1) By listing them in the project file, for example:
       analyzers:
           - acmecorp/security
           - acmecorp/gitflow
2) By explicitly listing them on the CLI, as a "one off":
       $ coco deploy <env> \
           --analyzer=acmecorp/security \
           --analyzer=acmecorp/gitflow
This closes out pulumi/coconut#119.
This changes a few naming things:
* Rename "husk" to "environment" (`coco env` for short).
* Rename NutPack/NutIL to CocoPack/CocoIL.
* Rename the primary Nut.yaml/json project file to Coconut.yaml/json.
* Rename the compiled Nutpack.yaml/json file to Cocopack.yaml/json.
* Rename the package asset directory from nutpack/ to .coconut/.
This change generalizes the support for default modules (added as
part of marapongo/mu#57), to use an "alias" table. This is just a
table mapping alternative module names to the real defining module name.
The default module is now just represented as a mapping from the
special default module token, ".default", to the actual defining
module. This generalization was necessary to support Node.js-style
alternative names for modules, like "lib" mapping to "lib/index".
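The alias table described above can be sketched as a simple map, with the special ".default" token pointing at the defining module. The module names and the `resolveModule` helper here are illustrative assumptions, not the actual implementation:

```go
package main

import "fmt"

// aliases maps alternative module names to the real defining module
// name. ".default" is the special token for the package's default
// module; "lib" illustrates a Node.js-style alternative name.
var aliases = map[string]string{
	".default": "index",
	"lib":      "lib/index",
}

// resolveModule returns the defining module for a name, following the
// alias table when an entry exists; otherwise the name stands as-is.
func resolveModule(name string) string {
	if real, ok := aliases[name]; ok {
		return real
	}
	return name
}

func main() {
	fmt.Println(resolveModule(".default")) // index
	fmt.Println(resolveModule("lib"))      // lib/index
}
```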
This adds support for the sugared "*" semver, which is a shortcut for
">=0.0.0" (in other words, any version matches). This is a minor part
of marapongo/mu#18. I'm just doing it now as a convenience for dev
scenarios, as we start to do real packages and dependencies between them.
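The desugaring itself is tiny; a minimal sketch, assuming a hypothetical `normalizeVersionSpec` helper (not the actual dependency-resolution code):

```go
package main

import "fmt"

// normalizeVersionSpec desugars the "*" version spec into ">=0.0.0",
// which matches any version; all other specs pass through unchanged.
func normalizeVersionSpec(spec string) string {
	if spec == "*" {
		return ">=0.0.0"
	}
	return spec
}

func main() {
	fmt.Println(normalizeVersionSpec("*"))      // >=0.0.0
	fmt.Println(normalizeVersionSpec(">=1.2.0")) // >=1.2.0
}
```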
The old method of specifying a default module for a package was using
a bit on the module definition itself. The new method is to specify the
module name as part of the package definition itself.
The new approach makes more sense for a few reasons. 1) It doesn't
make sense to have more than one "default" for a given package, and yet
the old model didn't prevent this; the new one prevents it by construction.
2) The defaultness of a module is really an aspect of the package, not the
module, which is generally unaware of the package containing it.
Finally, I'm auditing the code for nondeterministic map
enumerations, and this came up, which simply pushed me over the edge.
Now that we have introduced a full blown token map -- new as of just
a few changes ago -- we can start using it for all of our symbol binding.
This also addresses some order-dependent issues, like intra-module
references looking up symbols that have been registered in the token map
but not necessarily stored in the relevant parent symbols just yet.
Plus, frankly, it's much simpler and uses a hashmap lookup instead of
a fairly complex recursive tree walk.
I've kept the tree walk case, however, to improve diagnostics upon
failure. This allows us to tell developers, for example, that the reason
a binding failed was due to a missing package.
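The binding flow described above can be sketched as a flat map lookup with a diagnostic fallback. Everything here, from the token format to the `lookup` helper, is an illustrative assumption rather than the real binder:

```go
package main

import "fmt"

// Symbol is a placeholder for a bound compiler symbol.
type Symbol struct{ Token string }

// tokenMap is the flat token-to-symbol map used for fast binding:
// one hashmap lookup instead of a recursive tree walk.
var tokenMap = map[string]*Symbol{
	"pkg:index/Point": {Token: "pkg:index/Point"},
}

// lookup binds a token via the map; on a miss, the (elided) diagnostic
// tree walk would run to explain why binding failed, for example
// because of a missing package.
func lookup(token string) (*Symbol, error) {
	if sym, ok := tokenMap[token]; ok {
		return sym, nil
	}
	return nil, fmt.Errorf("binding failed for %q (diagnostic walk would run here)", token)
}

func main() {
	if sym, err := lookup("pkg:index/Point"); err == nil {
		fmt.Println(sym.Token)
	}
}
```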
This change rearranges the old way we dealt with URLs. In the old system,
virtually every reference to an element, including types, was fully qualified
with a possible URL-like reference. (The old pkg/tokens/Ref type.) In the
new model, only dependency references are URL-like. All maps and references
within the MuPack/MuIL format are token and name based, using the new
pkg/tokens/Token and pkg/tokens/Name family of related types.
As such, this change renames Ref to PackageURLString, and RefParts to
PackageURL. (The convenient name is given to the thing with "more" structure,
since we prefer to deal with structured types and not strings.) It moves
out of the pkg/tokens package and into pkg/pack, since it is exclusively
there to support package resolution. Similarly, the Version, VersionSpec,
and related types move out of pkg/tokens and into pkg/pack.
This change cleans up the various binder, package, and workspace logic.
Most of these changes are a natural fallout of this overall restructuring,
although in a few places we remained sloppy about the difference between
Token, Name, and URL. Now the type system supports these distinctions and
forces us to be more methodical about any conversions that take place.
I was sloppy in my use of names versus tokens in the original AST.
Now that we're actually binding things to concrete symbols, etc., we
need to be more precise. In particular, names are just identifiers
that must be "interpreted" in a given lexical context for them to
make any sense; whereas, tokens stand alone and can be resolved without
context other than the set of imported packages, modules, and overall
module structure. As such, names are much simpler than tokens.
As explained in the comments, tokens.Names are simple identifiers:

    Name = [A-Za-z_][A-Za-z0-9_]*

and tokens.QNames are fully qualified identifiers delimited by "/":

    QName = [ <Name> "/" ]* <Name>
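These two grammars translate directly into regular expressions; a quick sketch for validation (the variable names are mine, not the pkg/tokens API):

```go
package main

import (
	"fmt"
	"regexp"
)

// Patterns matching the Name and QName grammars above.
var (
	nameRE  = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)
	qnameRE = regexp.MustCompile(`^([A-Za-z_][A-Za-z0-9_]*/)*[A-Za-z_][A-Za-z0-9_]*$`)
)

func main() {
	fmt.Println(nameRE.MatchString("index"))      // true
	fmt.Println(qnameRE.MatchString("lib/index")) // true
	fmt.Println(nameRE.MatchString("lib/index"))  // false: Names have no "/"
}
```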
The legal grammar for a token depends on the subset of symbols that
token is meant to represent. However, the most general case, that
accepts all specializations of tokens, is roughly as follows:
    Token = <Name> |
            <PackageName>
            [ ":" <ModuleName>
              [ "/" <ModuleMemberName>
                [ "." <ClassMemberName> ]
              ]
            ]

where:

    PackageName      = <QName>
    ModuleName       = <QName>
    ModuleMemberName = <Name>
    ClassMemberName  = <Name>
Please refer to the comments in pkg/tokens/tokens.go for more details.
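The general token shape can be split apart roughly as below. This is an illustrative sketch, not the real pkg/tokens parser: because ModuleName is a QName, the grammar is ambiguous without knowing the symbol kind, so this sketch assumes the last "/" before any "." separates the module member:

```go
package main

import (
	"fmt"
	"strings"
)

// TokenParts holds the components of a fully general token.
type TokenParts struct {
	Package, Module, Member, ClassMember string
}

// parseToken splits a token of the form
//   <PackageName> [ ":" <ModuleName> [ "/" <ModuleMemberName> [ "." <ClassMemberName> ] ] ]
func parseToken(tok string) TokenParts {
	var p TokenParts
	// The package is everything before the first ":".
	i := strings.Index(tok, ":")
	if i < 0 {
		return TokenParts{Package: tok}
	}
	p.Package, tok = tok[:i], tok[i+1:]
	// The class member is everything after the first "." (Names contain no dots).
	if j := strings.Index(tok, "."); j >= 0 {
		tok, p.ClassMember = tok[:j], tok[j+1:]
	}
	// Assume the last "/" separates the module member from the module QName.
	if j := strings.LastIndex(tok, "/"); j >= 0 {
		p.Module, p.Member = tok[:j], tok[j+1:]
	} else {
		p.Module = tok
	}
	return p
}

func main() {
	p := parseToken("scenarios/point:index/Point.add")
	fmt.Println(p.Package, p.Module, p.Member, p.ClassMember)
	// scenarios/point index Point add
}
```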
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler. A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.
The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:
* Split up the notion of symbols and tokens, resulting in:
- pkg/symbols for true compiler symbols (bound nodes)
- pkg/tokens for name-based tokens, identifiers, constants
* Several packages move underneath pkg/compiler:
- pkg/ast becomes pkg/compiler/ast
- pkg/errors becomes pkg/compiler/errors
- pkg/symbols becomes pkg/compiler/symbols
* pkg/ast/... becomes pkg/compiler/legacy/ast/...
* pkg/pack/ast becomes pkg/compiler/ast.
* pkg/options goes away, merged back into pkg/compiler.
* All binding functionality moves underneath a dedicated
package, pkg/compiler/binder. The legacy.go file contains
cruft that will eventually go away, while the other files
represent a halfway point between new and old, but are
expected to stay roughly in the current shape.
* All parsing functionality is moved underneath a new
pkg/compiler/metadata namespace, and we adopt new terminology
"metadata reading" since real parsing happens in the MetaMu
compilers. Hence, Parser has become metadata.Reader.
* In general, phases of the compiler no longer share access to
the actual compiler.Compiler object. Instead, shared state is
moved to the core.Context object underneath pkg/compiler/core.
* Dependency resolution during binding has been rewritten to
the new model, including stashing bound package symbols in the
context object, and detecting import cycles.
* Compiler construction does not take a workspace object. Instead,
creation of a workspace is entirely hidden inside of the compiler's
constructor logic.
* There are three Compile* functions on the Compiler interface, to
support different styles of invoking compilation: Compile() auto-
detects a Mu package, based on the workspace; CompilePath(string)
loads the target as a Mu package and compiles it, regardless of
the workspace settings; and, CompilePackage(*pack.Package) will
compile a pre-loaded package AST, again regardless of workspace.
* Delete the _fe, _sema, and parsetree phases. They are no longer
relevant and the functionality is largely subsumed by the above.
...and so very much more. I'm surprised I ever got this to compile again!
This change introduces a new visitation API to the new MuIL AST.
The ast.Walk API takes an ast.Visitor implementation and walks the
tree in depth-first order, invoking the visitor along the way.
The visitor gets to choose whether to continue visitation (by returning
a non-nil visitor object), or to stop it (by returning nil). The
visitation will proceed with that returned visitor, so that a visitor
can "swap out" the visitor used for child nodes if needed.
At the end, the PostVisit function is called, for any clean up logic.
Finally, the ast.Inspector type is available as a simple way of consing
up visitors simply using a function that returns a bool indicating
whether visitation should continue.
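The visitation API described above can be sketched as follows. The node shape and exact method names are assumptions for illustration; the real API lives in the new MuIL ast package:

```go
package main

import "fmt"

// Node is a minimal AST node for this sketch.
type Node struct {
	Kind     string
	Children []*Node
}

// Visitor mirrors the described API: Visit returns the visitor to use
// for the children (nil stops descent, and a different visitor "swaps
// out" the one used below this node); PostVisit runs clean up logic.
type Visitor interface {
	Visit(n *Node) Visitor
	PostVisit(n *Node)
}

// Walk traverses the tree in depth-first order, invoking the visitor
// along the way.
func Walk(v Visitor, n *Node) {
	if next := v.Visit(n); next != nil {
		for _, c := range n.Children {
			Walk(next, c)
		}
	}
	v.PostVisit(n)
}

// Inspector adapts a plain function into a Visitor; returning true
// continues visitation, false stops it.
type Inspector func(*Node) bool

func (f Inspector) Visit(n *Node) Visitor {
	if f(n) {
		return f
	}
	return nil
}

func (f Inspector) PostVisit(n *Node) {}

func main() {
	root := &Node{Kind: "Module", Children: []*Node{{Kind: "Class"}, {Kind: "Export"}}}
	count := 0
	Walk(Inspector(func(n *Node) bool { count++; return true }), root)
	fmt.Println(count) // 3
}
```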
This change tracks the set of imported modules in the ast.Module
structure. Although we can in principle gather up all imports simply
by looking through the fully qualified names, that's slightly hokey;
and furthermore, to properly initialize all modules, we need to know
in which order to do it (in case there are dependencies). I briefly
considered leaving it up to MetaMu compilers to inject the module
initialization calls explicitly -- for infinite flexibility and perhaps
greater compatibility with the source languages -- however, I'd much
prefer that all Mu code use a consistent module initialization story.
Therefore, MetaMus declare the module imports, in order, and we will
evaluate the initializers accordingly.
This change makes considerable progress on the `mu describe` command;
the only thing remaining to be implemented now is full IL printing. It
now prints the full package/module structure.
For example, to print the set of exports from our scenarios/point test:
    $ mujs tools/mujs/tests/output/scenarios/point/ | mu describe - -e
    package "scenarios/point" {
        dependencies []
        module "index" {
            class "Point" [public] {
                method "add": (other: any): any
                property "x" [public, readonly]: number
                property "y" [public, readonly]: number
                method ".ctor": (x: number, y: number): any
            }
        }
    }
This is just pretty-printed output, but it is coming in handy for debugging.
The NewExpression AST node type was missing a JSON annotation on
its Type field, leading to decoding errors.
Now, with this, the full suite of MuJS test cases can be unmarshaled
into fully populated MuPack and MuIL structures.
This change overhauls the approach to custom decoding. Instead of decoding
the parts of the struct that are "trivial" in one pass, and then patching up
the structure afterwards with custom decoding, the decoder itself understands
the notion of custom decoder functions.
First, the general purpose logic has moved out of pkg/pack/encoding and into
a new package, pkg/util/mapper. Most functions are now members of a new top-
level type, Mapper, which may be initialized with custom decoders. This
is a map from target type to a function that can decode objects into it.
Second, the AST-specific decoding logic is rewritten to use it. All AST nodes
are now supported, including definitions, statements, and expressions. The
overall approach here is to simply define a custom decoder for any interface
type that will occur in a node field position. The mapper, upon encountering
such a type, will consult the custom decoder map; if a decoder is found, it
will be used, otherwise an error results. This decoder then needs to switch
on the type discriminated kind field that is present in the metadata, creating
a concrete struct of the right type, and then converting it to the desired
interface type. Note that, subtly, interface types used only for "marker"
purposes don't require any custom decoding, because they do not appear in
field positions and therefore won't be encountered during the decoding process.
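The custom decoder idea can be sketched like this. The type names, the decoder signature, and the string-keyed decoder table are all illustrative assumptions; the real Mapper in pkg/util/mapper keys decoders by target type:

```go
package main

import "fmt"

// Statement is a marker-style interface that appears in node field
// positions, and therefore needs a custom decoder.
type Statement interface{ statement() }

// Block is one concrete Statement kind.
type Block struct{ Kind string }

func (*Block) statement() {}

// Decoder decodes a weakly typed object into a target value.
type Decoder func(obj map[string]interface{}) (interface{}, error)

// decodeStatement is the custom decoder for the Statement interface:
// it switches on the "kind" discriminator present in the payload to
// create a concrete struct of the right type.
func decodeStatement(obj map[string]interface{}) (interface{}, error) {
	switch obj["kind"] {
	case "Block":
		return &Block{Kind: "Block"}, nil
	default:
		return nil, fmt.Errorf("unrecognized statement kind: %v", obj["kind"])
	}
}

// customDecoders maps a target type to its decoder; the mapper
// consults this table whenever it encounters an interface-typed field.
var customDecoders = map[string]Decoder{
	"Statement": decodeStatement,
}

func main() {
	stmt, err := customDecoders["Statement"](map[string]interface{}{"kind": "Block"})
	if err == nil {
		fmt.Println(stmt.(*Block).Kind) // Block
	}
}
```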
This change splits up the decoding logic into multiple files, to
mirror the AST package structure that the functions correspond to.
Additionally, there is now less "loose" reflection and dynamic lookup
code scattered throughout; it is now consolidated into the decoder,
with a set of "generic" functions like `fieldObject`, `asString`, etc.
This change implements custom class member decoding. As with module methods,
the function body AST nodes remain nil, as custom AST decoding isn't yet done.
This change begins to implement some of the AST custom decoding, beneath
the Package's Module map. In particular, we now unmarshal "one level"
beyond this, populating each Module's ModuleMember map. This includes
Classes, Exports, ModuleProperties, and ModuleMethods. The Class AST's
Members have been marked "custom", in addition to Block's Statements,
because they require kind-directed decoding. But Exports and
ModuleProperties can be decoded entirely using the tag-directed decoding
scheme. Up next, custom decoding of ClassMembers. At that point, all
definition-level decoding will be done, leaving MuIL's ASTs.
This fixes a few things so that MuPackages now unmarshal:
* Mark Module.Members as requiring "custom" decoding. This is required
because that's the first point in the tree that leverages polymorphism.
Everything "above" this unmarshals just fine (e.g., package+modules).
* As such, stop marking Package.Modules as "custom".
* Add the Kind field to the Node. Although we won't use this for type
discrimination in the same way, since Go gives us RTTI on the structs,
it is required for unmarshaling (to avoid "unrecognized fields" errors)
and it's probably handy to have around for logging, messages, etc.
* Mark Position.Line and Column with "json" annotations so that they
unmarshal correctly.
This change adjusts pointers correctly when unmarshaling into target
pointer types. This handles arrays and maps of pointer elements, in
addition to consolidating existing logic for marshaling into a
destination top-level pointer as well.
This change eliminates boilerplate decoding logic in all the different
data structures, and instead uses a new tag-directed decoding scheme.
This works a lot like the JSON deserializers, in that it recognizes the
`json:"name"` tags, except that we permit annotation of fields that
require custom deserialization, as `json:"name,custom"`. The existing
`json:"name,omitempty"` tag is recognized for optional fields.
This adds basic custom decoding for the MuPack metadata section of
the incoming JSON/YAML. Because of the type discriminated union nature
of the incoming payload, we cannot rely on the simple built-in JSON/YAML
unmarshaling behavior. Note that for the metadata section -- what is
in this checkin -- we could have, but the IL AST nodes are problematic.
(To know what kind of structure to create requires inspecting the "kind"
field of the IL.) We will use a reflection-driven walk of the target
structure plus a weakly typed deserialized map[string]interface{}, as
is fairly customary in Go for scenarios like this (though good libraries
seem to be lacking in this area...).