* Specify MinCount/MaxCount when creating an EC2 instance. These
are required properties on the RunInstances API (a sketch follows this list).
* Only attempt to unmarshal egressArray/ingressArray when non-nil.
* Remember the context object on the instanceProvider.
* Move the moniker and object maps into the shared context object.
* Marshal object monikers as the resource IDs to which they refer,
since monikers are useless on "the other side" of the RPC boundary.
This ensures that, for example, the AWS provider gets IDs it can use.
* Add some paranoia assertions.
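For the first bullet, here is a minimal sketch of that call's shape with
aws-sdk-go -- the AMI, instance type, and counts are placeholders:

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func createInstance(svc *ec2.EC2) error {
        _, err := svc.RunInstances(&ec2.RunInstancesInput{
            ImageId:      aws.String("ami-xxxxxxxx"), // placeholder
            InstanceType: aws.String("t2.micro"),     // placeholder
            MinCount:     aws.Int64(1),               // required
            MaxCount:     aws.Int64(1),               // required
        })
        return err
    }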
This commit includes a basic AWS resource provider. Mostly it is just
scaffolding; however, it also includes prototype implementations of the
EC2 instance and security group resource creation operations.
This change implements `mu apply`, by driving compilation, evaluation,
planning, and then walking the plan and applying it. This is the bulk
of marapongo/mu#21, except that there's a ton of testing/hardening to
perform, in addition to things like progress reporting.
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.
In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them. It will be
a policy decision in server scenarios how much state to share and
between whom. This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.
Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).
If we find a plugin, we will load it as a child process.
The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n"). This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).
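A minimal sketch of the loader's half of this handshake, with illustrative
names (the real code of course also wires up STDERR, paths, and teardown):

    import (
        "bufio"
        "os/exec"
        "strings"
    )

    func loadPlugin(bin string) (string, error) {
        cmd := exec.Command(bin) // e.g. "mu-ressrv-aws_ec2" found on $PATH
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return "", err
        }
        if err := cmd.Start(); err != nil {
            return "", err
        }
        // The first "\n"-terminated line of STDOUT is the port the plugin chose.
        line, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(line), nil
    }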
Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client. The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls. For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
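For instance, the Create wrapper might look roughly like this; murpc and its
message types are assumed names for the generated gRPC stubs, not the actual
API:

    func (p *provider) Create(res *resource.Resource) (resource.ID, error) {
        resp, err := p.client.Create(p.ctx, &murpc.CreateRequest{
            Type:       string(res.Type()),
            Properties: res.PropertyValues(), // assumed marshaling helper
        })
        if err != nil {
            return "", err // catastrophic for now; no structured errors yet
        }
        return resource.ID(resp.Id), nil
    }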
This change adds a flag to `plan` so that we can create deletion plans:
    $ mu plan --delete
This will have an equivalent in the `apply` command, achieving the ability
to delete entire sets of resources altogether (see marapongo/mu#58).
This change introduces object monikers. These are unique, serializable
names that refer to resources created during the execution of a MuIL
program. They are pretty darned ugly at the moment, but at least they
serve their desired purpose. I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being. The names right now are
particularly sensitive to simple refactorings.
This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable. I'd prefer to do that with a bit of
experience under our belts first.
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.
It contains the following key abstractions (a rough Go sketch follows the list):
* resource.Provider is a wrapper around the CRUD operations exposed by
underlying resource plugins. It will eventually defer to resource.Plugin,
which itself defers -- over an RPC interface -- to the actual plugin, one
per package exposing resources. The provider will also understand how to
load, cache, and overall manage the lifetime of each plugin.
* resource.Resource is the actual resource object. This is created from
the overall evaluation object graph, but is simplified. It contains only
serializable properties, for example. Inter-resource references are
translated into serializable monikers as part of creating the resource.
* resource.Moniker is a serializable string that uniquely identifies
a resource in the Mu system. This is in contrast to resource IDs, which
are generated by resource providers and generally opaque to the Mu
system. See marapongo/mu#69 for more information about monikers and some
of their challenges (namely, designing a stable algorithm).
* resource.Snapshot is a "snapshot" taken from a graph of resources. This
is a transitive closure of state representing one possible configuration
of a given environment. This is what plans are created from. Eventually,
two snapshots will be diffable, in order to perform incremental updates.
One way of thinking about this is that a snapshot of the old world's state
is advanced, one step at a time, until it reaches a desired snapshot of
the new world's state.
* resource.Plan is a plan for carrying out desired CRUD operations on a target
environment. Each plan consists of zero-to-many Steps, each of which has
a CRUD operation type, a resource target, and a next step. This is an
enumerator because it is possible the plan will evolve -- and introduce new
steps -- as it is carried out (hence, the Next() method). At the moment, this
is linearized; eventually, we want to make this more "graph-like" so that we
can exploit available parallelism within the dependencies.
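Here is a rough, hedged sketch of how these abstractions might hang together;
the actual signatures differ, and details like options and diagnostics are
elided:

    type Moniker string // stable, serializable resource name
    type ID string      // provider-generated; opaque to the Mu system

    type Resource interface {
        Moniker() Moniker
        Properties() map[string]interface{} // serializable properties only
    }

    type Provider interface {
        Create(res Resource) (ID, error)
        Read(id ID) (Resource, error)
        Update(id ID, olds Resource, news Resource) error
        Delete(id ID) error
    }

    type Snapshot interface {
        Resources() []Resource // a transitive closure of resource state
    }

    type Plan interface {
        Steps() Step // the first step, if any (nil for an empty plan)
    }

    type Step interface {
        Op() string         // the CRUD operation type
        Resource() Resource // the resource target
        Next() Step         // nil once the (possibly evolving) plan is done
    }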
There are tons of TODOs remaining. However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.
This is part of marapongo/mu#38 and marapongo/mu#41.
This change more accurately implements ECMAScript prototype chains.
This includes using the prototype chain to look up properties when
necessary, and copying them down upon writes.
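In evaluator terms, lookup and copy-down behave roughly like this (names
illustrative):

    type Value interface{} // stands in for the evaluator's real value type

    type Object struct {
        properties map[string]Value // own properties
        proto      *Object          // the prototype link, or nil
    }

    // Get walks the prototype chain, innermost object first.
    func (o *Object) Get(key string) (Value, bool) {
        for cur := o; cur != nil; cur = cur.proto {
            if v, ok := cur.properties[key]; ok {
                return v, true
            }
        }
        return nil, false
    }

    // Set copies the property down onto the object itself, shadowing any
    // entry further up the prototype chain.
    func (o *Object) Set(key string, v Value) {
        o.properties[key] = v
    }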
This still isn't 100% faithful -- for example, classes and
constructor functions should be represented as real objects with
the prototype link -- so that examples like those found here will
work: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/constructor.
I've updated marapongo/mu#70 with additional details about this.
I'm sure we'll be forced to fix this as we encounter more "dynamic"
JavaScript. (In fact, it would be interesting to start running the
pre-ES6 output of TypeScript through the compiler as test cases.)
See http://www.ecma-international.org/ecma-262/6.0/#sec-objects
for additional details on the prototype chaining semantics.
This change redoes the way module exports are represented. The old
mechanism -- although laudable for its attempt at consistency -- was
wrong. For example, consider this case:
    let v = 42;
    export { v };
The old code would silently add *two* members, both with the name "v",
one of which would be dropped since the entries in the map collided.
It would be easy enough just to detect collisions, and update the
above to mark "v" as public, when the export was encountered. That
doesn't work either, as the following example demonstrates:
    let v = 42;
    export { v as w };
    let x = w; // error!
This demonstrates:
* Exporting "v" under a different name, "w", to consumers of the
module. In particular, it should not be possible for module
consumers to access the member through the name "v".
* An inability to access the exported name "w" from within the
module itself. This is solely for external consumption.
Because of this, we will use an export table approach. The exports
live alongside the members, and we are smart about when to consult
the export table, versus the member table, during name binding.
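Concretely, the binder can keep the two tables side by side, along these
lines (names illustrative):

    type Member struct{ name string } // a module-level definition

    type Export struct {
        name   string  // the external name, e.g. "w"
        member *Member // the internal member it refers to, e.g. "v"
    }

    type Module struct {
        members map[string]*Member // consulted for intra-module binding
        exports map[string]*Export // consulted for external consumers only
    }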
This change adds a --dot option to the eval command, which will simply
output the MuGL graph using the DOT language. This allows you to use
tools like Graphviz to inspect the resulting graph, including using the
`dot` command to generate images (like PNGs and whatnot).
For example, the simple MuGL program:
    class C extends mu.Resource {...}
    class B extends mu.Resource {...}
    class A extends mu.Resource {
        private b: B;
        private c: C;
        constructor() {
            this.b = new B();
            this.c = new C();
        }
    }
    let a = new A();
Results in the following DOT file, from `mu eval --dot`:
    strict digraph {
        Resource0 [label="A"];
        Resource0 -> {Resource1 Resource2}
        Resource1 [label="B"];
        Resource2 [label="C"];
    }
Eventually the auto-generated ResourceN identifiers will go away in
favor of using true object monikers (marapongo/mu#76).
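The emission itself can stay dead simple; something along these lines, with
the graph interfaces assumed:

    import (
        "fmt"
        "io"
        "strings"
    )

    type Vertex interface {
        ID() string
        Label() string
        Outs() []Vertex
    }

    func printDot(w io.Writer, vertices []Vertex) {
        fmt.Fprintln(w, "strict digraph {")
        for _, v := range vertices {
            fmt.Fprintf(w, "    %v [label=%q];\n", v.ID(), v.Label())
            if outs := v.Outs(); len(outs) > 0 {
                ids := make([]string, len(outs))
                for i, out := range outs {
                    ids[i] = out.ID()
                }
                fmt.Fprintf(w, "    %v -> {%v}\n", v.ID(), strings.Join(ids, " "))
            }
        }
        fmt.Fprintln(w, "}")
    }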
This adds a `mu verify` command that simply runs the verification
pass against a MuPackage and its MuIL. This is handy for compiler
authors to verify that the right stuff is getting emitted.
This is pretty worthless, but will help me debug some issues locally.
Eventually we want MuGL to be fully serializable, including the option
to emit DOT files.
During demos of the CLI, it began to get confusing to describe how
`mu compile` was different from, say, compiling MuJS source into the
MuPack. It turns out compile is a bad name for what this command is
doing, because it's so much more than "compilation"; it's true that
it binds names, etc., much like a JIT compiler does to an intermediate
representation; however, its core purpose is to actually *execute*
the code through evaluation. So we will call it `mu eval` instead.
This change closes marapongo/mu#65.
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.
It also fixes up some bugs uncovered during this:
* Don't claim that a symbol's kind is incorrect in the binder error
message when it wasn't found. Instead, say that it was missing.
* Do not attempt to compile if an error was issued during workspace
resolution and/or loading of the Mufile. This leads to trying to
load an empty path and badness quickly ensues (crash).
* Issue an error if the Mufile wasn't found (this got lost apparently).
* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
since missing names are now caught by our new fancy decoder that
understands required versus optional fields. We still need to guard
against illegal characters in the name, including the empty string "".
* During decoding, reject !src.IsValid elements. This represents the
zero value and should be treated equivalently to a missing field.
* Do not permit empty strings "" as Names or QNames. The old logic
accidentally permitted them because regexp.FindString("") == "", no
matter the regex! (A sketch of the fix appears below.)
* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
allowing us to share this common code across multiple package tests.
* Fix up a few messages that needed tidying or to use Infof vs. Info.
The binder tests -- deleted in this -- are about to come back, however,
I am splitting up the changes, since this represents a passing fixed point.
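To illustrate the FindString pitfall called out above (the regex is
illustrative, not the real Name grammar):

    var nameRegexp = regexp.MustCompile(`[A-Za-z_][A-Za-z0-9_]*`)

    // Buggy: FindString returns "" both for "no match" and for an empty
    // match, so this accepts s == "" no matter the regex.
    func isNameBuggy(s string) bool {
        return nameRegexp.FindString(s) == s
    }

    // Fixed: reject the empty string explicitly.
    func isName(s string) bool {
        return s != "" && nameRegexp.FindString(s) == s
    }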
This change looks up the main module from a package, and that module's
entrypoint, when performing evaluation. In any case, the arguments are
validated and bound to the resulting function's parameters.
The options structure will be shared between multiple passes of
compilation, including evaluation and graph generation. Therefore,
it must not be in the pkg/compile package, else we would create
package cycles. Now that the options structure is barebones --
and, in particular, no more "backend" settings pollute it -- this
refactoring actually works.
Instead of serializing simple token strings into the AST -- in place of things
like type references, module references, export references, etc. -- we now use
1st class AST nodes. This ensures that source context flows with the tokens
as we bind them, etc., and also cleans up a few inconsistencies (like using an
ast.Identifier for NewExpression -- clearly wrong, since the resulting
MuIL is meant to contain fully bound semantic references).
This change rearranges the old way we dealt with URLs. In the old system,
virtually every reference to an element, including types, was fully qualified
with a possible URL-like reference. (The old pkg/tokens/Ref type.) In the
new model, only dependency references are URL-like. All maps and references
within the MuPack/MuIL format are token and name based, using the new
pkg/tokens/Token and pkg/tokens/Name family of related types.
As such, this change renames Ref to PackageURLString, and RefParts to
PackageURL. (The convenient name is given to the thing with "more" structure,
since we prefer to deal with structured types and not strings.) It moves
out of the pkg/tokens package and into pkg/pack, since it is exclusively
there to support package resolution. Similarly, the Version, VersionSpec,
and related types move out of pkg/tokens and into pkg/pack.
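As a rough sketch only -- the field names here are assumed, not the actual
definition -- the structured type looks something like:

    // In pkg/pack:
    type PackageURL struct {
        Proto   string      // e.g. "https://"
        Base    string      // e.g. "github.com/"
        Name    string      // the package name (really a tokens name type)
        Version VersionSpec // the version constraint, if any
    }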
This change cleans up the various binder, package, and workspace logic.
Most of these changes are a natural fallout of this overall restructuring,
although in a few places we remained sloppy about the difference between
Token, Name, and URL. Now the type system supports these distinctions and
forces us to be more methodical about any conversions that take place.
This rearranges the library code:
* sdk/... goes away.
* What used to be sdk/javascript/ is now lib/mu/, an actual MuPackage
that provides the base abstractions for all other MuPackages to use.
* lib/aws is the @mu/aws MuPackage that exposes all AWS resources.
* lib/mux is the @mu/x MuPackage that provides cross-cloud abstractions.
A lot of what used to be in lib/mu goes here. In particular, autoscaler,
func, ..., all the "general purpose" abstractions, really.
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level. In fact, the whole code-generation
phase of the compiler was based on it.
In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.
Therefore, most of this stuff can go. I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision. For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.
I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc). But for the time being, it's a distraction
getting the core model running. I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers. (Although we won't be using
CloudFormation, some of the name generation code might be useful.) So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.
This change implements a significant amount of the top-level package
and module binding logic, including module and class members. It also
begins whittling away at the legacy binder logic (which I expect will
disappear entirely in the next checkin).
The scope abstraction has been rewritten in terms of the new tokens
and symbols layers. Each scope has a symbol table that associates
names with bound symbols, which can be used during lookup. This
accomplishes lexical scoping of the symbol names, by pushing and
popping at the appropriate times. I envision all name resolution to
happen during this single binding pass so that we needn't reconstruct
lexical scoping more than once.
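A minimal sketch of that scope chain, simplified from the real tokens and
symbols types:

    type Symbol interface{ Name() string } // stands in for pkg/compiler/symbols

    type Scope struct {
        parent *Scope
        symtbl map[string]Symbol
    }

    // push enters a new lexical scope; dropping back to parent pops it.
    func push(s *Scope) *Scope {
        return &Scope{parent: s, symtbl: make(map[string]Symbol)}
    }

    // Lookup searches the innermost scope first, then walks outwards.
    func (s *Scope) Lookup(name string) Symbol {
        for cur := s; cur != nil; cur = cur.parent {
            if sym, ok := cur.symtbl[name]; ok {
                return sym
            }
        }
        return nil
    }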
Note that we need to do two passes at the top-level, however. We
must first bind module-level member names to their symbols *before*
we bind any method bodies, otherwise legal intra-module references
might turn up empty-handed during this binding pass.
There is also a type table that associates types with ast.Nodes.
This is how we avoid needing a complete shadow tree of nodes, and/or
avoid needing to mutate the nodes in place. Every node with a type
gets an entry in the type table. For example, variable declarations,
expressions, and so on, each get an entry. This ensures that we can
access type symbols throughout the subsequent passes without needing
to reconstruct scopes or emulate lexical scoping (as described above).
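That side table is essentially a map keyed by node identity, e.g. (Node and
Type stand in for the real ast.Node and symbols.Type interfaces):

    type Node interface{} // stands in for ast.Node
    type Type interface{} // stands in for symbols.Type

    type TypeTable map[Node]Type

    // RegisterType records a node's type without mutating the node itself.
    func (tt TypeTable) RegisterType(node Node, ty Type) {
        tt[node] = ty
    }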
This is a work in progress, so there are a number of important TODOs
in there associated with symbol table management and body binding.
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler. A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.
The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:
* Split up the notion of symbols and tokens, resulting in:
- pkg/symbols for true compiler symbols (bound nodes)
- pkg/tokens for name-based tokens, identifiers, constants
* Several packages move underneath pkg/compiler:
- pkg/ast becomes pkg/compiler/ast
- pkg/errors becomes pkg/compiler/errors
- pkg/symbols becomes pkg/compiler/symbols
* pkg/ast/... becomes pkg/compiler/legacy/ast/...
* pkg/pack/ast becomes pkg/compiler/ast.
* pkg/options goes away, merged back into pkg/compiler.
* All binding functionality moves underneath a dedicated
package, pkg/compiler/binder. The legacy.go file contains
cruft that will eventually go away, while the other files
represent a halfway point between new and old, but are
expected to stay roughly in the current shape.
* All parsing functionality is moved underneath a new
pkg/compiler/metadata namespace, and we adopt new terminology
"metadata reading" since real parsing happens in the MetaMu
compilers. Hence, Parser has become metadata.Reader.
* In general, phases of the compiler no longer share access to
the actual compiler.Compiler object. Instead, shared state is
moved to the core.Context object underneath pkg/compiler/core.
* Dependency resolution during binding has been rewritten to
the new model, including stashing bound package symbols in the
context object, and detecting import cycles.
* Compiler construction does not take a workspace object. Instead,
creation of a workspace is entirely hidden inside of the compiler's
constructor logic.
* There are three Compile* functions on the Compiler interface (sketched below), to
support different styles of invoking compilation: Compile() auto-
detects a Mu package, based on the workspace; CompilePath(string)
loads the target as a Mu package and compiles it, regardless of
the workspace settings; and, CompilePackage(*pack.Package) will
compile a pre-loaded package AST, again regardless of workspace.
* Delete the _fe, _sema, and parsetree phases. They are no longer
relevant and the functionality is largely subsumed by the above.
...and so very much more. I'm surprised I ever got this to compile again!
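The shape of that interface, roughly (return types elided; the real methods
also surface diagnostics):

    type Compiler interface {
        // Compile auto-detects a Mu package using the workspace.
        Compile()
        // CompilePath loads the package at path, regardless of the workspace.
        CompilePath(path string)
        // CompilePackage compiles a pre-loaded package AST.
        CompilePackage(pkg *pack.Package)
    }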
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".
In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output. Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.
An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.
The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object. This is more natural anyway and moves some of the detection
logic "outside" of the Compiler. Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.
Finally, all that templating crap is gone. This alone is cause
for mass celebration!
This adds scaffolding but no real functionality yet, as part of
marapongo/mu#41. I am landing this now because I need to take a
not-so-brief detour to gut and overhaul the core of the existing
compiler (parsing, semantic analysis, binding, code-gen, etc),
including merging the new pkg/pack contents back into the primary
top-level namespaces (like pkg/ast and pkg/encoding).
After that, I can begin driving the compiler to achieve the
desired effects of mu compile, first and foremost, and then plan
and apply later on.
This change makes considerable progress on the `mu describe` command;
the only thing remaining to be implemented now is full IL printing. It
now prints the full package/module structure.
For example, to print the set of exports from our scenarios/point test:
    $ mujs tools/mujs/tests/output/scenarios/point/ | mu describe - -e
    package "scenarios/point" {
        dependencies []
        module "index" {
            class "Point" [public] {
                method "add": (other: any): any
                property "x" [public, readonly]: number
                property "y" [public, readonly]: number
                method ".ctor": (x: number, y: number): any
            }
        }
    }
This is just pretty-printing, but it is coming in handy with debugging.
This change begins to implement some of the AST custom decoding, beneath
the Package's Module map. In particular, we now unmarshal "one level"
beyond this, populating each Module's ModuleMember map. This includes
Classes, Exports, ModuleProperties, and ModuleMethods. The Class AST's
Members have been marked "custom", in addition to Block's Statements,
because they required kind-directed decoding. But Exports and
ModuleProperties can be decoded entirely using the tag-directed decoding
scheme. Up next, custom decoding of ClassMembers. At that point, all
definition-level decoding will be done, leaving MuIL's ASTs.
This adds basic custom decoding for the MuPack metadata section of
the incoming JSON/YAML. Because of the type discriminated union nature
of the incoming payload, we cannot rely on the simple built-in JSON/YAML
unmarshaling behavior. Note that for the metadata section -- what is
in this checkin -- we could have, but the IL AST nodes are problematic.
(To know what kind of structure to create requires inspecting the "kind"
field of the IL.) We will use a reflection-driven walk of the target
structure plus a weakly typed deserialized map[string]interface{}, as
is fairly customary in Go for scenarios like this (though good libraries
seem to be lacking in this area...).
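As a small illustration of the kind-directed part -- the node types here are
made up, and the real decoder drives this via reflection rather than a
hand-written switch:

    import "fmt"

    type Node interface{}

    type StringLiteral struct{ Value string }
    type NumberLiteral struct{ Value float64 }

    func decodeNode(m map[string]interface{}) (Node, error) {
        switch kind := m["kind"]; kind {
        case "StringLiteral":
            v, _ := m["value"].(string)
            return &StringLiteral{Value: v}, nil
        case "NumberLiteral":
            v, _ := m["value"].(float64)
            return &NumberLiteral{Value: v}, nil
        default:
            return nil, fmt.Errorf("unrecognized node kind: %v", kind)
        }
    }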
This command will simply pretty-print the contents of a MuPackage.
My plan is to use it for my own development and debugging purposes,
however, I suspect it will be generally useful (since MuIL can be
quite verbose). Again, just scaffolding, but I'll flesh it out
incrementally as more of the goo in here starts working...
In some cases, we want to specialize template generation based on
the options passed to the compiler. This change flows them through
so that they can be accessed as:
    {{if .Options.SomeSetting}}
        ...
    {{end}}
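A minimal sketch of what flows through, reusing the hypothetical SomeSetting
from the example above:

    import (
        "os"
        "text/template"
    )

    type templateData struct {
        Options struct {
            SomeSetting bool
        }
        // ...plus whatever context the templates already receive.
    }

    func render(src string, data templateData) error {
        t, err := template.New("mufile").Parse(src)
        if err != nil {
            return err
        }
        return t.Execute(os.Stdout, data)
    }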