Commit graph

81 commits

joeduffy
fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy
e0440ad312 Print step op labels 2017-02-24 17:44:54 -08:00
joeduffy
b43c374905 Fix a few more things about updates
* Eliminate some superfluous "\n"s.

* Remove the redundant properties stored on AWS resources.

* Compute array diff lengths properly (+1).

* Display object property changes from null to non-null as
  adds; and from non-null to null as deletes.

* Fix a boolean expression from ||s to &&s.  (Bone-headed).
2017-02-24 17:02:02 -08:00
joeduffy
53cf9f8b60 Tidy up a few things
* Print a pretty message if the plan has nothing to do:

        "info: nothing to do -- resources are up to date"

* Add an extra validation step after reading in a snapshot,
  so that we detect more errors sooner.  For example, I've
  fed in the wrong file several times, and it just chugs
  along as though it were actually a snapshot.

* Skip printing nulls in most plan outputs.  These just
  clutter up the output.
2017-02-24 16:44:46 -08:00
joeduffy
877fa131eb Detect duplicate object names
This change detects duplicate object names (monikers) and issues a nice
error message with source context included.  For example:

    index.ts(260,22): error MU2006: Duplicate objects with the same name:
        prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group

The prior code asserted and failed abruptly, whereas this actually points
us to the offending line of code:

    let group1 = new aws.ec2.SecurityGroup("group", { ... });
    let group2 = new aws.ec2.SecurityGroup("group", { ... });
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
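
For illustration only, the detection boils down to remembering each
moniker as it is assigned and reporting any collision with both source
contexts.  A minimal Go sketch, with assumed shapes for the objects and
the error sink (none of these names are the actual code):

    type object struct {
        Moniker string // e.g. "prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group"
        Loc     string // source context, e.g. "index.ts(260,22)"
    }

    func checkDuplicates(objs []object, errorf func(string, ...interface{})) {
        seen := make(map[string]object)
        for _, o := range objs {
            if prev, ok := seen[o.Moniker]; ok {
                // Report the collision, pointing at both allocation sites.
                errorf("MU2006: Duplicate objects with the same name: %s (%s, %s)",
                    o.Moniker, prev.Loc, o.Loc)
                continue
            }
            seen[o.Moniker] = o
        }
    }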
2017-02-24 16:03:06 -08:00
joeduffy
c120f62964 Redo object monikers
This change overhauls the way we do object monikers.  The old mechanism,
generating monikers using graph paths, was far too brittle and prone to
collisions.  The new approach mixes some amount of "automatic scoping"
plus some "explicit naming."  Although there is some explicitness, this
is arguably a good thing, as the monikers will be relatable back to the
source more readily by developers inspecting the graph and resource state.

Each moniker has four parts:

    <Namespace>::<AllocModule>::<Type>::<Name>

wherein each element is the following:

    <Namespace>     The namespace being deployed into
    <AllocModule>   The module in which the object was allocated
    <Type>          The type of the resource
    <Name>          The assigned name of the resource

The <Namespace> is essentially the deployment target -- so "prod",
"stage", etc -- although it is more general purpose to allow for future
namespacing within a target (e.g., "prod/customer1", etc); for now
this is rudimentary, however, see marapongo/mu#94.

The <AllocModule> is the token for the code that contained the 'new'
that led to this object being created.  In the future, we may wish to
extend this to also track the module under evaluation.  (This is a nice
aspect of monikers; they can become arbitrarily complex, so long as
they are precise, and not prone to false positives/negatives.)

The <Name> warrants more discussion.  The resource provider is consulted
via a new gRPC method, Name, that fetches the name.  How the provider
does this is entirely up to it.  For some resource types, the resource
may have properties that developers must set (e.g., `new Bucket("foo")`);
for other providers, perhaps the resource intrinsically has a property
that explicitly and uniquely qualifies the object (e.g., AWS SecurityGroups,
via `new SecurityGroup({groupName: "my-sg"})`); and finally, it's conceivable
that a provider might auto-generate the name (e.g., such as an AWS Lambda
whose name could simply be a hash of the source code contents).

This should overall produce better results with respect to moniker
collisions, ability to match resources, and the usability of the system.
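
For illustration, assembling a moniker from its four parts is just a
delimited join (the helper name is hypothetical, not the actual code):

    import "strings"

    // newMoniker assembles the four parts using the "::" delimiter:
    //   newMoniker("prod", "ec2instance:index",
    //       "aws:ec2/securityGroup:SecurityGroup", "group")
    //   => "prod::ec2instance:index::aws:ec2/securityGroup:SecurityGroup::group"
    func newMoniker(ns, alloc, typ, name string) string {
        return strings.Join([]string{ns, alloc, typ, name}, "::")
    }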
2017-02-24 14:50:02 -08:00
joeduffy
9dc75da159 Diff and colorize update outputs
This change implements detailed object diffing for purposes of displaying
(and colorizing) updated properties during an update deployment.
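
A minimal sketch of such a property diff, assuming plain property maps
and the display rules from the previous commit (null-to-non-null shows
as an add, non-null-to-null as a delete); the shapes are assumed, not
the actual code:

    import "reflect"

    type diff struct {
        Adds, Deletes map[string]interface{}
        Updates       map[string][2]interface{} // [old, new]
    }

    func diffProps(olds, news map[string]interface{}) diff {
        d := diff{
            Adds:    map[string]interface{}{},
            Deletes: map[string]interface{}{},
            Updates: map[string][2]interface{}{},
        }
        for k, nv := range news {
            ov, had := olds[k]
            switch {
            case (!had || ov == nil) && nv != nil:
                d.Adds[k] = nv // null -> non-null displays as an add
            case had && ov != nil && nv == nil:
                d.Deletes[k] = ov // non-null -> null displays as a delete
            case had && !reflect.DeepEqual(ov, nv):
                d.Updates[k] = [2]interface{}{ov, nv}
            }
        }
        for k, ov := range olds {
            if _, ok := news[k]; !ok && ov != nil {
                d.Deletes[k] = ov // removed keys are deletes too
            }
        }
        return d
    }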
2017-02-23 19:03:22 -08:00
joeduffy
86bfe5961d Implement updates
This change is a first whack at implementing updates.

Creation and deletion plans are pretty straightforward; we just take
a single graph, topologically sort it, and perform the operations in
the right order.  For creation, this is in dependency order (things
that are depended upon must be created before dependents); for deletion,
this is in reverse-dependency order (things that depend on others must
be deleted before dependencies).  These are just special cases of the more
general idea of performing DAG operations in dependency order.

Updates must work in terms of this more general notion.  For example:

* It is an error to delete a resource while another refers to it; thus,
  resources are deleted after deleting dependents, or after updating
  dependent properties that reference the resource to new values.

* It is an error to depend on a resource before it is created;
  thus, resources must be created before dependents are created, and/or
  before updates to existing resource properties that would cause them
  to refer to the new resource.

Of course, all of this is tangled up in a graph of dependencies.  As a
result, we must create a DAG of the dependencies between creates, updates,
and deletes, and then topologically sort this DAG, in order to determine
the proper order of update operations.

To do this, we slightly generalize the existing graph infrastructure,
while also specializing two kinds of graphs; the existing one becomes a
heapstate.ObjectGraph, while this new one is resource.planGraph (internal).
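
The ordering itself is standard topological sorting; here is a sketch
under assumed shapes (string node IDs plus a depends-on adjacency map),
not the actual planGraph code:

    import "errors"

    func topoSort(nodes []string, dependsOn map[string][]string) ([]string, error) {
        indegree := make(map[string]int)
        dependents := make(map[string][]string)
        for n, deps := range dependsOn {
            for _, d := range deps {
                indegree[n]++
                dependents[d] = append(dependents[d], n)
            }
        }
        var ready, order []string
        for _, n := range nodes {
            if indegree[n] == 0 {
                ready = append(ready, n)
            }
        }
        for len(ready) > 0 {
            n := ready[0]
            ready = ready[1:]
            order = append(order, n) // dependencies come first...
            for _, m := range dependents[n] {
                if indegree[m]--; indegree[m] == 0 {
                    ready = append(ready, m)
                }
            }
        }
        if len(order) != len(nodes) {
            return nil, errors.New("dependency cycle detected")
        }
        return order, nil // ...so reverse this order for deletion plans
    }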
2017-02-23 14:56:23 -08:00
joeduffy
f00b146481 Echo resource provider outputs
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins stdout/stderr streams to it.  In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error.  This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs, however I hope we don't have to deviate too much from this.
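
A sketch of the wiring, assuming the plugin runs as an exec.Cmd and the
diagnostics sink exposes Infof/Errorf-style functions (names assumed):

    import (
        "bufio"
        "os/exec"
    )

    // echoStreams forwards plugin output to diagnostics: stdout lines
    // become informational messages, stderr lines become errors.  The
    // pipes must be created before cmd.Start().
    func echoStreams(cmd *exec.Cmd, infof, errorf func(string, ...interface{})) error {
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return err
        }
        stderr, err := cmd.StderrPipe()
        if err != nil {
            return err
        }
        go func() {
            for s := bufio.NewScanner(stdout); s.Scan(); {
                infof("%s", s.Text())
            }
        }()
        go func() {
            for s := bufio.NewScanner(stderr); s.Scan(); {
                errorf("%s", s.Text())
            }
        }()
        return nil
    }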
2017-02-22 18:53:36 -08:00
joeduffy
2088eef6e9 Provision security group ingress/egress rules 2017-02-22 18:10:36 -08:00
joeduffy
ae99e957f9 Fix a few messages and assertions 2017-02-22 14:43:08 -08:00
joeduffy
9c2013baf0 Implement resource snapshot deserialization 2017-02-22 14:32:03 -08:00
joeduffy
8d71771391 Repivot plan/apply commands; prepare for updates
This change repivots the plan/apply commands slightly.  This is largely
in preparation for performing deletes and updates of existing environments.

The old way was slightly confusing and made things appear more "magical"
than they actually are.  Namely, different things are needed for different
kinds of deployment operations, and trying to present them each underneath
a single pair of CLI commands just leads to weird modality and options.

The new way is to offer three commands: create, update, and delete.  Each
does what it says on the tin: create provisions a new environment, update
makes resource updates to an existing one, and delete tears down an existing
one entirely.  The arguments are what make this interesting: create demands
a MuPackage to evaluate (producing the new desired state snapshot), update
takes *both* an existing snapshot file plus a MuPackage to evaluate (producing
the new desired state snapshot to diff against the existing one), and delete
merely takes an existing snapshot file and no MuPackage, since all it must
do is tear down an existing known environment.

Replacing the plan functionality is the --dry-run (-n) flag that may be
passed to any of the above commands.  This will print out the plan without
actually performing any operations.

All commands produce serializable resource files in the MuGL file format,
and attempt to do smart things with respect to backups, etc., to support the
intended "Git-oriented" workflow of the pure CLI dev experience.
2017-02-22 11:21:26 -08:00
joeduffy
26cac1af3a Move colors into a central location
Per Eric's suggestion, this moves the colors we use into a central
location so that it'll be easier someday down the road to reconfigure
and/or disable them, etc.  This does not include a --no-colors option
although we should really include this soon before it gets too hairy.
2017-02-21 18:49:51 -08:00
joeduffy
d4911ad6f6 Implement snapshot MuGL
This change adds support for serializing snapshots in MuGL, per the
design document in docs/design/mugl.md.  At the moment, it is only
exposed from the `mu plan` command, which allows you to specify an
output location using `mu plan --output=file.json` (or `-o=file.json`
for short).  This serializes the snapshot with monikers, resources,
and so on.  Deserialization is not yet supported; that comes next.
2017-02-21 18:31:43 -08:00
joeduffy
85ba692832 Print the dependency key and value during describe 2017-02-21 12:30:31 -08:00
joeduffy
dcb6c958b3 Add a summary to the apply command 2017-02-20 18:00:36 -08:00
joeduffy
0efb8bdd69 Fix a few things
* Specify MinCount/MaxCount when creating an EC2 instance.  These
  are required properties on the RunInstances API.

* Only attempt to unmarshal egressArray/ingressArray when non-nil.

* Remember the context object on the instanceProvider.

* Move the moniker and object maps into the shared context object.

* Marshal object monikers as the resource IDs to which they refer,
  since monikers are useless on "the other side" of the RPC boundary.
  This ensures that, for example, the AWS provider gets IDs it can use.

* Add some paranoia assertions.
2017-02-20 13:55:09 -08:00
joeduffy
81158d0fc2 Make property logic nil-sensitive
...and add some handy plugin-oriented logging.
2017-02-20 13:27:31 -08:00
joeduffy
276b6c253d Implement a basic AWS resource provider
This commit includes a basic AWS resource provider.  Mostly it is just
scaffolding, however, it also includes prototype implementations for EC2
instance and security group resource creation operations.
2017-02-20 11:18:47 -08:00
joeduffy
fdb787f0c9 Add summary support, and colorization 2017-02-19 12:17:45 -08:00
joeduffy
3618837092 Add plan apply progress reporting 2017-02-19 11:58:04 -08:00
joeduffy
bfe659017f Implement the mu apply command
This change implements `mu apply`, by driving compilation, evaluation,
planning, and then walking the plan and evaluating it.  This is the bulk
of marapongo/mu#21, except that there's a ton of testing/hardening to
perform, in addition to things like progress reporting.
2017-02-19 11:41:05 -08:00
joeduffy
09c01dd942 Implement resource provider plugins
This change adds basic support for discovering, loading, binding to,
and invoking RPC methods on, resource provider plugins.

In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them.  It will be
a policy decision in server scenarios how much state to share and
between whom.  This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.

Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).

If we find a plugin, we will load it as a child process.

The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that out as the first line
to STDOUT (terminated by a "\n").  This is the only STDERR/STDOUT
that Mu cares about; from there, the plugin is free to write all it
pleases (e.g., for logging, debugging purposes, etc).

Afterwards, we then bind our gRPC connection to that port, and create
a typed resource provider client.  The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls.  For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
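
A sketch of the handshake under the stated protocol (the helper name
and error handling are illustrative, not the actual code):

    import (
        "bufio"
        "os/exec"
        "strings"

        "google.golang.org/grpc"
    )

    func loadPlugin(path string) (*grpc.ClientConn, error) {
        cmd := exec.Command(path)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return nil, err
        }
        if err := cmd.Start(); err != nil {
            return nil, err
        }
        // The plugin picks its own port and writes it as the first line
        // to stdout; everything after that is free-form logging.
        port, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            return nil, err
        }
        // Bind a gRPC connection to that port; the typed resource
        // provider client then wraps this connection.
        return grpc.Dial("127.0.0.1:"+strings.TrimSpace(port), grpc.WithInsecure())
    }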
2017-02-19 11:08:06 -08:00
joeduffy
9621aa7201 Implement deletion plans
This change adds a flag to `plan` so that we can create deletion plans:

    $ mu plan --delete

This will have an equivalent in the `apply` command, achieving the ability
to delete entire sets of resources altogether (see marapongo/mu#58).
2017-02-18 10:33:36 -08:00
joeduffy
6f42e1134b Create object monikers
This change introduces object monikers.  These are unique, serializable
names that refer to resources created during the execution of a MuIL
program.  They are pretty darned ugly at the moment, but at least they
serve their desired purpose.  I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being.  The names right now are
particularly sensitive to simple refactorings.

This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable.  I'd prefer to do that with a bit of
experience under our belts first.
2017-02-18 10:22:04 -08:00
joeduffy
d9ee2429da Begin resource modeling and planning
This change introduces a new package, pkg/resource, that will form
the foundation for actually performing deployment plans and applications.

It contains the following key abstractions:

* resource.Provider is a wrapper around the CRUD operations exposed by
  underlying resource plugins.  It will eventually defer to resource.Plugin,
  which itself defers -- over an RPC interface -- to the actual plugin, one
  per package exposing resources.  The provider will also understand how to
  load, cache, and overall manage the lifetime of each plugin.

* resource.Resource is the actual resource object.  This is created from
  the overall evaluation object graph, but is simplified.  It contains only
  serializable properties, for example.  Inter-resource references are
  translated into serializable monikers as part of creating the resource.

* resource.Moniker is a serializable string that uniquely identifies
  a resource in the Mu system.  This is in contrast to resource IDs, which
  are generated by resource providers and generally opaque to the Mu
  system.  See marapongo/mu#69 for more information about monikers and some
  of their challenges (namely, designing a stable algorithm).

* resource.Snapshot is a "snapshot" taken from a graph of resources.  This
  is a transitive closure of state representing one possible configuration
  of a given environment.  This is what plans are created from.  Eventually,
  two snapshots will be diffable, in order to perform incremental updates.
  One way of thinking about this is that a snapshot of the old world's state
  is advanced, one step at a time, until it reaches a desired snapshot of
  the new world's state.

* resource.Plan is a plan for carrying out desired CRUD operations on a target
  environment.  Each plan consists of zero-to-many Steps, each of which has
  a CRUD operation type, a resource target, and a next step.  This is an
  enumerator because it is possible the plan will evolve -- and introduce new
  steps -- as it is carried out (hence, the Next() method).  At the moment, this
  is linearized; eventually, we want to make this more "graph-like" so that we
  can exploit available parallelism within the dependencies.

There are tons of TODOs remaining.  However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.

This is part of marapongo/mu#38 and marapongo/mu#41.
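
As a sketch (field and method names assumed, not the actual pkg/resource
API), the linearized plan described above looks roughly like:

    type StepOp int

    const (
        OpCreate StepOp = iota
        OpUpdate
        OpDelete
    )

    type Resource struct {
        Moniker    string
        Properties map[string]interface{}
    }

    // Step is an enumerator: Next() may compute further steps lazily as
    // the plan evolves during application.
    type Step interface {
        Op() StepOp
        Resource() *Resource
        Next() Step
    }

    // Walking a plan is then just chasing Next() until it returns nil.
    func walk(first Step, apply func(Step) error) error {
        for s := first; s != nil; s = s.Next() {
            if err := apply(s); err != nil {
                return err
            }
        }
        return nil
    }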
2017-02-17 12:31:48 -08:00
joeduffy
28a13a28cb Slightly prettify the plan command
This just makes it a little easier to see what's going on.
2017-02-16 17:32:13 -08:00
joeduffy
25d626ed50 Implement prototype chaining
This change more accurately implements ECMAScript prototype chains.
This includes using the prototype chain to lookup properties when
necessary, and copying them down upon writes.

This still isn't 100% faithful -- for example, classes and
constructor functions should be represented as real objects with
the prototype link -- so that examples like those found here will
work: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/constructor.
I've updated marapongo/mu#70 with additional details about this.

I'm sure we'll be forced to fix this as we encounter more "dynamic"
JavaScript.  (In fact, it would be interesting to start running the
pre-ES6 output of TypeScript through the compiler as test cases.)

See http://www.ecma-international.org/ecma-262/6.0/#sec-objects
for additional details on the prototype chaining semantics.
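
The core behavior in miniature (a sketch, not the evaluator's actual
object representation): reads walk the prototype chain, while writes
copy down onto the object itself, shadowing the prototype:

    type obj struct {
        props map[string]interface{}
        proto *obj // nil at the end of the chain
    }

    // get walks the prototype chain until the property is found.
    func (o *obj) get(key string) (interface{}, bool) {
        for cur := o; cur != nil; cur = cur.proto {
            if v, ok := cur.props[key]; ok {
                return v, true
            }
        }
        return nil, false
    }

    // set writes onto the object itself, shadowing any prototype copy
    // rather than mutating the shared prototype.
    func (o *obj) set(key string, v interface{}) {
        o.props[key] = v
    }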
2017-02-14 16:12:01 -08:00
joeduffy
b47445490b Implement a very rudimentary plan command
This simply topologically sorts the resulting MuGL graph and, in
a super lame way, prints out the resources and their properties.
2017-02-13 14:26:46 -08:00
joeduffy
32960be0fb Use export tables
This change redoes the way module exports are represented.  The old
mechanism -- although laudable for its attempt at consistency -- was
wrong.  For example, consider this case:

    let v = 42;
    export { v };

The old code would silently add *two* members, both with the name "v",
one of which would be dropped since the entries in the map collided.

It would be easy enough just to detect collisions, and update the
above to mark "v" as public, when the export was encountered.  That
doesn't work either, as the following example demonstrates:

    let v = 42;
    export { v as w };
    let x = w; // error!

This demonstrates:

    * Exporting "v" with a different name, "w" to consumers of the
      module.  In particular, it should not be possible for module
      consumers to access the member through the name "v".

    * An inability to access the exported name "w" from within the
      module itself.  This is solely for external consumption.

Because of this, we will use an export table approach.  The exports
live alongside the members, and we are smart about when to consult
the export table, versus the member table, during name binding.
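
A sketch of the two-table lookup (shapes assumed, not the actual binder
code): internal binding consults the member table only, while external
consumers go through the export table, which maps the exported name back
to the internal member:

    type symbol struct{ name string }

    type module struct {
        members map[string]*symbol // everything declared in the module
        exports map[string]string  // exported name -> internal member name
    }

    func (m *module) lookup(name string, fromInside bool) *symbol {
        if fromInside {
            // `export { v as w }` does not make "w" visible internally.
            return m.members[name]
        }
        if target, ok := m.exports[name]; ok {
            // Externally, only exported names resolve -- and "v" stays
            // hidden once exported as "w".
            return m.members[target]
        }
        return nil
    }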
2017-02-13 09:56:25 -08:00
joeduffy
79f8b1bef7 Implement a DOT graph converter
This change adds a --dot option to the eval command, which will simply
output the MuGL graph using the DOT language.  This allows you to use
tools like Graphviz to inspect the resulting graph, including using the
`dot` command to generate images (like PNGs and whatnot).

For example, the simple MuGL program:

    class C extends mu.Resource {...}
    class B extends mu.Resource {...}
    class A extends mu.Resource {
        private b: B;
        private c: C;
        constructor() {
            this.b = new B();
            this.c = new C();
        }
    }
    let a = new A();

Results in the following DOT file, from `mu eval --dot`:

    strict digraph {
        Resource0 [label="A"];
        Resource0 -> {Resource1 Resource2}
        Resource1 [label="B"];
        Resource2 [label="C"];
    }

Eventually the auto-generated ResourceN identifiers will go away in
favor of using true object monikers (marapongo/mu#76).
2017-02-09 15:56:15 -08:00
joeduffy
15ba2949dc Add a mu verify command
This adds a `mu verify` command that simply runs the verification
pass against a MuPackage and its MuIL.  This is handy for compiler
authors to verify that the right stuff is getting emitted.
2017-02-03 19:27:37 -08:00
joeduffy
9adcc8cff9 Print the export token during describe 2017-02-02 15:03:51 -08:00
joeduffy
50f5ef7fb4 Print import tokens, not their AST nodes 2017-02-02 12:34:47 -08:00
joeduffy
054d928170 Add a very basic graph pretty printer
This is pretty worthless, but will help me debug some issues locally.
Eventually we want MuGL to be fully serializable, including the option
to emit DOT files.
2017-02-01 09:07:30 -08:00
joeduffy
8055108760 Rename mu compile to mu eval
During demos of the CLI, it began to get confusing to describe how
`mu compile` was different from, say, compiling MuJS source into the
MuPack.  It turns out compile is a bad name for what this command is
doing, because it's so much more than "compilation"; it's true that
it binds names, etc., much like a JIT compiler does to an intermediate
representation; however, its core purpose is to actually *execute*
the code through evaluation.  So we will call it `mu eval` instead.

This change closes marapongo/mu#65.
2017-01-28 15:01:36 -08:00
joeduffy
289a0d405d Revive some compiler tests
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.

It also fixes up some bugs uncovered during this:

* Don't claim that a symbol's kind is incorrect in the binder error
  message when it wasn't found.  Instead, say that it was missing.

* Do not attempt to compile if an error was issued during workspace
  resolution and/or loading of the Mufile.  This leads to trying to
  load an empty path and badness quickly ensues (crash).

* Issue an error if the Mufile wasn't found (this got lost apparently).

* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
  since missing names are now caught by our new fancy decoder that
  understands required versus optional fields.  We still need to guard
  against illegal characters in the name, including the empty string "".

* During decoding, reject !src.IsValid elements.  This represents the
  zero value and should be treated equivalently to a missing field.

* Do not permit empty strings "" as Names or QNames.  The old logic
  accidentally permitted them because regexp.FindString("") == "", no
  matter the regex!

* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
  allowing us to share this common code across multiple package tests.

* Fix up a few messages that needed tidying or to use Infof vs. Info.

The binder tests -- deleted in this -- are about to come back, however,
I am splitting up the changes, since this represents a passing fixed point.
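
The QName pitfall in miniature (the pattern below is illustrative):
FindString returns the empty match for "", so validation needs an
anchored pattern plus an explicit non-empty check:

    import "regexp"

    var nameRegexp = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

    // isLegalName rejects "" outright; an unanchored FindString would
    // happily return "" for any pattern.
    func isLegalName(s string) bool {
        return s != "" && nameRegexp.MatchString(s)
    }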
2017-01-26 15:30:08 -08:00
joeduffy
fb41c570b0 Only CompilePackage if pkg!=nil
If there's a problem loading the package metadata, we shouldn't attempt
to compile it afterwards.  Instead, just fall through, and exit cleanly.
2017-01-26 08:51:22 -08:00
joeduffy
281805311c Perform package and module evaluation
This change looks up the main module from a package, and that module's
entrypoint, when performing evaluation.  In any case, the arguments are
validated and bound to the resulting function's parameters.
2017-01-25 18:38:53 -08:00
joeduffy
25efb734da Move compiler options to pkg/compiler/core
The options structure will be shared between multiple passes of
compilation, including evaluation and graph generation.  Therefore,
it must not be in the pkg/compile package, else we would create
package cycles.  Now that the options structure is barebones --
and, in particular, no more "backend" settings pollute it -- this
refactoring actually works.
2017-01-23 08:10:49 -08:00
joeduffy
bd231ff624 Implement fmt.Stringer on token types 2017-01-21 12:25:59 -08:00
joeduffy
4dc082a2df Use 1st class token AST nodes
Instead of serializing simple token strings into the AST -- in place of things
like type references, module references, export references, etc. -- we now use
1st class AST nodes.  This ensures that source context flows with the tokens
as we bind them, etc., and also cleans up a few inconsistencies (like using an
ast.Identifier for NewExpression -- clearly wrong since the resulting
MuIL is meant to contain fully bound semantic references).
2017-01-21 11:48:58 -08:00
joeduffy
2c0266c9e4 Clean up package URL logic
This change rearranges the old way we dealt with URLs.  In the old system,
virtually every reference to an element, including types, was fully qualified
with a possible URL-like reference.  (The old pkg/tokens/Ref type.)  In the
new model, only dependency references are URL-like.  All maps and references
within the MuPack/MuIL format are token and name based, using the new
pkg/tokens/Token and pkg/tokens/Name family of related types.

As such, this change renames Ref to PackageURLString, and RefParts to
PackageURL.  (The convenient name is given to the thing with "more" structure,
since we prefer to deal with structured types and not strings.)  It moves
out of the pkg/tokens package and into pkg/pack, since it is exclusively
there to support package resolution.  Similarly, the Version, VersionSpec,
and related types move out of pkg/tokens and into pkg/pack.

This change cleans up the various binder, package, and workspace logic.
Most of these changes are a natural fallout of this overall restructuring,
although in a few places we remained sloppy about the difference between
Token, Name, and URL.  Now the type system supports these distinctions and
forces us to be more methodical about any conversions that take place.
2017-01-20 11:46:36 -08:00
joeduffy
bf33605195 Rearrange the library code
This rearranges the library code:

* sdk/... goes away.

* What used to be sdk/javascript/ is now lib/mu/, an actual MuPackage
  that provides the base abstractions for all other MuPackages to use.

* lib/aws is the @mu/aws MuPackage that exposes all AWS resources.

* lib/mux is the @mu/x MuPackage that provides cross-cloud abstractions.
  A lot of what used to be in lib/mu goes here.  In particular, autoscaler,
  func, ..., all the "general purpose" abstractions, really.
2017-01-20 10:30:43 -08:00
joeduffy
729af81e44 Move all cloud switching to mu/x MuPackage
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level.  In fact, the whole code-generation
phase of the compiler was based on it.

In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.

Therefore, most of this stuff can go.  I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision.  For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.

I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc).  But for the time being, it's a distraction
getting the core model running.  I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers.  (Although we won't be using
CloudFormation, some of the name generation code might be useful.)  So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.
2017-01-20 09:46:59 -08:00
joeduffy
35fddddb78 Bind packages and modules
This change implements a significant amount of the top-level package
and module binding logic, including module and class members.  It also
begins whittling away at the legacy binder logic (which I expect will
disappear entirely in the next checkin).

The scope abstraction has been rewritten in terms of the new tokens
and symbols layers.  Each scope has a symbol table that associates
names with bound symbols, which can be used during lookup.  This
accomplishes lexical scoping of the symbol names, by pushing and
popping at the appropriate times.  I envision all name resolution to
happen during this single binding pass so that we needn't reconstruct
lexical scoping more than once.

Note that we need to do two passes at the top-level, however.  We
must first bind module-level member names to their symbols *before*
we bind any method bodies, otherwise legal intra-module references
might turn up empty-handed during this binding pass.

There is also a type table that associates types with ast.Nodes.
This is how we avoid needing a complete shadow tree of nodes, and/or
avoid needing to mutate the nodes in place.  Every node with a type
gets an entry in the type table.  For example, variable declarations,
expressions, and so on, each get an entry.  This ensures that we can
access type symbols throughout the subsequent passes without needing
to reconstruct scopes or emulating lexical scoping (as described above).

This is a work in progress, so there are a number of important TODOs
in there associated with symbol table management and body binding.
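
As a sketch of the two structures described above (shapes assumed, not
the actual binder code):

    // Lexical scoping: each scope chains to its parent; push/pop happens
    // at the appropriate points during the single binding pass.
    type symbol struct{ name string }

    type scope struct {
        parent  *scope
        symbols map[string]*symbol
    }

    func (s *scope) push() *scope {
        return &scope{parent: s, symbols: map[string]*symbol{}}
    }

    func (s *scope) lookup(name string) *symbol {
        for cur := s; cur != nil; cur = cur.parent {
            if sym, ok := cur.symbols[name]; ok {
                return sym
            }
        }
        return nil
    }

    // The type table maps AST nodes to types without mutating the nodes
    // or building a shadow tree.
    type node interface{} // stands in for ast.Node
    type typeTable map[node]*symbol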
2017-01-19 13:37:47 -08:00
joeduffy
4874b2f7c6 Actually perform compilations from mu compile 2017-01-18 15:52:26 -08:00
joeduffy
25632886c8 Begin overhauling semantic phases
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler.  A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.

The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:

    * Split up the notion of symbols and tokens, resulting in:

        - pkg/symbols for true compiler symbols (bound nodes)
        - pkg/tokens for name-based tokens, identifiers, constants

    * Several packages move underneath pkg/compiler:

        - pkg/ast becomes pkg/compiler/ast
        - pkg/errors becomes pkg/compiler/errors
        - pkg/symbols becomes pkg/compiler/symbols

    * pkg/ast/... becomes pkg/compiler/legacy/ast/...

    * pkg/pack/ast becomes pkg/compiler/ast.

    * pkg/options goes away, merged back into pkg/compiler.

    * All binding functionality moves underneath a dedicated
      package, pkg/compiler/binder.  The legacy.go file contains
      cruft that will eventually go away, while the other files
      represent a halfway point between new and old, but are
      expected to stay roughly in the current shape.

    * All parsing functionality is moved underneath a new
      pkg/compiler/metadata namespace, and we adopt new terminology
      "metadata reading" since real parsing happens in the MetaMu
      compilers.  Hence, Parser has become metadata.Reader.

    * In general phases of the compiler no longer share access to
      the actual compiler.Compiler object.  Instead, shared state is
      moved to the core.Context object underneath pkg/compiler/core.

    * Dependency resolution during binding has been rewritten to
      the new model, including stashing bound package symbols in the
      context object, and detecting import cycles.

    * Compiler construction does not take a workspace object.  Instead,
      creation of a workspace is entirely hidden inside of the compiler's
      constructor logic.

    * There are three Compile* functions on the Compiler interface, to
      support different styles of invoking compilation: Compile() auto-
      detects a Mu package, based on the workspace; CompilePath(string)
      loads the target as a Mu package and compiles it, regardless of
      the workspace settings; and, CompilePackage(*pack.Package) will
      compile a pre-loaded package AST, again regardless of workspace.

    * Delete the _fe, _sema, and parsetree phases.  They are no longer
      relevant and the functionality is largely subsumed by the above.

...and so very much more.  I'm surprised I ever got this to compile again!
2017-01-18 12:18:37 -08:00
joeduffy
01658d04bb Begin merging MuPackage/MuIL into the compiler
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".

In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output.  Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.

An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.

The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object.  This is more natural anyway and moves some of the detection
logic "outside" of the Compiler.  Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.

Finally, all that templating crap is gone.  This alone is cause
for mass celebration!
2017-01-17 17:04:15 -08:00