Commit graph

259 commits

Author SHA1 Message Date
joeduffy bfe659017f Implement the mu apply command
This change implements `mu apply`, by driving compilation, evaluation,
planning, and then walking the plan and evaluating it.  This is the bulk
of marapongo/mu#21, except that there's a ton of testing/hardening to
perform, in addition to things like progress reporting.
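
In rough terms, the walk looks something like this (a sketch only -- the
type and function names here are illustrative stand-ins, not the actual
pkg/resource API):

    package deploy

    // Step is a hypothetical stand-in for a single CRUD step in a plan.
    type Step struct {
        Op      string // "create", "update", or "delete"
        Moniker string // the target resource's moniker
        Next    *Step  // the next step, if any
    }

    // applyPlan walks the (linearized) plan and evaluates each step in turn,
    // aborting on the first error.
    func applyPlan(first *Step, do func(*Step) error) error {
        for s := first; s != nil; s = s.Next {
            if err := do(s); err != nil {
                return err
            }
        }
        return nil
    }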
2017-02-19 11:41:05 -08:00
joeduffy 09c01dd942 Implement resource provider plugins
This change adds basic support for discovering, loading, and binding to
resource provider plugins, and for invoking RPC methods on them.

In a nutshell, we add a new context object that will share cached
state such as loaded plugins and connections to them.  It will be
a policy decision in server scenarios how much state to share and
between whom.  This context also controls per-resource context
allocation, which in the future will allow us to perform structured
cancellation and teardown amongst entire groups of requests.

Plugins are loaded based on their name, and can be found in one of
two ways: either simply by having them on your path (with a name of
"mu-ressrv-<pkg>", where "<pkg>" is the resource package name with
any "/"s replaced with "_"s); or by placing them in the standard
library installation location, which need not be on the path for this
to work (since we know precisely where to look).

If we find a plugin, we will load it as a child process.

The protocol for plugins is that they will choose a port on their
own -- to eliminate races that'd be involved should Mu attempt to
pre-pick one for them -- and then write that port out as the first line
to STDOUT (terminated by a "\n").  This is the only STDOUT output
that Mu cares about; from there, the plugin is free to write whatever
it pleases to STDOUT/STDERR (e.g., for logging, debugging, etc.).

Afterwards, we bind our gRPC connection to that port and create
a typed resource provider client.  The CRUD operations that get driven
by plan application are then simple wrappers atop the underlying gRPC
calls.  For now, we interpret all errors as catastrophic; in the near
future, we will probably want to introduce a "structured error"
mechanism in the gRPC interface for "transactional errors"; that is,
errors for which the server was able to recover to a safe checkpoint,
which can be interpreted as ResourceOK rather than ResourceUnknown.
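
In rough Go terms, the handshake looks something like the following (a
sketch only; the function and package names are illustrative, not the
actual plugin-loading code):

    package plugin

    import (
        "bufio"
        "net"
        "os/exec"
        "strings"

        "google.golang.org/grpc"
    )

    // loadPlugin starts a resource provider plugin (e.g. "mu-ressrv-aws_ec2")
    // as a child process, reads the port it chose from the first line of its
    // STDOUT, and dials a gRPC connection to that port.
    func loadPlugin(bin string) (*grpc.ClientConn, error) {
        cmd := exec.Command(bin)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return nil, err
        }
        if err := cmd.Start(); err != nil {
            return nil, err
        }
        // The plugin picks its own port (avoiding races with a pre-picked one)
        // and prints it as the first "\n"-terminated line on STDOUT.
        line, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            return nil, err
        }
        port := strings.TrimSpace(line)
        // Anything else the plugin writes after this point is ignored by Mu.
        return grpc.Dial(net.JoinHostPort("127.0.0.1", port), grpc.WithInsecure())
    }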
2017-02-19 11:08:06 -08:00
joeduffy 9621aa7201 Implement deletion plans
This change adds a flag to `plan` so that we can create deletion plans:

    $ mu plan --delete

This will have an equivalent in the `apply` command, achieving the ability
to delete entire sets of resources altogether (see marapongo/mu#58).
2017-02-18 10:33:36 -08:00
joeduffy 6f42e1134b Create object monikers
This change introduces object monikers.  These are unique, serializable
names that refer to resources created during the execution of a MuIL
program.  They are pretty darned ugly at the moment, but at least they
serve their desired purpose.  I suspect we will eventually want to use
more information (like edge "labels" (variable names and what not)),
but this should suffice for the time being.  The names right now are
particularly sensitive to simple refactorings.

This is enough for marapongo/mu#69 during the current sprint, although
I will keep the work item (in a later sprint) to think more about how
to make these more stable.  I'd prefer to do that with a bit of
experience under our belts first.
2017-02-18 10:22:04 -08:00
joeduffy d9ee2429da Begin resource modeling and planning
This change introduces a new package, pkg/resource, that will form
the foundation for actually planning and applying deployments.

It contains the following key abstractions:

* resource.Provider is a wrapper around the CRUD operations exposed by
  underlying resource plugins.  It will eventually defer to resource.Plugin,
  which itself defers -- over an RPC interface -- to the actual plugin, one
  per package exposing resources.  The provider will also understand how to
  load, cache, and overall manage the lifetime of each plugin.

* resource.Resource is the actual resource object.  This is created from
  the overall evaluation object graph, but is simplified.  It contains only
  serializable properties, for example.  Inter-resource references are
  translated into serializable monikers as part of creating the resource.

* resource.Moniker is a serializable string that uniquely identifies
  a resource in the Mu system.  This is in contrast to resource IDs, which
  are generated by resource providers and generally opaque to the Mu
  system.  See marapongo/mu#69 for more information about monikers and some
  of their challenges (namely, designing a stable algorithm).

* resource.Snapshot is a "snapshot" taken from a graph of resources.  This
  is a transitive closure of state representing one possible configuration
  of a given environment.  This is what plans are created from.  Eventually,
  two snapshots will be diffable, in order to perform incremental updates.
  One way of thinking about this is that a snapshot of the old world's state
  is advanced, one step at a time, until it reaches a desired snapshot of
  the new world's state.

* resource.Plan is a plan for carrying out desired CRUD operations on a target
  environment.  Each plan consists of zero-to-many Steps, each of which has
  a CRUD operation type, a resource target, and a next step.  This is an
  enumerator because it is possible the plan will evolve -- and introduce new
  steps -- as it is carried out (hence, the Next() method).  At the moment, this
  is linearized; eventually, we want to make this more "graph-like" so that we
  can exploit available parallelism within the dependencies.

There are tons of TODOs remaining.  However, the `mu plan` command is functioning
with these new changes -- including colorization FTW -- so I'm landing it now.

This is part of marapongo/mu#38 and marapongo/mu#41.
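
As a rough illustration of how these abstractions hang together, here is a
sketch of their shapes in Go (approximations only -- the real pkg/resource
interfaces differ in detail):

    package resource

    // ID is an opaque identifier generated by a resource provider.
    type ID string

    // Moniker is a serializable name that uniquely identifies a resource.
    type Moniker string

    // Resource carries only serializable properties; inter-resource
    // references are flattened into Monikers.
    type Resource interface {
        Moniker() Moniker
        Properties() map[string]interface{}
    }

    // Provider wraps the CRUD operations exposed by a resource plugin.
    type Provider interface {
        Create(res Resource) (ID, error)
        Read(id ID) (Resource, error)
        Update(id ID, res Resource) error
        Delete(id ID) error
    }

    // Snapshot is a transitive closure of resource state for one environment.
    type Snapshot interface {
        Resources() []Resource
    }

    // Plan is enumerated via Next() because new steps may be introduced as
    // the plan is carried out; today it is linearized.
    type Plan interface {
        Steps() Step
    }

    type StepOp int

    const (
        OpCreate StepOp = iota
        OpUpdate
        OpDelete
    )

    type Step interface {
        Op() StepOp
        Resource() Resource
        Next() Step
    }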
2017-02-17 12:31:48 -08:00
joeduffy 28a13a28cb Slightly prettify the plan command
This just makes it a little easier to see what's going on.
2017-02-16 17:32:13 -08:00
joeduffy 25d626ed50 Implement prototype chaining
This change more accurately implements ECMAScript prototype chains.
This includes using the prototype chain to look up properties when
necessary, and copying them down upon writes.

This still isn't 100% faithful -- for example, classes and
constructor functions should be represented as real objects with
the prototype link -- so that examples like those found here will
work: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/constructor.
I've updated marapongo/mu#70 with additional details about this.

I'm sure we'll be forced to fix this as we encounter more "dynamic"
JavaScript.  (In fact, it would be interesting to start running the
pre-ES6 output of TypeScript through the compiler as test cases.)

See http://www.ecma-international.org/ecma-262/6.0/#sec-objects
for additional details on the prototype chaining semantics.
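
A minimal sketch of the lookup and copy-down-on-write behavior (the real
evaluator objects are richer than this, of course):

    package eval

    type object struct {
        properties map[string]interface{}
        proto      *object // the prototype link ([[Prototype]])
    }

    // get looks on the object itself and then walks the prototype chain
    // until the property is found (or the chain is exhausted).
    func (o *object) get(key string) (interface{}, bool) {
        for cur := o; cur != nil; cur = cur.proto {
            if v, ok := cur.properties[key]; ok {
                return v, true
            }
        }
        return nil, false
    }

    // set copies the property down onto the receiver, so writes never mutate
    // state shared via the prototype chain.
    func (o *object) set(key string, v interface{}) {
        if o.properties == nil {
            o.properties = make(map[string]interface{})
        }
        o.properties[key] = v
    }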
2017-02-14 16:12:01 -08:00
joeduffy b47445490b Implement a very rudimentary plan command
This simply topologically sorts the resulting MuGL graph and, in
a super lame way, prints out the resources and their properties.
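
The sort itself is just a depth-first walk over the dependency edges; a
sketch (omitting cycle detection and the real MuGL vertex types):

    package graph

    // vertex is a minimal stand-in for a MuGL resource vertex.
    type vertex struct {
        name string
        deps []*vertex // edges to the resources this one depends on
    }

    // topsort returns vertices in dependency order (dependencies first),
    // using a simple depth-first post-order walk.
    func topsort(roots []*vertex) []*vertex {
        var sorted []*vertex
        visited := make(map[*vertex]bool)
        var visit func(v *vertex)
        visit = func(v *vertex) {
            if visited[v] {
                return
            }
            visited[v] = true
            for _, dep := range v.deps {
                visit(dep)
            }
            sorted = append(sorted, v)
        }
        for _, root := range roots {
            visit(root)
        }
        return sorted
    }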
2017-02-13 14:26:46 -08:00
joeduffy 32960be0fb Use export tables
This change redoes the way module exports are represented.  The old
mechanism -- although laudable for its attempt at consistency -- was
wrong.  For example, consider this case:

    let v = 42;
    export { v };

The old code would silently add *two* members, both with the name "v",
one of which would be dropped since the entries in the map collided.

It would be easy enough just to detect collisions, and update the
above to mark "v" as public when the export was encountered.  That
doesn't work either, as the following example demonstrates:

    let v = 42;
    export { v as w };
    let x = w; // error!

This demonstrates:

    * Exporting "v" with a different name, "w" to consumers of the
      module.  In particular, it should not be possible for module
      consumers to access the member through the name "v".

    * An inability to access the exported name "w" from within the
      module itself.  This is solely for external consumption.

Because of this, we will use an export table approach.  The exports
live alongside the members, and we are smart about when to consult
the export table, versus the member table, during name binding.
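
Roughly, the binder keeps two tables per module and consults one or the
other depending on where the reference originates; a simplified sketch
(not the actual binder types):

    package binder

    type member struct {
        name string
    }

    type module struct {
        members map[string]*member // all module members, keyed by real names
        exports map[string]string  // exported name -> real member name
    }

    // lookup resolves a name in the module.  External consumers only see the
    // export table (so `v` exported as `w` is visible only as `w`); code
    // inside the module only sees the member table (so `w` is not visible
    // internally).
    func (m *module) lookup(name string, external bool) (*member, bool) {
        if external {
            real, ok := m.exports[name]
            if !ok {
                return nil, false
            }
            mem, ok := m.members[real]
            return mem, ok
        }
        mem, ok := m.members[name]
        return mem, ok
    }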
2017-02-13 09:56:25 -08:00
joeduffy 79f8b1bef7 Implement a DOT graph converter
This change adds a --dot option to the eval command, which will simply
output the MuGL graph using the DOT language.  This allows you to use
tools like Graphviz to inspect the resulting graph, including using the
`dot` command to generate images (like PNGs and whatnot).

For example, the simple MuGL program:

    class C extends mu.Resource {...}
    class B extends mu.Resource {...}
    class A extends mu.Resource {
        private b: B;
        private c: C;
        constructor() {
            this.b = new B();
            this.c = new C();
        }
    }
    let a = new A();

Results in the following DOT file, from `mu eval --dot`:

    strict digraph {
        Resource0 [label="A"];
        Resource0 -> {Resource1 Resource2}
        Resource1 [label="B"];
        Resource2 [label="C"];
    }

Eventually the auto-generated ResourceN identifiers will go away in
favor of using true object monikers (marapongo/mu#76).
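
The converter itself is little more than a walk over the graph emitting DOT
statements; a simplified sketch (the real code also handles escaping and
stable ordering):

    package dotconv

    import (
        "fmt"
        "io"
    )

    type node struct {
        id    string  // e.g. the auto-generated "ResourceN" identifier
        label string  // e.g. the resource's type name
        edges []*node // resources this one depends on
    }

    // writeDOT emits the graph in the DOT language, one edge per line.
    func writeDOT(w io.Writer, nodes []*node) {
        fmt.Fprintln(w, "strict digraph {")
        for _, n := range nodes {
            fmt.Fprintf(w, "    %s [label=\"%s\"];\n", n.id, n.label)
            for _, to := range n.edges {
                fmt.Fprintf(w, "    %s -> %s\n", n.id, to.id)
            }
        }
        fmt.Fprintln(w, "}")
    }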
2017-02-09 15:56:15 -08:00
joeduffy 15ba2949dc Add a mu verify command
This adds a `mu verify` command that simply runs the verification
pass against a MuPackage and its MuIL.  This is handy for compiler
authors to verify that the right stuff is getting emitted.
2017-02-03 19:27:37 -08:00
joeduffy 9adcc8cff9 Print the export token during describe 2017-02-02 15:03:51 -08:00
joeduffy 50f5ef7fb4 Print import tokens, not their AST nodes 2017-02-02 12:34:47 -08:00
joeduffy 054d928170 Add a very basic graph pretty printer
This is pretty worthless, but will help me debug some issues locally.
Eventually we want MuGL to be fully serializable, including the option
to emit DOT files.
2017-02-01 09:07:30 -08:00
joeduffy 8055108760 Rename mu compile to mu eval
During demos of the CLI, it began to get confusing to describe how
`mu compile` was different from, say, compiling MuJS source into a
MuPack.  It turns out compile is a bad name for what this command is
doing, because it's so much more than "compilation"; it's true that
it binds names, etc., much like a JIT compiler does to an intermediate
representation; however, its core purpose is to actually *execute*
the code through evaluation.  So we will call it `mu eval` instead.

This change closes marapongo/mu#65.
2017-01-28 15:01:36 -08:00
joeduffy 289a0d405d Revive some compiler tests
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.

It also fixes up some bugs uncovered during this:

* Don't claim that a symbol's kind is incorrect in the binder error
  message when it wasn't found.  Instead, say that it was missing.

* Do not attempt to compile if an error was issued during workspace
  resolution and/or loading of the Mufile.  This leads to trying to
  load an empty path and badness quickly ensues (crash).

* Issue an error if the Mufile wasn't found (this got lost apparently).

* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
  since missing names are now caught by our new fancy decoder that
  understands required versus optional fields.  We still need to guard
  against illegal characters in the name, including the empty string "".

* During decoding, reject !src.IsValid elements.  This represents the
  zero value and should be treated equivalently to a missing field.

* Do not permit empty strings "" as Names or QNames.  The old logic
  accidentally permitted them because regexp.FindString("") == "", no
  matter the regex!  (See the sketch at the end of this message.)

* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
  allowing us to share this common code across multiple package tests.

* Fix up a few messages that needed tidying or to use Infof vs. Info.

The binder tests -- deleted in this change -- are about to come back;
however, I am splitting up the changes, since this represents a passing
fixed point.
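
For reference, here is a sketch of the empty-Name guard described above
(the pattern is illustrative, not the actual Name grammar):

    package tokens

    import "regexp"

    // nameRegexp is illustrative only; the real Name grammar differs.
    var nameRegexp = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

    // IsName reports whether s is a legal Name.  Note the explicit empty-string
    // check: regexp.FindString("") returns "" for any pattern, so the old
    // FindString-based check accidentally accepted "".  An anchored MatchString
    // (plus the length check) avoids that pitfall.
    func IsName(s string) bool {
        return s != "" && nameRegexp.MatchString(s)
    }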
2017-01-26 15:30:08 -08:00
joeduffy fb41c570b0 Only CompilePackage if pkg!=nil
If there's a problem loading the package metadata, we shouldn't attempt
to compile it afterwards.  Instead, just fall through, and exit cleanly.
2017-01-26 08:51:22 -08:00
joeduffy 281805311c Perform package and module evaluation
This change looks up the main module from a package, and that module's
entrypoint, when performing evaluation.  In any case, the arguments are
validated and bound to the resulting function's parameters.
2017-01-25 18:38:53 -08:00
joeduffy 25efb734da Move compiler options to pkg/compiler/core
The options structure will be shared between multiple passes of
compilation, including evaluation and graph generation.  Therefore,
it must not be in the pkg/compiler package, else we would create
package cycles.  Now that the options structure is barebones --
and, in particular, no more "backend" settings pollute it -- this
refactoring actually works.
2017-01-23 08:10:49 -08:00
joeduffy bd231ff624 Implement fmt.Stringer on token types 2017-01-21 12:25:59 -08:00
joeduffy 4dc082a2df Use 1st class token AST nodes
Instead of serializing simple token strings into the AST -- in place of things
like type references, module references, export references, etc. -- we now use
1st class AST nodes.  This ensures that source context flows with the tokens
as we bind them, etc., and also cleans up a few inconsistencies (like using an
ast.Identifier for NewExpression -- clearly wrong since the resulting
MuIL is meant to contain fully bound semantic references).
2017-01-21 11:48:58 -08:00
joeduffy 2c0266c9e4 Clean up package URL logic
This change rearranges the old way we dealt with URLs.  In the old system,
virtually every reference to an element, including types, was fully qualified
with a possible URL-like reference.  (The old pkg/tokens/Ref type.)  In the
new model, only dependency references are URL-like.  All maps and references
within the MuPack/MuIL format are token and name based, using the new
pkg/tokens/Token and pkg/tokens/Name family of related types.

As such, this change renames Ref to PackageURLString, and RefParts to
PackageURL.  (The convenient name is given to the thing with "more" structure,
since we prefer to deal with structured types and not strings.)  It moves
out of the pkg/tokens package and into pkg/pack, since it is exclusively
there to support package resolution.  Similarly, the Version, VersionSpec,
and related types move out of pkg/tokens and into pkg/pack.

This change cleans up the various binder, package, and workspace logic.
Most of these changes are a natural fallout of this overall restructuring,
although in a few places we remained sloppy about the difference between
Token, Name, and URL.  Now the type system supports these distinctions and
forces us to be more methodical about any conversions that take place.
2017-01-20 11:46:36 -08:00
joeduffy bf33605195 Rearrange the library code
This rearranges the library code:

* sdk/... goes away.

* What used to be sdk/javascript/ is now lib/mu/, an actual MuPackage
  that provides the base abstractions for all other MuPackages to use.

* lib/aws is the @mu/aws MuPackage that exposes all AWS resources.

* lib/mux is the @mu/x MuPackage that provides cross-cloud abstractions.
  A lot of what used to be in lib/mu goes here.  In particular, autoscaler,
  func, ..., all the "general purpose" abstractions, really.
2017-01-20 10:30:43 -08:00
joeduffy 729af81e44 Move all cloud switching to mu/x MuPackage
In the old system, the core runtime/toolset understood that we are targeting
specific cloud providers at a very deep level.  In fact, the whole code-generation
phase of the compiler was based on it.

In the new system, this difference is less of a "special" concern, and more of
a general one of mapping MuIL objects to resource providers, and letting *them*
gather up any configuration they need in a more general purpose way.

Therefore, most of this stuff can go.  I've merged in a small amount of it to
the mu/x MuPackage, since that has to switch on cloud IaaS and CaaS providers in
order to decide what kind of resources to provision.  For example, it has a
mu.x.Cluster stack type that itself provisions a lot of the barebone essential
resources, like a virtual private cloud and its associated networking components.

I suspect *some* knowledge of this will surface again as we implement more
runtime presence (discovery, etc.).  But for the time being, it's a distraction
from getting the core model running.  I've retained some of the old AWS code in the
new pkg/resource/providers/aws package, in case I want to reuse some of it when
implementing our first AWS resource providers.  (Although we won't be using
CloudFormation, some of the name generation code might be useful.)  So, the
ships aren't completely burned to the ground, but they are certainly on 🔥.
2017-01-20 09:46:59 -08:00
joeduffy 35fddddb78 Bind packages and modules
This change implements a significant amount of the top-level package
and module binding logic, including module and class members.  It also
begins whittling away at the legacy binder logic (which I expect will
disappear entirely in the next checkin).

The scope abstraction has been rewritten in terms of the new tokens
and symbols layers.  Each scope has a symbol table that associates
names with bound symbols, which can be used during lookup.  This
accomplishes lexical scoping of the symbol names, by pushing and
popping at the appropriate times.  I envision all name resolution
happening during this single binding pass so that we needn't reconstruct
lexical scoping more than once.

Note that we need to do two passes at the top-level, however.  We
must first bind module-level member names to their symbols *before*
we bind any method bodies, otherwise legal intra-module references
might turn up empty-handed during this binding pass.

There is also a type table that associates types with ast.Nodes.
This is how we avoid needing a complete shadow tree of nodes, and/or
avoid needing to mutate the nodes in place.  Every node with a type
gets an entry in the type table.  For example, variable declarations,
expressions, and so on, each get an entry.  This ensures that we can
access type symbols throughout the subsequent passes without needing
to reconstruct scopes or emulate lexical scoping (as described above).

This is a work in progress, so there are a number of important TODOs
in there associated with symbol table management and body binding.
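
A small sketch of the scope and type-table shapes described above
(placeholder types; the real symbols and AST packages are richer):

    package binder

    type symbol struct {
        name string
    }

    // scope implements lexical scoping: each scope has a parent and a symbol
    // table, and the binder pushes and pops scopes as it walks the tree.
    type scope struct {
        parent *scope
        table  map[string]*symbol
    }

    func push(cur *scope) *scope {
        return &scope{parent: cur, table: make(map[string]*symbol)}
    }

    func pop(cur *scope) *scope {
        return cur.parent
    }

    // lookup searches the innermost scope first, then each enclosing scope.
    func (s *scope) lookup(name string) (*symbol, bool) {
        for cur := s; cur != nil; cur = cur.parent {
            if sym, ok := cur.table[name]; ok {
                return sym, true
            }
        }
        return nil, false
    }

    // node stands in for ast.Node; the type table records each typed node's
    // type so later passes needn't reconstruct lexical scoping to ask for it.
    type node struct{}
    type typeTable map[*node]*symbol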
2017-01-19 13:37:47 -08:00
joeduffy 4874b2f7c6 Actually perform compilations from mu compile 2017-01-18 15:52:26 -08:00
joeduffy 25632886c8 Begin overhauling semantic phases
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler.  A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.

The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:

    * Split up the notion of symbols and tokens, resulting in:

        - pkg/symbols for true compiler symbols (bound nodes)
        - pkg/tokens for name-based tokens, identifiers, constants

    * Several packages move underneath pkg/compiler:

        - pkg/ast becomes pkg/compiler/ast
        - pkg/errors becomes pkg/compiler/errors
        - pkg/symbols becomes pkg/compiler/symbols

    * pkg/ast/... becomes pkg/compiler/legacy/ast/...

    * pkg/pack/ast becomes pkg/compiler/ast.

    * pkg/options goes away, merged back into pkg/compiler.

    * All binding functionality moves underneath a dedicated
      package, pkg/compiler/binder.  The legacy.go file contains
      cruft that will eventually go away, while the other files
      represent a halfway point between new and old, but are
      expected to stay roughly in the current shape.

    * All parsing functionality is moved underneath a new
      pkg/compiler/metadata namespace, and we adopt new terminology
      "metadata reading" since real parsing happens in the MetaMu
      compilers.  Hence, Parser has become metadata.Reader.

    * In general phases of the compiler no longer share access to
      the actual compiler.Compiler object.  Instead, shared state is
      moved to the core.Context object underneath pkg/compiler/core.

    * Dependency resolution during binding has been rewritten to
      the new model, including stashing bound package symbols in the
      context object, and detecting import cycles.

    * Compiler construction does not take a workspace object.  Instead,
      creation of a workspace is entirely hidden inside of the compiler's
      constructor logic.

    * There are three Compile* functions on the Compiler interface, to
      support different styles of invoking compilation: Compile() auto-
      detects a Mu package, based on the workspace; CompilePath(string)
      loads the target as a Mu package and compiles it, regardless of
      the workspace settings; and, CompilePackage(*pack.Package) will
      compile a pre-loaded package AST, again regardless of workspace.

    * Delete the _fe, _sema, and parsetree phases.  They are no longer
      relevant and the functionality is largely subsumed by the above.

...and so very much more.  I'm surprised I ever got this to compile again!
2017-01-18 12:18:37 -08:00
joeduffy 01658d04bb Begin merging MuPackage/MuIL into the compiler
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".

In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output.  Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.

An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.

The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object.  This is more natural anyway and moves some of the detection
logic "outside" of the Compiler.  Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.

Finally, all that templating crap is gone.  This alone is cause
for mass celebration!
2017-01-17 17:04:15 -08:00
joeduffy bc376f8f8d Move pkg/pack/symbols to pkg/symbols 2017-01-17 15:06:53 -08:00
joeduffy 7ea5331f7f Merge pkg/pack/encoding into pkg/encoding 2017-01-17 14:58:45 -08:00
joeduffy 901c1cc6cf Add scaffolding for mu apply, compile, and plan
This adds scaffolding but no real functionality yet, as part of
marapongo/mu#41.  I am landing this now because I need to take a
not-so-brief detour to gut and overhaul the core of the existing
compiler (parsing, semantic analysis, binding, code-gen, etc),
including merging the new pkg/pack contents back into the primary
top-level namespaces (like pkg/ast and pkg/encoding).

After that, I can begin driving the compiler to achieve the
desired effects of mu compile, first and foremost, and then plan
and apply later on.
2017-01-17 14:40:55 -08:00
joeduffy 15b043c0c1 Add a missing range check 2017-01-17 10:00:14 -08:00
joeduffy c576e7cae4 Print the imports in mu describe 2017-01-17 09:55:58 -08:00
joeduffy 2849a3e64b Print descriptions as header comments 2017-01-16 14:45:32 -08:00
joeduffy a2d847f1ef Use stable map enumeration
This change uses stable map enumeration so that output doesn't
change randomly based on hashing.
2017-01-16 12:02:33 -08:00
joeduffy b4a6c94a1b Print modifiers for class methods 2017-01-16 11:51:20 -08:00
joeduffy 2ee3671c36 Progress on the mu describe command
This change makes considerable progress on the `mu describe` command;
the only thing remaining to be implemented now is full IL printing.  It
now prints the full package/module structure.

For example, to print the set of exports from our scenarios/point test:

    $ mujs tools/mujs/tests/output/scenarios/point/ | mu describe - -e
    package "scenarios/point" {
	    dependencies []
	    module "index" {
		    class "Point" [public] {
			    method "add": (other: any): any
			    property "x" [public, readonly]: number
			    property "y" [public, readonly]: number
			    method ".ctor": (x: number, y: number): any
		    }
	    }
    }

This is just pretty-printed output, but it is coming in handy with debugging.
2017-01-16 11:47:21 -08:00
joeduffy 14c040bc7f Implement custom decoding of ModuleMembers
This change begins to implement some of the AST custom decoding, beneath
the Package's Module map.  In particular, we now unmarshal "one level"
beyond this, populating each Module's ModuleMember map.  This includes
Classes, Exports, ModuleProperties, and ModuleMethods.  The Class AST's
Members have been marked "custom", in addition to Block's Statements,
because they required kind-directed decoding.  But Exports and
ModuleProperties can be decoded entirely using the tag-directed decoding
scheme.  Up next, custom decoding of ClassMembers.  At that point, all
definition-level decoding will be done, leaving MuIL's ASTs.
2017-01-15 14:57:42 -08:00
joeduffy f90679a717 Also print module names from describe 2017-01-14 11:22:26 -08:00
joeduffy 120f139812 Decode dependencies metadata 2017-01-14 07:58:21 -08:00
joeduffy d334ea322b Add custom decoding for MuPack metadata
This adds basic custom decoding for the MuPack metadata section of
the incoming JSON/YAML.  Because of the type-discriminated union nature
of the incoming payload, we cannot rely on the simple built-in JSON/YAML
unmarshaling behavior.  Note that for the metadata section -- what is
in this checkin -- we could have relied on it, but the IL AST nodes are
problematic.  (To know what kind of structure to create requires inspecting
the "kind" field of the IL.)  We will use a reflection-driven walk of the
target structure plus a weakly typed deserialized map[string]interface{},
as is fairly customary in Go for scenarios like this (though good libraries
seem to be lacking in this area...).
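
A simplified sketch of the kind-directed decoding, using encoding/json and
placeholder node types rather than the reflection-driven walk and the real
MuIL kinds:

    package encoding

    import (
        "encoding/json"
        "fmt"
    )

    // Class and ModuleProperty are placeholder node shapes for illustration.
    type Class struct {
        Name string `json:"name"`
    }

    type ModuleProperty struct {
        Name string `json:"name"`
    }

    // decodeModuleMember deserializes into a weakly typed map first, inspects
    // the "kind" discriminator, and only then populates the right node type.
    func decodeModuleMember(raw []byte) (interface{}, error) {
        var m map[string]interface{}
        if err := json.Unmarshal(raw, &m); err != nil {
            return nil, err
        }
        switch kind := m["kind"]; kind {
        case "Class":
            var node Class
            return &node, json.Unmarshal(raw, &node)
        case "ModuleProperty":
            var node ModuleProperty
            return &node, json.Unmarshal(raw, &node)
        default:
            return nil, fmt.Errorf("unrecognized member kind '%v'", kind)
        }
    }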
2017-01-14 07:40:13 -08:00
joeduffy cc16e85266 Add the scaffolding for a new mu describe command
This command will simply pretty-print the contents of a MuPackage.
My plan is to use it for my own development and debugging purposes,
however, I suspect it will be generally useful (since MuIL can be
quite verbose).  Again, just scaffolding, but I'll flesh it out
incrementally as more of the goo in here starts working...
2017-01-13 15:00:20 -08:00
joeduffy b408c3ce2a Pass compiler options to template evaluation
In some cases, we want to specialize template generation based on
the options passed to the compiler.  This change flows them through
so that they can be accessed as

        {{if .Options.SomeSetting}}
        ...
        {{end}}
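
On the Go side, this amounts to handing the options through as template
data; a minimal sketch (the Options field and template name here are
hypothetical):

    package templates

    import (
        "os"
        "text/template"
    )

    // Options stands in for the compiler options structure; SomeSetting
    // mirrors the hypothetical setting referenced in the template above.
    type Options struct {
        SomeSetting bool
    }

    // render parses a template and executes it with the options exposed under
    // the .Options key, so templates can branch on compiler settings.
    func render(tmplText string, opts Options) error {
        t, err := template.New("mufile").Parse(tmplText)
        if err != nil {
            return err
        }
        return t.Execute(os.Stdout, map[string]interface{}{"Options": opts})
    }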
2016-12-09 12:42:28 -08:00
joeduffy 38ec8d99ed Add a --skip-codegen option to the build command
This capability already exists (mostly for testing purposes);
expose it at the CLI so that we can easily flip it to true.
2016-12-06 20:49:54 -08:00
joeduffy 4f92a12d30 More flexible argument passing
This change lets you pass a stack argument as

        ... -- --arg "value"

in addition to the existing supported method of saying

        ... -- --arg="value"
2016-12-05 15:59:28 -08:00
joeduffy 80b9e05735 Rename --arch (-a) build switch to --target (-t) 2016-12-01 11:03:48 -08:00
joeduffy cb9c152104 Permit passing stack properties via the CLI
If compiling a stack that accepts properties directly, we need a way to
pass arguments to that stack at the command line.  This change permits this
using the ordinary "--" style delimiter; for example:

        $ mu build -- --name=Foo

This is super basic and doesn't handle all the edge cases, but is sufficient
for testing and prototyping purposes.
2016-11-29 15:27:02 -08:00
joeduffy 925ee92c60 Annotate a bunch of TODOs with work item numbers 2016-11-23 12:30:02 -08:00
joeduffy 5f3af891f7 Support Workspaces
This change adds support for Workspaces, a convenient way of sharing settings
among many Stacks, like default cluster targets, configuration settings, and the
like, which are not meant to be distributed as part of the Stack itself.

The following things are included in this checkin:

* At workspace initialization time, detect and parse the .mu/workspace.yaml
  file.  This is pretty rudimentary right now and contains just the default
  cluster targets.  The results are stored in a new ast.Workspace type.

* Rename "target" to "cluster".  This impacts many things, including ast.Target
  being changed to ast.Cluster, and all related fields, the command line --target
  being changed to --cluster, various internal helper functions, and so on.  This
  helps to reinforce the desired mental model.

* Eliminate the ast.Metadata type.  Instead, the metadata moves directly onto
  the Stack.  This reflects the decision to make Stacks "the thing" that is
  distributed, versioned, and is the granularity of dependency.

* During cluster targeting, add the workspace settings into the probing logic.
  We still search in the same order: CLI > Stack > Workspace.
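
The probing itself boils down to picking the first non-empty setting in
that order; a tiny sketch (names are illustrative):

    package workspace

    // resolveCluster picks the target cluster using the search order
    // CLI > Stack > Workspace, returning false if none was specified.
    func resolveCluster(cli, stack, workspace string) (string, bool) {
        for _, c := range []string{cli, stack, workspace} {
            if c != "" {
                return c, true
            }
        }
        return "", false
    }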
2016-11-22 10:41:07 -08:00
joeduffy 728c22bed1 Remove a useless log message 2016-11-22 09:36:07 -08:00
joeduffy 536065bd57 Add a mu get command
This adds a `mu get` command for downloading Mu Stacks.  It isn't really
implemented yet, but puts a stake in the ground on how we intend this to work.
2016-11-19 16:44:25 -08:00
joeduffy b31c4467ac Move glogging into Mu command startup/teardown
This change moves glogging into the Mu command, so that `mu --help`
actually shows the right thing.  (Sadly, glog hijacks the command help.)
It also adds an option --logtostderr that redirects glog output to the
console.
2016-11-19 16:42:27 -08:00
joeduffy 0b34f256f0 Sketch out more AWS backend code-generator bits and pieces
This change includes a few steps towards AWS backend code-generation:

* Add a BoundDependencies property to ast.Stack to remember the *ast.Stack
  objects bound during Stack binding.

* Make a few CloudFormation properties optional (cfOutput Export/Condition).

* Rename clouds.ArchMap, clouds.ArchNames, schedulers.ArchMap, and
  schedulers.ArchNames to clouds.Values, clouds.Names, schedulers.Values,
  and schedulers.Names, respectively.  This reads much nicer to my eyes.

* Create a new anonymous ast.Target for deployments if no specific target
  was specified; this is to support quick-and-easy "one off" deployments,
  as will be common when doing local development.

* Sketch out more of the AWS Cloud implementation.  We actually map the
  Mu Services into CloudFormation Resources; well, kinda sorta, since we
  don't actually have Service-specific logic in here yet, however all of
  the structure and scaffolding is now here.
2016-11-18 16:46:36 -08:00
joeduffy 2a2c93f8ab Add a Backend interface, and dispatch to it
This change adds a Backend Phase to the compiler, implemented by each of the
cloud/scheduler implementations.  It also reorganizes some of the modules to
ensure we can do everything we need without cycles, including introducing the
mu/pkg/compiler/backends package, under which the clouds/ and schedulers/
sub-packages now reside.  The backends.New(Arch) factory function acts as the
entrypoint into the entire thing so callers can easily create new Backend instances.
2016-11-18 12:40:15 -08:00
joeduffy f8a84c4fa3 Set target and architecture from the command line 2016-11-17 12:29:10 -08:00
joeduffy 8a7fbf019c Add the ability to select a cloud provider
This adds two packages:

        mu/pkg/compiler/clouds
        mu/pkg/compiler/schedulers

It also introduces enums for the cloud targets we expect to support.

It also adds the ability at the command line to specify a provider;
for example:

        $ mu build --target=aws         # AWS native
        $ mu build --target=aws:ecs     # AWS ECS
        $ mu build -t=gcp:kubernetes    # Kube on GCP
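
A target such as "aws:ecs" simply splits into a cloud (IaaS) part and an
optional scheduler (CaaS) part; a sketch of the parsing (the function name
is illustrative):

    package backends

    import "strings"

    // parseTarget splits a --target value such as "aws:ecs" or
    // "gcp:kubernetes" into its cloud and scheduler parts; a bare "aws"
    // leaves the scheduler empty, meaning the cloud's native offering.
    func parseTarget(target string) (cloud, scheduler string) {
        parts := strings.SplitN(target, ":", 2)
        cloud = parts[0]
        if len(parts) > 1 {
            scheduler = parts[1]
        }
        return cloud, scheduler
    }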
2016-11-17 07:00:52 -08:00
joeduffy 109d8c3f4f Add a way to control logging verbosity
This change adds a --verbose=x flag (-v=x shorthand) to the Mu command,
to set the glog logging level to x, for all subcommands.
2016-11-16 17:19:49 -08:00
joeduffy 79f5f312b8 Support .yml Mufile extensions
This change recognizes .yml in addition to the official .yaml extension,
since .yml is actually very commonly used.  In addition, while in here, I've
centralized more of the extensions logic so that it's more "data-driven"
and easier to manage down the road (one place to change rather than two).
2016-11-15 18:26:21 -08:00
joeduffy e75f06bb2b Sketch a mu build command and its scaffolding
This adds a bunch of general scaffolding and the beginning of a `build` command.

The general engineering scaffolding includes:

* Glide for dependency management.
* A Makefile that runs govet and golint during builds.
* Google's Glog library for logging.
* Cobra for command line functionality.

The Mu-specific scaffolding includes some packages:

* mu/pkg/diag: A package for compiler-like diagnostics.  It's fairly barebones
  at the moment; however, we can embellish it over time.
* mu/pkg/errors: A package containing Mu's predefined set of errors.
* mu/pkg/workspace: A package containing workspace-related convenience helpers.

in addition to a main entrypoint that simply wires up and invokes the CLI.  From
there, the mu/cmd package takes over, with the Cobra-defined CLI commands.

Finally, the mu/pkg/compiler package actually implements the compiler behavior.
Or, it will.  For now, it simply parses a JSON or YAML Mufile into the core
mu/pkg/api types, and prints out the result.
2016-11-15 14:30:34 -05:00