This change adds a few more compiler tests and rearranges some bits and pieces
that came up while doing so. For example, we now issue warnings for incorrect
casing and/or extensions of the Mufile (and test these conditions). As part of
doing that, it became clear the layering between the mu/compiler and mu/workspace
packages wasn't quite right, so some logic got moved around; the separation of
concerns between mu/workspace and mu/schema has likewise been fixed (workspace
just understands Mufile-related things, while schema understands how to unmarshal
the specific supported extensions).
This change adds a compiler test that just checks the basic "Mufile is missing"
error checking. The test itself is mostly uninteresting; what's more interesting
is the addition of some basic helper functionality that can be used for future
compiler tests, like capturing of compiler diagnostics for comparisons.
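As a sketch of what capturing diagnostics for comparison might look like (the identifiers `Sink`, `Diag`, and `Errorf` here are hypothetical; the real mu/pkg/diag interface differs), a barebones sink can simply record each error so a test can assert on it afterwards:

```go
package main

import "fmt"

// Diag is a hypothetical recorded diagnostic: an error ID plus its message.
type Diag struct {
	ID      int
	Message string
}

// Sink is a hypothetical capturing sink: instead of printing, it appends
// each diagnostic to a slice that tests can inspect and compare.
type Sink struct {
	Errors []Diag
}

// Errorf records a formatted error diagnostic.
func (s *Sink) Errorf(id int, format string, args ...interface{}) {
	s.Errors = append(s.Errors, Diag{ID: id, Message: fmt.Sprintf(format, args...)})
}

func main() {
	var s Sink
	s.Errorf(1001, "Mufile is missing in %s", ".")
	fmt.Println(len(s.Errors), s.Errors[0].Message)
}
```

A test would then compare `s.Errors` against the expected diagnostics rather than scraping stderr.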
Eric rightly pointed out in a CR that Semver is...different. Since it
is an abbreviation of the lengthier SemanticVersion name, SemVer seems
more appropriate. Changed.
This change recognizes .yml in addition to the official .yaml extension,
since .yml is actually very commonly used. In addition, while in here, I've
centralized more of the extensions logic so that it's more "data-driven"
and easier to manage down the road (one place to change rather than two).
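The "data-driven" shape is roughly a single table mapping each recognized extension to its format, so adding a new one touches exactly one place (a hypothetical sketch; the actual names in mu/pkg/workspace may differ):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// mufileExts is a single table of recognized Mufile extensions. Adding a
// new serialization format means adding one entry here, nothing else.
var mufileExts = map[string]string{
	".json": "JSON",
	".yaml": "YAML",
	".yml":  "YAML", // common shorthand, recognized alongside the official .yaml
}

// formatOf returns the serialization format for a filename, if recognized.
func formatOf(name string) (string, bool) {
	kind, ok := mufileExts[filepath.Ext(name)]
	return kind, ok
}

func main() {
	for _, f := range []string{"Mu.yaml", "Mu.yml", "Mu.json", "Mu.txt"} {
		if kind, ok := formatOf(f); ok {
			fmt.Printf("%s: %s\n", f, kind)
		} else {
			fmt.Printf("%s: unrecognized\n", f)
		}
	}
}
```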
Error messages could get quite lengthy as the code was written previously,
because we always used the complete absolute path for the file in question.
This change "prettifies" this to be relative to whatever contextual path
the user has chosen during compilation. This shortens messages considerably.
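The prettifying boils down to something like the following (a hypothetical sketch using the standard library's `filepath.Rel`; the real helper's name and placement may differ): make the path relative to the compilation context directory, and fall back to the absolute path when the relative form isn't actually shorter.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// prettyPath shortens a diagnostic's file path by making it relative to
// the contextual directory chosen during compilation. If no shorter
// relative form exists, the absolute path is kept.
func prettyPath(ctxDir, abs string) string {
	rel, err := filepath.Rel(ctxDir, abs)
	if err != nil || len(rel) >= len(abs) {
		return abs
	}
	return rel
}

func main() {
	fmt.Println(prettyPath("/home/me/proj", "/home/me/proj/svc/Mu.yaml"))
}
```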
This change includes some progress on actual compilation (albeit with several
TODOs remaining before we can actually spit out a useful artifact). There are
also some general cleanups sprinkled throughout. In a nutshell:
* Add a compiler.Context object that will be available during template expansion.
* Introduce a diag.Document abstraction. This is better than passing raw filenames
around, and lets us embellish diagnostics as we go. In particular, we will be
in a better position to provide line/column error information.
* Move IO out of the Parser and into the Compiler, where it can cache and reuse
Documents. This will become important as we start to load up dependencies.
* Rename PosRange to Location. This reads nicer with the new Document terminology.
* Rename the mu/api package to mu/schema. It's likely we will need to introduce a
true AST that is decoupled from the serialization format and contains bound nodes.
As a result, treating the existing types as "schema" is more honest.
* Add in a big section of TODOs at the end of the compiler.Compiler.Build function.
* Rename Meta to Metadata.
* Rename Target's CloudOS and CloudScheduler properties to Cloud
and Scheduler, respectively. Also rename Target's JSON properties
to match (they had drifted); they are now "cloud" and "scheduler".
* Rename Diags() to Diag() on the Compiler and Parser interfaces.
* Rename defaultDiags to defaultSink, to match the interface name.
* Add a few useful logging outputs.
This adds a bunch of general scaffolding and the beginning of a `build` command.
The general engineering scaffolding includes:
* Glide for dependency management.
* A Makefile that runs `go vet` and `golint` during builds.
* Google's Glog library for logging.
* Cobra for command line functionality.
The Mu-specific scaffolding includes some packages:
* mu/pkg/diag: A package for compiler-like diagnostics. It's fairly barebones
at the moment, however we can embellish this over time.
* mu/pkg/errors: A package containing Mu's predefined set of errors.
* mu/pkg/workspace: A package containing workspace-related convenience helpers.
There is also a main entrypoint that simply wires up and invokes the CLI. From
there, the mu/cmd package takes over, with the Cobra-defined CLI commands.
Finally, the mu/pkg/compiler package actually implements the compiler behavior.
Or, it will. For now, it simply parses a JSON or YAML Mufile into the core
mu/pkg/api types, and prints out the result.
This change adds the core Mufile types (Cluster, Stack, and friends), including
mapping them to the relevant JSON names in the serialized form. Notably absent
are all of the Identity-related types, which will come later on.
This adds two hypothetical variants to the Mu demo, "building" new capabilities
up from the basic Express and MongoDB app we start with:
1) The first variant is to leverage the Mu SDK for service discovery and
configuration, eliminating the "loosely typed" approach that is common with
Docker (e.g., having to fetch connection URLs from arguments, environment
variables, etc). This extends the Mu type system "into" the Docker container,
whereas previously it stopped at the boundary.
2) The second variant really takes this to the next level, embellishing the nice
DSL-like approach. In this model, infrastructure is expressed in code.
Instead of manually creating a volume, we will arrange for the mongodb/mongodb
stack to auto-mount a mu/x/fs/volume, as per [the cloud-neutral design doc](/docs/xcloud.md).
This just adds a simple voting app for the demo script. It leverages
a Node.js Express app for the frontend and a MongoDB database for the backend.
It conditionally provisions and attaches to an AWS Elastic Block Store volume
in the event that it's targeting an AWS cluster, and ephemeral storage otherwise.
This change separates the metadata specification from the targets doc,
since they are both meant to be consumable independent of one another
and will evolve at their own paces.
This sketches out some more details in the Metadata format, in addition to
starting to articulate what the AWS IaaS provider looks like. Sadly, more
questions than answers, but it's good to have the placeholders...
Simple developer scenarios can be done solely in terms of Stacks. However, to
support more sophisticated cases -- where multiplexing many Stacks onto a shared
set of resources is preferred (for cost savings, provisioning time savings,
control, etc) -- we need to introduce the concept of a Cluster. Each Cluster has
a fixed combination of IaaS/CaaS provider, decided at provisioning time.
This is primarily scaffolding; however, it lays out a general direction and
table of contents for the metadata specification document (marapongo/mu#1).