* Rename Meta to Metadata.
* Rename Target's CloudOS and CloudScheduler properties to Cloud
and Scheduler, respectively. Also rename Target's JSON properties
to match (they had drifted); they are now "cloud" and "scheduler".
* Rename Diags() to Diag() on the Compiler and Parser interfaces.
* Rename defaultDiags to defaultSink, to match the interface name.
* Add a few useful logging outputs.
This adds a bunch of general scaffolding and the beginning of a `build` command.
The general engineering scaffolding includes:
* Glide for dependency management.
* A Makefile that runs govet and golint during builds.
* Google's Glog library for logging.
* Cobra for command line functionality.
The Mu-specific scaffolding includes some packages:
* mu/pkg/diag: A package for compiler-like diagnostics. It's fairly barebones
  at the moment, but we can embellish it over time.
* mu/pkg/errors: A package containing Mu's predefined set of errors.
* mu/pkg/workspace: A package containing workspace-related convenience helpers.
There is also a main entrypoint that simply wires up and invokes the CLI. From
there, the mu/cmd package takes over with the Cobra-defined CLI commands.
Finally, the mu/pkg/compiler package actually implements the compiler behavior.
Or, it will. For now, it simply parses a JSON or YAML Mufile into the core
mu/pkg/api types, and prints out the result.
This change adds the core Mufile types (Cluster, Stack, and friends), including
mapping them to the relevant JSON names in the serialized form. Notably absent
are all of the Identity-related types, which will come later on.
This adds two hypothetical variants to the Mu demo, "building" new capabilities
up from the basic Express and MongoDB app we start with:
1) The first variant is to leverage the Mu SDK for service discovery and
configuration, eliminating the "loosely typed" approach that is common with
Docker (e.g., having to fetch connection URLs from arguments, environment
variables, etc). This extends the Mu type system "into" the Docker container,
whereas previously it stopped at the boundary.
2) The second variant really takes this to the next level, embellishing the nice
DSL-like approach. In this model, infrastructure is expressed in code.
Instead of manually creating a volume, we will arrange for the mongodb/mongodb
stack to auto-mount a mu/x/fs/volume, as per [the cloud-neutral design doc](/docs/xcloud.md).
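The contrast in the first variant can be sketched as follows. This is a hedged illustration (the demo app itself is Node.js, and the typed-handle names here are entirely invented for illustration, not an actual Mu SDK API): instead of fishing an untyped connection URL out of the environment, the service dependency is declared and its connection details become fields.

```go
package main

import (
	"fmt"
	"os"
)

// untypedLookup shows the "loosely typed" Docker-era approach: the app
// fishes a connection string out of the environment and hopes it is set
// and well-formed.
func untypedLookup() string {
	url := os.Getenv("MONGO_URL")
	if url == "" {
		url = "mongodb://localhost:27017" // fragile fallback
	}
	return url
}

// DatabaseService is a hypothetical typed handle of the kind an SDK could
// surface: the dependency's connection details are structured fields
// rather than stringly-typed environment variables. (Invented names.)
type DatabaseService struct {
	Host string
	Port int
}

func (d DatabaseService) ConnectionURL() string {
	return fmt.Sprintf("mongodb://%s:%d", d.Host, d.Port)
}

func main() {
	fmt.Println(untypedLookup())
	db := DatabaseService{Host: "db.internal", Port: 27017}
	fmt.Println(db.ConnectionURL())
}
```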
This just adds a simple voting app for the demo script. It leverages
a Node.js Express app for the frontend and a MongoDB database for the backend.
It conditionally provisions and attaches to an AWS Elastic Block Store volume
in the event that it's targeting an AWS cluster, and ephemeral storage otherwise.
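The conditional storage decision described above can be sketched as a tiny helper; this is a hypothetical illustration of the choice the Mufile encodes, not actual demo code:

```go
package main

import "fmt"

// chooseStorage sketches the decision the voting app makes: attach an
// AWS Elastic Block Store volume when targeting an AWS cluster, and fall
// back to ephemeral storage everywhere else.
func chooseStorage(cloud string) string {
	if cloud == "aws" {
		return "ebs"
	}
	return "ephemeral"
}

func main() {
	fmt.Println(chooseStorage("aws"))   // ebs
	fmt.Println(chooseStorage("local")) // ephemeral
}
```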
This change separates the metadata specification from the targets doc,
since they are both meant to be consumed independently of one another
and will evolve at their own paces.
This sketches out some more details in the Metadata format, in addition to
starting to articulate what the AWS IaaS provider looks like. Sadly, there are
more questions than answers right now, but it's good to have the placeholders...
Simple developer scenarios can be done solely in terms of Stacks. However, to
support more sophisticated cases -- where multiplexing many Stacks onto a shared
set of resources is preferred (for cost savings, provisioning time savings,
control, etc) -- we need to introduce the concept of a Cluster. Each Cluster has
a fixed combination of IaaS/CaaS provider, decided at provisioning time.
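The Stack/Cluster relationship described above can be sketched roughly like this; these are hypothetical, simplified types for illustration only, not the real metadata definitions:

```go
package main

import "fmt"

// Cluster pins a fixed IaaS/CaaS combination, decided at provisioning
// time; many Stacks may be multiplexed onto one Cluster to share its
// underlying resources.
type Cluster struct {
	Name      string
	Cloud     string // IaaS provider, fixed at provisioning time (e.g. "aws")
	Scheduler string // CaaS/scheduler, likewise fixed (e.g. "ecs")
}

// Stack is the unit simple developer scenarios work in terms of; here it
// merely references the shared Cluster it is deployed onto.
type Stack struct {
	Name    string
	Cluster *Cluster
}

func main() {
	// Two stacks multiplexed onto one shared cluster.
	shared := &Cluster{Name: "prod", Cloud: "aws", Scheduler: "ecs"}
	stacks := []Stack{{"voting-app", shared}, {"analytics", shared}}
	for _, s := range stacks {
		fmt.Printf("%s -> %s (%s/%s)\n",
			s.Name, s.Cluster.Name, s.Cluster.Cloud, s.Cluster.Scheduler)
	}
}
```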
This is primarily scaffolding; however, it lays out a general direction
and table of contents for the metadata specification document (marapongo/mu#1).
This doc will describe the overall architecture for the Mu tools, platform,
and related artifacts. It will be a companion to the more detailed drill-down
into the Mufile format and its specific target transformations (marapongo/mu#1).
There are tons of TODOs; however, this is at least a start, with some
amount of conceptual overview plus scaffolding and structure to fill out.
This was really just a brainstorming exercise that helped get us to the final
model we landed on; I'm archiving it for posterity. As I make more progress on
the overall architecture and file format specification, I'll decide whether to
carry it forward or simply leave it as an artifact for future reference.
This snapshots thinking as of maybe two weeks ago. I'm preparing to
do a big overhaul of all of the write-ups, and specifically start on the
Mu.yaml file format spec, and so I want to archive everything that's in flight.
This was an interesting example I found in the Mulesoft GitHub repos. It seemed like
a perfect candidate for Riff/Mu/Lambda, so I sketched out what it might look like.