Specify more of the Mu project management "flow"

joeduffy 2016-10-11 14:24:33 -07:00
parent 4d83e15685
commit 1eb06f86a6

README.md (106 changed lines)

@@ -52,7 +52,6 @@ A more comprehensive Mu program might look something like this:
mu.on(salesforce.customer.added, function(req, res) {...});
mu.on(marketo.customer.deleted, function(req, res) {...});
Now things have gotten interesting! This example demonstrates a few ways to register a serverless function:
1. **Functions**: The `func` routine registers a function with a name. Although they aren't automatically hooked up to
@@ -69,5 +68,108 @@ Now things have gotten interesting! This example demonstrates a few ways to reg
4. **Triggers**: Lastly, `on` subscribes to a named event -- there are many to choose from! -- and runs the function
with its payload anytime that event occurs. Streams-based events that automatically batch are also available.
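To make the later examples concrete, here is a rough sketch of what a function registered by name might look like; the `func` routine is described above, but the exact call form and response API are assumptions for illustration only:

```javascript
var mu = require("mu"); // assuming the Mu SDK is imported as an ordinary Node.js module

// Register a function under the name "hello"; it isn't hooked up to anything yet,
// but it can be invoked on demand (for example, with `mu run --func hello` below).
mu.func("hello", function(req, res) {
    res.write("Hello, Mu!");
    res.end();
});
```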
## Installation
Mu will manage deploying, wiring up, and running all of these functions. Below we will see how.
## Installing Mu
## Managing Mu Projects
Managing your Mu projects and deployments is easy to do with the `mu` command line.
The simplest case is when a single Git repo contains a Mu package. In that case, simply run:
$ mu init
This registers the project with the Mu Cloud so that, anytime changes to your Git repo are published, an automatic
CI/CD process will provision and test your changes and, provided everything passes, deploy them to your public cloud of choice.
Of course, all of this can be done manually, if you do not wish to use the Mu Cloud for management.
For illustrative purposes, let's break it down.
First, we can initialize a Mu project without attaching to the Mu Cloud:
$ mu init --detached
If you later decide to use the Mu Cloud service, you can log in and then attach your current project:
$ mu login
$ mu attach
The `init` command provisions the metadata for your project in the form of a `mu.yaml` file. In our case, this will
start off minimally, just with some handy package manager-like metadata (like name, description, and language).
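For illustration, a freshly initialized `mu.yaml` might look something like the following; the exact field names here are assumptions rather than a documented schema:

```yaml
# mu.yaml: minimal, package manager-like project metadata (illustrative sketch).
name: my-service
description: A sample Mu package.
language: javascript
```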
It's possible to run Mu functions locally. For example, to run the `"hello"` function from above, just type:
$ mu run --func hello
This executes the function a single time. To pass a payload to it, you may use stdin, a literal, or a filename:
$ mu run --func hello - < payload.json
$ mu run --func hello --in "{ \"some\": \"data\" }"
$ mu run --func hello --in @payload.json
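In the stdin and filename forms above, the file simply holds the JSON payload to deliver; for example, a `payload.json` equivalent to the inline literal would contain:

```json
{ "some": "data" }
```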
// TODO(joe): more examples; e.g., HTTP endpoints, schedules, triggers, etc.
Notice that we are running functions directly. To instead activate a project's routes, run the following command:
$ mu listen
This fires up all routes and awaits the stimuli that run them. This tests out your project end-to-end; hit ^C to stop
awaiting. If you'd like to activate specific routes, simply list them by type, name, or both:
$ mu listen --http # run all HTTP endpoints
$ mu listen --http "/login" # run just the /login HTTP endpoint
$ mu listen --http --schedules # run all HTTP endpoints and schedules
// TODO(joe): more examples.
Finally, if you are using Mu Tests, you can run them locally to validate your changes:
$ mu test
// TODO(joe): more details; run specific subsets of tests, integration with other test frameworks, etc.
All of this is running locally on your machine. Of course, once we are ready to try it out in our production
environment, we need a way of deploying the changes. Mu handles this for us too. Although the Mu Cloud mentioned
earlier does everything in a turnkey style, we can break apart the steps and perform them by hand.
The first step is to build your package's metadata:
$ mu build
This step gathers up all of the metadata necessary to fully map out all of the routes, etc. defined in code.
The next step is to perform a deployment. The specific steps undertaken will differ based on the target. For
example, when deploying to AWS, the steps are governed by the AWS API Gateway and Lambda metadata formats. This step
is "intelligent" in that, by default, it only replaces and updates elements that have changed since last time.
If we are deploying to AWS, for example, we would run the following command:
$ mu deploy --provider aws
Assuming you have your [AWS credentials configured properly](
http://docs.aws.amazon.com/cli/latest/topic/config-vars.html), this step will perform a deployment. As it goes, it will
print out the resources that are created or destroyed, in addition to any relevant endpoints. If you wish to try out
the command without actually modifying your environment, you can use `--dry-run`:
$ mu deploy --provider aws --dry-run
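The AWS credentials mentioned above are the standard shared credentials used by AWS tooling generally, not anything Mu-specific; one common setup is a `~/.aws/credentials` file along these lines (placeholders, not real values):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```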
// TODO(joe): link to the more sophisticated deployment options, e.g. using Terraform.
Mu also supports the notion of multiple environments (production, staging, test, etc). If you are using the Mu
Cloud-managed deployment option, you may attach to any number of branches, each of which will get its own isolated
environment. To do so, simply change branches, and run the `mu attach` command:
$ git checkout -b stage
$ mu attach
If you are performing deployments by hand, you may specify the environment name in the `deploy` command:
$ mu deploy --provider aws --environment stage
// TODO(joe): it seems unwise to assume "production" is the default. Maybe this should be configurable too.
In addition to all of those commands, you can list what's in production (`mu ls`), what is actively running (`mu ps`),
and obtain logs or performance metrics for functions that have run or are running (`mu logs` and `mu metrics`).