Jot down a few more thoughts on how this might work

joeduffy 2016-10-14 14:20:11 -07:00
parent 9aed189c5a
commit 36a94fa107


@@ -23,7 +23,7 @@ A rich ecosystem of Trigger events exists so that you can write reactive, server
managing whole Services. This includes the standard ones -- like CRUD operations in your favorite NoSQL database -- in
addition to more novel ones -- like SalesForce customer events -- to deliver a uniform event-driven programming model.
Here is a brief example of a Stack that represents a voting service:
Here is a brief example of a Stack that represents a voting service, authored in Node.js:
var mu = require("mu");
@@ -61,3 +61,50 @@ This simple example demonstrates many facets:
3. Creating a custom stateless service, `VotingService`, that encapsulates cloud resources and exports a `vote` API.
4. Registering a function that runs in response to database updates using "reactive" APIs.
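The body of that example is elided from this hunk (only the `require` line appears above). As a rough sketch of the overall shape such a program might take, something like the following would exhibit the facets listed; every mu API name used here (`mu.Stack`, `mu.Table`, `onUpdate`, `mu.http.register`) is a hypothetical placeholder for illustration, not the actual SDK surface:

```js
// NOTE: rough sketch only; mu.Stack, mu.Table, onUpdate, and mu.http.register
// are hypothetical placeholders, not the real mu SDK shown in the full file.
var mu = require("mu");

// A custom stateless service that encapsulates its own cloud resources.
class VotingService extends mu.Stack {
    constructor() {
        super();
        // Two NoSQL tables: raw votes plus running per-choice counts.
        this.votes = new mu.Table("votes");
        this.voteCounts = new mu.Table("voteCounts");
        // A "reactive" callback that runs whenever a new vote record lands.
        this.votes.onUpdate(vote => {
            this.voteCounts.increment(vote.choice);
        });
    }

    // The public API exported by this service.
    vote(info) {
        this.votes.insert(info);
    }
}

// The top-level Stack: expose the service's API over HTTP at /vote.
var votingService = new VotingService();
mu.http.register("/vote", req => votingService.vote(req.body));
```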
## A Teardown
Although a developer wrote very simple code in the introductory example, there is a fair bit of machinery behind making
it work. In fact, the specific details differ greatly depending on which cloud orchestration fabric you are targeting
(such as AWS native, Google Cloud native, Kubernetes, Docker Swarm, and so on); moreover, multiple backends are
available for some providers (such as AWS CloudFormation or Terraform when targeting AWS native deployments).
To illustrate how the projections work, let's pick a single provider: AWS native using CloudFormation.
The above example contains two Stacks:
1. The top-level Stack.
2. The inner Stack allocated by `VotingService`'s constructor.
Each of these maps to a single "Stack" in AWS's CloudFormation terminology. To generate them, run:
$ mu build ./voting_stack.js
Inside of each Stack, there are a number of resources. Let's first take a look at the top-level Stack:
1. A native AWS API Gateway.
2. A native AWS Lambda, containing the code for `vote` wired up to said API Gateway at `/vote`.
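As a rough illustration only (not actual `mu build` output), the top-level Stack's CloudFormation template might contain resources along these lines; the logical names and most property values are assumptions, and the API Gateway resource/method plumbing plus IAM roles are omitted for brevity:

```yaml
Resources:
  VotingApi:                       # assumed logical name
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: voting-api             # placeholder
  VoteFunction:                    # assumed logical name
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs4.3
      Handler: index.vote          # placeholder handler
      Code:
        S3Bucket: mu-artifacts     # placeholder bucket
        S3Key: vote.zip            # placeholder key
      Role: arn:aws:iam::111111111111:role/placeholder-role
  # AWS::ApiGateway::Resource and AWS::ApiGateway::Method entries wiring
  # POST /vote to VoteFunction would appear here as well.
```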
Next, the inner Stack allocated by `VotingService`:
1. Two native AWS DynamoDB "NoSQL" tables: `votes` and `voteCounts`.
2. A native AWS Lambda, containing the callback wired up to the votes DynamoDB table.
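The inner Stack's template would have a similar shape. Under the same caveats (assumed logical names, with stream and IAM details omitted), a sketch might look like:

```yaml
Resources:
  VotesTable:                      # assumed logical name
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - { AttributeName: id, AttributeType: S }   # placeholder key schema
      KeySchema:
        - { AttributeName: id, KeyType: HASH }
      ProvisionedThroughput: { ReadCapacityUnits: 1, WriteCapacityUnits: 1 }
  VoteCountsTable:                 # assumed logical name
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - { AttributeName: id, AttributeType: S }   # placeholder key schema
      KeySchema:
        - { AttributeName: id, KeyType: HASH }
      ProvisionedThroughput: { ReadCapacityUnits: 1, WriteCapacityUnits: 1 }
  # Plus an AWS::Lambda::Function for the reactive callback, and an
  # AWS::Lambda::EventSourceMapping tying it to VotesTable's stream.
```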
In this particular example, there is little advantage to having two Stacks, since we only ever create one
`VotingService`. It's important to remember, however, that Services can be multi-instanced, so they must remain
distinct. Of course, many AWS resources may be generated in like fashion: S3 buckets, Route53 DNS entries, and so on.
Furthermore, stateful Services will end up requiring EC2 VMs and/or Docker containers.
In addition to generating the metadata, the code is prepared for deployment. This includes some massaging of the code
so that it is in the requisite form (e.g., Docker images, S3 tarballs for AWS Lambdas, and so on).
If you were to change the code, rerunning `mu build` would regenerate the modified Stack. Leveraging the usual
techniques for applying diffs to an existing environment allows incremental changes to be made, rather than needing to
destroy and redeploy the entire cluster. Blue-green and staged deployments, as well as high availability, are all supported.
For simple scenarios, developers may not care what goes on behind the scenes. In those cases, just writing code like
the above and running the CLI is perfect. For complex scenarios, on the other hand -- particularly in multi-tenant
environments, hybrid or on-premise clouds, and/or when IT organizations want more control over things -- the contents of
this section become more important. In fact, organizations may wish to manage the cloud deployment artifacts more
closely, possibly even editing them by hand and/or checking them into source control. Moreover, it's even possible to
author these definitions by hand and map them to the program using a `mu.yaml` file that sits in the middle.