Commit graph

226 commits

Author SHA1 Message Date
Pat Gavlin 97e38bddc8 Enable distributed tracing.
These changes add support for injecting client tracing spans into HTTP
requests to the Pulumi API. The server can then rematerialize these span
references and attach its own spans for distributed tracing.
2018-05-09 11:43:09 -07:00
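
A minimal sketch of the client-side injection described above, using the opentracing-go API (the helper name is illustrative):

    import (
        "net/http"

        opentracing "github.com/opentracing/opentracing-go"
    )

    // injectSpan attaches span's context to an outgoing HTTP request so the
    // server can rematerialize it and attach child spans of its own.
    func injectSpan(span opentracing.Span, req *http.Request) error {
        return span.Tracer().Inject(
            span.Context(),
            opentracing.HTTPHeaders,
            opentracing.HTTPHeadersCarrier(req.Header))
    }
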
Pat Gavlin 31e829c4b7 Simplify cmdutil.InitTracing.
If no tracing endpoint was specified, just use the default no-op tracer.
2018-05-07 20:17:19 -07:00
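
A sketch of the simplification, relying on the fact that opentracing-go's global tracer is already a no-op tracer until one is installed:

    // InitTracing mirrors the described behavior: with no endpoint, leave the
    // default no-op tracer in place rather than installing anything.
    func InitTracing(endpoint string) {
        if endpoint == "" {
            return // opentracing's global tracer is already a no-op
        }
        // ... configure and install a real tracer here ...
    }
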
Pat Gavlin 97ace29ab1
Begin tracing Pulumi API calls. (#1330)
These changes enable tracing of Pulumi API calls.

The span with which to associate an API call is passed via a
`context.Context` parameter. This required plumbing a
`context.Context` parameter through a rather large number of APIs,
especially in the backend.

In general, all API calls are associated with a new root span that
exists for essentially the entire lifetime of an invocation of the
Pulumi CLI. There were a few places where the plumbing got a bit hairier
than I was willing to address with these changes; I've used
`context.Background()` in these instances. API calls that receive this
context will create new root spans, but will still be traced.
2018-05-07 18:23:03 -07:00
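
The per-call pattern this enables might look like the following sketch (the operation name and function are illustrative):

    import (
        "context"

        opentracing "github.com/opentracing/opentracing-go"
    )

    // doAPICall associates the call with the span carried in ctx; if ctx has
    // no span (e.g. context.Background()), a new root span is created instead.
    func doAPICall(ctx context.Context) error {
        span, ctx := opentracing.StartSpanFromContext(ctx, "pulumi-api-call")
        defer span.Finish()
        // ... issue the HTTP request using ctx ...
        return nil
    }
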
CyrusNajmabadi 092696948d
Restore streaming of plugin outputs to the progress display. (#1333) 2018-05-07 15:11:52 -07:00
Matt Ellis 7b27f00602 Fix python package versions
Our logic for converting npm style versions to PEP-440 style versions
was not correct in some cases. This change fixes this.

As part of this change we no longer produce a NPM version that would
be just X.Y.Z-dev, instead for development versions we always include
both the timestamp of the commit and the commit hash.

Instead of trying to use a bunch of sed logic to do our conversions,
we now have a small go program that uses a newly added library in
pkg/util. A side effect of this is that we can more easily write tests
to ensure the conversion works as expected.

Fixes #1243
2018-05-07 12:38:08 -07:00
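
A hypothetical sketch of the conversion; the real library lives in pkg/util, and the function name and version shapes here are assumptions:

    import "strings"

    // pep440 maps an npm-style dev version such as
    // "0.12.0-dev.1525207179+g123abc" to a PEP-440-style
    // "0.12.0.dev1525207179".
    func pep440(npm string) string {
        base, pre := npm, ""
        if i := strings.IndexByte(npm, '-'); i >= 0 {
            base, pre = npm[:i], npm[i+1:]
        }
        if i := strings.IndexByte(pre, '+'); i >= 0 {
            pre = pre[:i] // PEP-440 has no direct build-metadata analogue here
        }
        if strings.HasPrefix(pre, "dev.") {
            return base + ".dev" + strings.TrimPrefix(pre, "dev.")
        }
        return base
    }
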
Pat Gavlin ded25b11db
Add support for CPU and heap profiling. (#1318)
These changes add a new flag to the CLI, `--profiling`, that enables
CPU and heap profiling as well as execution tracing of the CLI itself.
The argument to this flag serves as a prefix for the profile outputs;
the CPU and heap profiles and execution trace are written to
`[filename].[pid].{cpu,mem,trace}`, respectively.

These changes also fix an issue with `cmdutil.RunFunc` wherein any error
would prevent the command's post-run hooks from executing.
2018-05-04 11:26:53 -07:00
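
A sketch of what the flag enables, using the standard runtime/pprof and runtime/trace packages (the helper and exact file handling are illustrative):

    import (
        "fmt"
        "os"
        "runtime/pprof"
        "runtime/trace"
    )

    // initProfiling starts a CPU profile and an execution trace, and returns
    // a stop function that also writes the heap profile on the way out.
    func initProfiling(prefix string) (func(), error) {
        pid := os.Getpid()
        cpu, err := os.Create(fmt.Sprintf("%s.%d.cpu", prefix, pid))
        if err != nil {
            return nil, err
        }
        if err := pprof.StartCPUProfile(cpu); err != nil {
            return nil, err
        }
        tr, err := os.Create(fmt.Sprintf("%s.%d.trace", prefix, pid))
        if err != nil {
            return nil, err
        }
        if err := trace.Start(tr); err != nil {
            return nil, err
        }
        return func() {
            pprof.StopCPUProfile()
            trace.Stop()
            cpu.Close()
            tr.Close()
            if mem, err := os.Create(fmt.Sprintf("%s.%d.mem", prefix, pid)); err == nil {
                pprof.WriteHeapProfile(mem) // heap profile is most useful at exit
                mem.Close()
            }
        }, nil
    }
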
Matt Ellis e6488ac17e Only show emojis on macOS
While emojis often work in the console on many newer Linux distros,
follow yarn's lead and only enable them on macOS (see
https://github.com/yarnpkg/yarn/pull/415).
2018-04-26 18:28:07 -07:00
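
The check itself is a one-liner:

    import "runtime"

    // emojiDefault reflects the behavior described above: emoji output is
    // enabled by default only on macOS.
    func emojiDefault() bool {
        return runtime.GOOS == "darwin"
    }
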
Matt Ellis 8f0dff3220 Minor CLI improvements
- Show Emojis on non-Windows platforms, instead of just macOS

- Change help text for `pulumi logs` to clarify the logs are specific
  to a stack, not a project

- Display stack name when showing logs

- Have preamble text show to users for engine operations read a little
  nicer
2018-04-25 16:52:31 -07:00
Matt Ellis cc938a3bc8 Merge remote-tracking branch 'origin/master' into ellismg/identity 2018-04-20 01:56:41 -04:00
Pat Gavlin 4fa69bfd72
Plumb basic cancellation through the engine. (#1231)
These changes plumb basic support for cancellation through the engine.
Two types of cancellation are supported for all engine operations:
- Cancellation, which waits for the operation to drive itself to a safe
  point before the operation returns, and
- Termination, which does not wait for the operation to drive itself
  to a safe point before the operation returns.

When updating local or managed stacks, a single ^C triggers cancellation
of any running operation; a second ^C will trigger termination.

Fixes #513, #1077.
2018-04-19 18:59:14 -07:00
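
A sketch of the two-level ^C handling under stated assumptions; the real engine threads this through its own cancellation types:

    import (
        "context"
        "os"
        "os/signal"
    )

    // watchInterrupts requests graceful cancellation on the first ^C and
    // forceful termination on the second.
    func watchInterrupts(cancel, terminate context.CancelFunc) {
        sigs := make(chan os.Signal, 2)
        signal.Notify(sigs, os.Interrupt)
        go func() {
            <-sigs
            cancel() // first ^C: let the operation reach a safe point
            <-sigs
            terminate() // second ^C: stop waiting and tear down
        }()
    }
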
pat@pulumi.com 87d629aafe Remove the convutil package.
This code is unused.
2018-04-19 12:21:33 -07:00
Matt Ellis bac02f1df1 Remove the need to pulumi init for the local backend
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation for the next
set of changes, which will stop having `pulumi init` be used for cloud
stacks as well.

Previously, `pulumi init` logically did two things:

1. It created the bookkeeping directory for local stacks; this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we believed the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.

2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com

The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.

However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somewhere.

For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.

For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.

This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).

With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
2018-04-18 04:53:49 -07:00
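
A sketch of the settings-file naming scheme described above; the exact file-name format is an assumption:

    import (
        "crypto/sha1"
        "fmt"
        "path/filepath"
    )

    // settingsFileName combines the project name with the SHA-1 of the
    // project file's path so that same-named projects get distinct
    // workspace settings.
    func settingsFileName(projectName, projectPath string) (string, error) {
        abs, err := filepath.Abs(projectPath)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%s-%x.json", projectName, sha1.Sum([]byte(abs))), nil
    }
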
CyrusNajmabadi 8b4f3a43ec
Don't synthesize a new error to display when we've already emitted a diagnostic error. (#1193) 2018-04-13 16:25:24 -07:00
Matt Ellis 50843a98c1 Retry some HTTP operations
We've seen failures in CI where DNS lookups fail which cause our
operations against the service to fail, as well as other sorts of
timeouts.

Add a set of helper methods in a new httputil package that helps us do
retries on these operations, and then update our client library to use
them when we are doing GET requests. We also provide a way for non GET
requests to be retried, and use this when updating a lease (since it
is safe to retry multiple requests in this case).
2018-04-11 14:58:25 -07:00
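
A sketch of a retry helper in that spirit; the real httputil package is more careful about which failures and methods are safe to retry:

    import (
        "net/http"
        "time"
    )

    // doWithRetry retries a request a few times with linear backoff. It is
    // only safe for idempotent requests such as GETs, which carry no body.
    func doWithRetry(client *http.Client, req *http.Request, attempts int) (*http.Response, error) {
        var resp *http.Response
        var err error
        for i := 0; i < attempts; i++ {
            resp, err = client.Do(req)
            if err == nil && resp.StatusCode < 500 {
                return resp, nil
            }
            if i < attempts-1 {
                if resp != nil {
                    resp.Body.Close() // drop the failed response before retrying
                }
                time.Sleep(time.Duration(i+1) * time.Second)
            }
        }
        return resp, err
    }
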
CyrusNajmabadi a759f2e085
Switch to a resource-progress oriented view for pulumi preview/update/destroy (#1116) 2018-04-10 12:03:11 -07:00
Pat Gavlin 716d6fdea4 Remove pkg/util/rendezvous.
This package is unused.
2018-04-04 12:52:51 -07:00
Sean Gillespie a3a6101e79
Improve the error message arising from missing required configs for resource providers (#1097)
* Improve the error message arising from missing required configs for
resource providers

If the resource provider that we are speaking to is new enough, it will send
across a list of keys and their descriptions alongside an error
indicating that the provider we are configuring is missing required
config. This commit packages up the list of missing keys into an error
that can be presented nicely to the user.

* Code review feedback: renaming simplification and correcting errors in comments
2018-04-04 10:08:17 -07:00
Sean Gillespie 91c550f1e0
Send structured errors across RPC boundaries (#1072)
* Send structured errors across RPC boundaries

This brings us closer to gRPC best practices where we send structured
errors with error codes across RPC endpoints. The new "rpcerrors"
package can wrap errors from RPC endpoints, so RPC servers can attach
some additional context as to why a request failed.

* Code review feedback:

1. Rename rpcerrors -> rpcerror, better package name
2. Rename RPCError -> Error, RPCErrorCause -> ErrorCause, names
suggested by gometalinter to improve their package-qualified names
3. Fix import organization in rpcerror.go
2018-03-28 17:07:35 -07:00
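
The underlying gRPC pattern the rpcerror package wraps looks roughly like this sketch, using the stock status and codes packages (function names are illustrative):

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // On the server, attach a code and message instead of an opaque error.
    func missingConfigError(key string) error {
        return status.Errorf(codes.InvalidArgument,
            "provider is missing required configuration key %q", key)
    }

    // On the client, recover the structured code for nicer presentation.
    func isMissingConfig(err error) bool {
        s, ok := status.FromError(err)
        return ok && s.Code() == codes.InvalidArgument
    }
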
Pat Gavlin a23b10a9bf
Update the copyright end date to 2018. (#1068)
Just what it says on the tin.
2018-03-21 12:43:21 -07:00
joeduffy c737a3c89a Add a package apitype comment
Addresses feedback from @lukehoban.
2018-02-28 13:24:38 -08:00
Joe Duffy fa2377dc61
Fix spinner erasure (#979)
The spinner code used \b, but didn't overwrite with spaces, so part
of the message could get left behind when other writes to stdout/err
occurred.  This change simply overwrites characters with spaces.
2018-02-23 19:36:44 -08:00
Joe Duffy 811759fb77
Fix missing emojis on Windows (#966)
I was reminded of this yesterday with unprintable characters as I
debugged some things on Windows.  Inspired by Yarn, this change adds
a new flag --emoji (-e for short) that can be used to control whether
we show ASCII-only characters or not in the console.  On Mac, it
defaults to true, and on Windows and Linux, it defaults to false.

This also brings back the retro ASCII-friendly progress spinner
when --emoji is disabled.
2018-02-21 09:42:06 -08:00
Joe Duffy 776a76dffd
Make some stack-related CLI improvements (#947)
This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.

This includes:

* If `pulumi stack select` is run by itself, use an interactive
  CLI menu to let the user select an existing stack, or choose to
  create a new one.  This looks as follows

      $ pulumi stack select
      Please choose a stack, or choose to create a new one:
        abcdef
        babblabblabble
      > currentlyselected
        defcon
        <create a new stack>

  and is navigated in the usual way (key up, down, enter).

* If a stack name is passed that does not exist, prompt the user
  to ask whether s/he wants to create one on-demand.  This hooks
  interesting moments in time, like `pulumi stack select foo`,
  and cuts down on the need to run additional commands.

* If a current stack is required, but none is currently selected,
  then pop the same interactive menu shown above to select one.
  Depending on the command being run, we may or may not show the
  option to create a new stack (e.g., that doesn't make much sense
  when you're running `pulumi destroy`, but might when you're
  running `pulumi stack`).  This again lets you do with a single
  command what would have otherwise entailed an error with multiple
  commands to recover from it.

* If you run `pulumi stack init` without any additional arguments,
  we interactively prompt for the stack name.  Before, we would
  error and you'd then need to run `pulumi stack init <name>`.

* Colorize some things nicely; for example, now all prompts will
  by default become bright white.
2018-02-16 15:03:54 -08:00
Joe Duffy 55e4dbe835
Update spinner to use modern ASCII/emoji art (#942) 2018-02-15 18:22:17 -08:00
Sean Gillespie 402a599fc7
Don't use shebangs to launch providers and correctly kill child process trees on Unix (#934)
* Don't use shebangs to launch providers and correctly kill child process trees on Unix

* Link to relevant documentation
2018-02-14 13:56:07 -08:00
Matt Ellis 5f23a9837a Permit setting multi line config from stdin
When reading a configuration value from standard in and standard in is
not connected to a terminal, read until EOF and then trim a trailing
newline (if present) to get the value

Fixes #822
2018-02-12 15:13:19 -08:00
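
A sketch of the read-until-EOF behavior described above; terminal detection is handled elsewhere:

    import (
        "io/ioutil"
        "os"
        "strings"
    )

    // readConfigValue reads the whole of stdin and trims a single trailing
    // newline, if present.
    func readConfigValue() (string, error) {
        b, err := ioutil.ReadAll(os.Stdin)
        if err != nil {
            return "", err
        }
        return strings.TrimSuffix(string(b), "\n"), nil
    }
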
Matt Ellis 818246a708 Allow control of uploaded archive root in Pulumi.yaml
Previously, when uploading a project to the service, we would only
upload the folder rooted by the Pulumi.yaml for that project. This
worked well, but it meant that customers needed to structure their
code in a way such that Pulumi.yaml was always at the root of their
project, and if they wanted to share common files between two projects
there was no good solution for doing this.

This change introduces an optional piece of metadata, named context,
that can be added to Pulumi.yaml, which allows controlling the root
folder used when computing the folder to archive. When it is set, it
is combined with the location of the Pulumi.yaml file for the project
we are uploading, and the resulting folder is used as the root of what
we upload to the service.

Fixes: #574
2018-01-31 16:22:58 -08:00
Matt Ellis b1496f3051 Remove Document and Location
These types are no longer used as pulumi no longer reads and evaluates
source code.

Contributes to #441
2018-01-30 16:42:39 -08:00
Matt Ellis ce05cce77f Provide a rudimentary progress spinner
Previously, the `pulumi` tool did not show any indication of progress
when doing a deployment. Combined with the fact that we do not create
resources in parallel it meant that sometimes `pulumi` would appear to
hang, when really it was just waiting on some resource to be created
in AWS. In addition, some AWS resources take a long time to create and
CI systems like travis will kill the job if there is no output. This
causes us (and our customers) to have to do crazy dances where we
launch shell scripts that write a dot to the console every once in a
while so we don't get killed. While we plan to overhaul the output
logic (see #617), we take a first step towards interactivity by simply
having a nice little spinner (in the interactive case) and, when run
non-interactively, having `pulumi` print a message that it is still
working.

Fixes #794
2018-01-22 14:21:08 -08:00
Matt Ellis c506549a25 Remove MustFprintf in favor of explicitly dropping errors
In travis, we've seen cases where writes to our standard streams
results in an error like: `/dev/stderr: resource temporarily
unavailable` which causes the tests to panic.

Now, in a perfect world, writes to /dev/stderr would not fail in this
way, but we do not live in a perfect world. Other processes on the
machine may make stderr/stdout non-blocking. We are now seeing this
failure in Travis more often and it is masking real Pulumi failures
we want to debug.
2018-01-16 18:33:44 -08:00
Matthew Riley 9e3976513c AssertNoError instead of Assert(err == nil)
This may not be exhaustive, but I replaced all instances I could find.
2018-01-08 13:46:21 -08:00
Matt Ellis c052e24370
Merge pull request #771 from pulumi/no-all-for-secrets
Do not allow encrypted global configuration
2018-01-03 15:04:00 -08:00
joeduffy b3ee139b91 Fix error message for stack select failures 2017-12-28 16:51:53 -08:00
Matt Ellis f510f3c914 Do not allow encrypted global configuration
The cloud backend does not support this because it computes an
encryption key per stack, so we should not support this in the CLI.

Fixes #770
2017-12-27 19:00:55 -08:00
Joe Duffy d419229301
Add additional linting (#768)
This adds additional linting checks.  Most importantly, it will
check calls to our custom format routines for missing arguments.
2017-12-27 17:10:12 -08:00
joeduffy 87079589f1 Use the retry framework for REST API retries
This change incorporates feedback on https://github.com/pulumi/pulumi/pull/764,
in addition to refactoring the retry logic to use our retry framework rather
than hand-rolling it in the REST API code.  It's a minor improvement, but at
least lets us consolidate some of this logic which we'll undoubtedly use more
of over time.
2017-12-26 10:24:08 -08:00
Joe Duffy f0c28db639
Attempt to fix colorization (#740)
Our recent changes to colorization changed from a boolean to a tri-valued
enum (Always, Never, Raw).  The events from the service, however, are still
boolean-valued.  This changes the message payload to carry the full values.
2017-12-18 11:42:32 -08:00
CyrusNajmabadi e4946a6620
Allow users to control if and how output is colorized. (#718)
Part of the work to make it easier to write tests of diff output.  Specifically, we now allow users to pass --color=option for several pulumi commands.  'option' can be one of 'always', 'never', 'raw', and 'auto' (the default).

The meaning of these flags are:

1. auto: colorize normally, unless in --debug 
2. always: always colorize no matter what
3. never: never colorize no matter what.
4. raw: colorize, but preserve the original "<{%%}>" style control codes rather than translating them to platform-specific codes.   This is for testing purposes and ensures we can have tests for this stuff across platforms.
2017-12-14 11:53:02 -08:00
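
The option set sketches out naturally as a small enum (names here are illustrative):

    // Colorization captures the four --color modes described above.
    type Colorization string

    const (
        ColorAuto   Colorization = "auto"   // colorize normally, unless in --debug
        ColorAlways Colorization = "always" // always colorize
        ColorNever  Colorization = "never"  // never colorize
        ColorRaw    Colorization = "raw"    // keep "<{%%}>" control codes verbatim
    )
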
Joe Duffy 971f6189f2
Fix pending delete replacement failure (#658)
The two-phase output properties change broke the ability to recover
from a failed replacement that yields pending deletes in the checkpoint.
The issue here is simply that we should remember pending registrations
only for logical operations that *also* have a "new" state (create or
update).  This commit fixes this, and also adds a new step test with
fault injection to probe many interesting combinations of steps.
2017-12-07 09:44:38 -08:00
Matt Ellis ffdee8bb23 Relax error check when trying to kill child processes
If the process we are trying to kill has already exited, don't treat
this as an error. This can happen when we snapshot the process tree
before the process exits but it has exited by the time we get to
trying to kill it.

Fixes #654
2017-12-06 09:52:26 -08:00
joeduffy 1c4e41b916 Improve the overall cloud CLI experience
This improves the overall cloud CLI experience workflow.

Now whether a stack is local or cloud is inherent to the stack
itself.  If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally.  Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.

For example, to initialize a new cloud stack, simply:

    $ pulumi login
    Logging into Pulumi Cloud: https://pulumi.com/
    Enter Pulumi access token: <enter your token>
    $ pulumi stack init my-cloud-stack

Note that you may log into a specific cloud if you'd like.  For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:

    $ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873

The cloud is now the default.  If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:

    $ pulumi stack init my-faf-stack --local

If you are logged in and run `pulumi`, we tell you as much:

    $ pulumi
    Usage:
      pulumi [command]

    // as before...

    Currently logged into the Pulumi Cloud ☁️
        https://pulumi.com/

And if you list your stacks, we tell you which one is local or not:

    $ pulumi stack ls
    NAME            LAST UPDATE       RESOURCE COUNT   CLOUD URL
    my-cloud-stack  2017-12-01 ...    3                https://pulumi.com/
    my-faf-stack    n/a               0                n/a

And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.

I shall write up more details and make sure to document these changes.

This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends.  The following is the overall resulting package architecture:

* The backend.Backend interface can be implemented to substitute
  a new backend.  This has operations to get and list stacks,
  perform updates, and so on.

* The backend.Stack struct is a wrapper around a stack that has
  or is being manipulated by a Backend.  It resembles our existing
  Stack notions in the engine, but carries additional metadata
  about its source.  Notably, it offers functions that allow
  operations like updating and deleting on the Backend from which
  it came.

* There is very little else in the pkg/backend/ package.

* A new package, pkg/backend/local/, encapsulates all local state
  management for "fire and forget" scenarios.  It simply implements
  the above logic and contains anything specific to the local
  experience.

* A peer package, pkg/backend/cloud/, encapsulates all logic
  required for the cloud experience.  This includes its subpackage
  apitype/ which contains JSON schema descriptions required for
  REST calls against the cloud backend.  It also contains handy
  functions to list which clouds we have authenticated with.

* A subpackage here, pkg/backend/state/, is not a provider at all.
  Instead, it contains all of the state management functions that
  are currently shared between local and cloud backends.  This
  includes configuration logic -- including encryption -- as well
  as logic pertaining to which stacks are known to the workspace.

This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
2017-12-02 14:34:42 -08:00
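
An illustrative sketch of the package shape described above; the method sets are assumptions, not the exact interfaces:

    // Backend abstracts a place where stacks live (local or cloud).
    type Backend interface {
        Name() string
        GetStack(name string) (Stack, error)
        ListStacks() ([]Stack, error)
        CreateStack(name string) (Stack, error)
    }

    // Stack wraps a stack managed by some Backend and can run operations
    // against the backend it came from.
    type Stack interface {
        Name() string
        Backend() Backend
        Update() error
        Destroy() error
    }
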
Chris Smith 454f946e8c
Wire Package.Main to the Pulumi Service. (#615)
This PR just wires the `Package.Main` field to the Pulumi Service (and in subsequent PRs, the `pulumi-service` and `pulumi-ppc` repos).

@joeduffy, should we just upload the entire `package.Package` type with the `UpdateProgramRequest` type? I'm not sure we want to treat that type as part of our public API surface area. But on the other hand, we'll need to mirror relevant fields in N places if we don't.
2017-11-30 08:14:47 -08:00
Matt Ellis 8f076b7cb3 Argument validation for CLI commands
Previously, we were inconsistent on how we handled argument validation
in the CLI. Many commands used cobra.Command's Args property to
provide a validator if they took arguments, but commands which did not
rarely used cobra.NoArgs to indicate this.

This change does two things:

1. Introduce `cmdutil.ArgsFunc` which works like `cmdutil.RunFunc`, it
wraps an existing cobra type and lets us control the behavior when an
arguments validator fails.

2. Ensure every command sets the Args property with an instance of
cmdutil.ArgsFunc. The cmdutil package defines wrappers for all the
cobra validators we are using, to prevent us from having to spell out
`cmdutil.ArgsFunc(...)` everywhere.

Fixes #588
2017-11-29 16:10:53 -08:00
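
A sketch of the wrapper described in (1); cobra.PositionalArgs is the stock validator type, while the error handling here is illustrative:

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    // ArgsFunc adapts a cobra positional-args validator so that failures
    // print a friendly message and usage instead of surfacing a raw error.
    func ArgsFunc(validate cobra.PositionalArgs) cobra.PositionalArgs {
        return func(cmd *cobra.Command, args []string) error {
            if err := validate(cmd, args); err != nil {
                fmt.Fprintf(os.Stderr, "error: %v\n", err)
                cmd.Usage()
                os.Exit(-1)
            }
            return nil
        }
    }
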
joeduffy dcfe68ce0c Fix some lint errors 2017-11-29 12:15:16 -08:00
joeduffy 1a112535af Merge branch 'master' of github.com:pulumi/pulumi into resource_parenting_lite 2017-11-29 12:14:39 -08:00
joeduffy 5762f2d0a6 Merge remote-tracking branch 'origin/resource_parenting' into resource_parenting_lite 2017-11-28 11:03:34 -08:00
Matt Ellis 4f2c599485 Provide a way to opt out of default ignores
Outside of `.pulumiignore` we support a few "default" excludes that
try to push folks towards a pit of success.

Previously, there was no way to opt out of these, which would be bad
if our heuristics caused something you really care about to be
elided. With this change, we add an optional setting in Pulumi.yaml
that allows you to opt out of this behavior.

As part of the work, I changed .git to be one of these "default"
excludes instead of it only happening if you had a .pulumiignore file
in a directory.
2017-11-22 12:13:44 -08:00
Matt Ellis e994f8394a Only archive production node_modules
Previously, we would archive every file in node_modules; however, we
only actually needed the production modules. This change adds an
implicit ignorer when walking a directory that has a package.json file
in it, to exclude stuff under `node_modules` that is not pulled in via
an entry in the `dependencies` section of the package.json.

This change will also cause the archives that we upload to not include
either pulumi or any @pulumi/... packages, since they are not listed
in the `dependencies` section of a package.json. This is fine, since we
linked in the versions we have in the cloud anyway. When we move to
using npm for these packages (instead of linking) then they will be
included.
2017-11-21 17:00:49 -08:00
Matt Ellis 71ff079606 Include directory entries in archive
This won't be needed once pulumi/pulumi-ppc#95 has landed and we've
updated our PPCs, but for now always add directory entries (even if
the files would be excluded) for every directory.
2017-11-21 16:20:02 -08:00
Matt Ellis f953794363 Support .pulumiignore
When deploying a project via the Pulumi.com service, we have to upload
the entire "context" of your project to Pulumi.com. The context of the
program is all files in the directory tree rooted by the `Pulumi.yaml`
file, which will often contain stuff we don't want to upload, but
previously we had no control over what would be uploaded (and so folks
would do hacky things like delete folders before running `pulumi
update`).

This change adds support for `.pulumiignore` files which should behave
like `.gitignore`. In addition, we were not previously compressing
files when we added them to the zip archive we uploaded; now we do.

By default, every .pulumiignore file is treated as if it had an
exclusion for `.git/` at the top of the file (users can override this
by adding an explicit `!.git/` to their file) since it is very
unlikely for there to ever be a reason to upload the .git folder to
the service.

Fixes pulumi/pulumi-service#122
2017-11-21 12:09:18 -08:00
joeduffy 7e48e8726b Add (back) component outputs
This change adds back component output properties.  Doing so
requires splitting the RPC interface for creating resources in
half, with an initial RegisterResource which contains all of the
input properties, and a final CompleteResource which optionally
contains any output properties synthesized by the component.
2017-11-20 17:38:09 -08:00
joeduffy a591775409 Add a missing copyright header 2017-11-19 08:08:30 -08:00
Chris Smith 84cd810112
Move program uploads to the CLI (#571)
In an effort to improve performance and overall reliability, this PR moves the responsibility of uploading the Pulumi program from the Pulumi Service to the CLI. (Part of fixing https://github.com/pulumi/pulumi-service/issues/313.)

Previously the CLI would send (the dozens of MiB) program archive to the Service, which would then upload the data to S3. Now the CLI sends the data to S3 directly, avoiding the unnecessary copying of data around.

The Service-side API changes are in https://github.com/pulumi/pulumi-service/pull/323. I tested previews, updates, and destroys running the service and PPC on localhost.

The PR refactors how we handle the three kinds of program updates, and just unifies them into a single method. This makes the diff look crazy, but the code should be much simpler. I'm not sure what to do about supporting all the engine options for the Cloud-variants of Pulumi commands; I suspect that's something that should be handled at a later time.
2017-11-15 13:27:28 -08:00
Pat Gavlin 234f0816e5 Stop formatting output that should be raw.
These changes introduce a new field, `Raw`, to `diag.Message`. This
field indicates that the contents of the message are not a format string
and should not be rendered via `Sprintf` during stringification.

The plugin std{out,err} readers have been updated to use raw messages,
and the event reader in `pulumi` has been fixed s.t. it does not format
event payloads before display.

Fixes #551.
2017-11-14 11:26:41 -08:00
Luke Hoban af5298f4aa
Initial work on tracing support (#521)
Adds OpenTracing in the Pulumi engine and plugin + langhost subprocesses.

We currently create a single root span for any `Engine.plan` operation - which is a single `preview`, `update`, `destroy`, etc.

The only sub-spans we currently create are at gRPC boundaries, both on the client and server sides and on both the langhost and provider plugin interfaces.

We could extend this to include spans for any other semantically meaningful sections of compute inside the engine, though initial examples show we get pretty good granularity of coverage by focusing on the gRPC boundaries.

In the future, this should be easily extensible to HTTP boundaries and to track other bulky I/O like datastore read/writes once we hook up to the PPC and Pulumi Cloud.

We expose a `--trace <endpoint>` option to enable tracing on the CLI, which we will aim to thread through to subprocesses.

We currently support sending tracing data to a Zipkin-compatible endpoint.  This has been validated with both Zipkin and Jaeger UIs.

We do not yet have any tracing inside the TypeScript side of the JS langhost RPC interface.  There is not yet automatic gRPC OpenTracing instrumentation (though it looks like it's in progress now) - so we would need to manually create meaningful spans on that side of the interface.
2017-11-08 17:08:51 -08:00
Matt Ellis fd64125daf Aggregate process termination errors 2017-10-30 23:35:11 -07:00
Matt Ellis 95ee6d85f6 Kill plugin child processes as well on Windows
On windows, we have to indirect through a batch file to launch plugins,
which means when we go to close a plugin, we only kill the cmd.exe that is
running the batch file and not the underlying node process. This
prevents `pulumi` from exiting cleanly. So on Windows, we also kill any
direct children of the plugin process

Fixes #504
2017-10-30 23:22:14 -07:00
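
One way to kill a whole tree on Windows is to shell out to taskkill, as in this sketch; the actual implementation may instead enumerate children via the Windows API:

    import (
        "os/exec"
        "strconv"
    )

    // killProcessTree forcibly kills pid and all of its descendants; /T walks
    // the tree, /F forces termination.
    func killProcessTree(pid int) error {
        return exec.Command("taskkill", "/T", "/F", "/PID", strconv.Itoa(pid)).Run()
    }
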
Matt Ellis 3f1197ef84 Move .pulumi to root of a repository
Now, instead of having a .pulumi folder next to each project, we have
a single .pulumi folder in the root of the repository. This is created
by running `pulumi init`.

When run in a git repository, `pulumi init` will place the .pulumi
folder next to the .git folder, so it can be shared across all projects
in a repository. When not in a git repository, it will be created in
the current working directory.

We also start tracking information about the repository itself, in a
new `repo.json` file stored in the root of the .pulumi folder. The
information we track are "owner" and "name" which map to information
we use on pulumi.com.

When run in a git repository with a remote named origin pointing to a
GitHub project, we compute the owner and name by deconstructing
information from the remote's URL. Otherwise, we just use the current
user's username and the name of the current working directory as the
owner and name, respectively.
2017-10-27 11:46:21 -07:00
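
A sketch of the owner/name extraction for common GitHub remote shapes (the handled URL forms are assumptions):

    import "strings"

    // parseGitHubRemote handles remotes like "git@github.com:pulumi/pulumi.git"
    // and "https://github.com/pulumi/pulumi.git".
    func parseGitHubRemote(url string) (owner, name string, ok bool) {
        for _, prefix := range []string{"git@github.com:", "https://github.com/"} {
            if !strings.HasPrefix(url, prefix) {
                continue
            }
            path := strings.TrimSuffix(strings.TrimPrefix(url, prefix), ".git")
            if parts := strings.SplitN(path, "/", 2); len(parts) == 2 {
                return parts[0], parts[1], true
            }
        }
        return "", "", false
    }
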
Chris Smith 95062100f7 Enable pulumi update to target the Console (#461)
Adds `pulumi update` so you can deploy to the Pulumi Console (via PPC on the backend).

As per an earlier discussion (now lost because I rebased/squashed the commits), we want to be more deliberate about how to bifurcate "local" and "cloud" versions of every Pulumi command.

We can block this PR until we do the refactoring to have `pulumi` commands go through a generic "PulumiCloud" interface. But it would be nice to commit this so I can do more refining of the `pulumi` -> Console -> PPC workflow. 

Another known area that will need to be revisited is how we render the PPC events on the CLI. Update events from the PPC are generated in a different format than the `engine.Event`, and we'll probably want to change the PPC to emit messages in the same format. (e.g. how we handle coloring, etc.)
2017-10-25 10:46:05 -07:00
pat@pulumi.com ce18c8293b Do not trap signals in rpcutil.Serve.
Trapping these signals hijacks the usual termination behavior for any
program that happens to link in the engine and perform an operation
that starts a gRPC server. These servers already provide a cancellation
mechanism via a `cancel` channel parameter; if the program using them wants
to gracefully terminate these servers on some signal, it is responsible
for providing that behavior.

This also fixes a leak in which the goroutine responsible for waiting on
a server's signals and cancellation channel would never exit.
2017-10-24 14:35:59 -07:00
joeduffy a7d99a0c80 Preserve Pulumi.yaml while applying edits
Now that config is stored in Pulumi.yaml, we need to mimic the behavior
around .pulumi/ while edits are applied.  This will ensure that config
values carry forward from the original program settings.

This fixes pulumi/pulumi-aws#48.
2017-10-23 05:27:26 -07:00
Joe Duffy 69f7f51375 Many asset improvements
This improves a few things about assets:

* Compute and store hashes as input properties, so that changes on
  disk are recognized and trigger updates (pulumi/pulumi#153).

* Issue explicit and prompt diagnostics when an asset is missing or
  of an unexpected kind, rather than failing late (pulumi/pulumi#156).

* Permit raw directories to be passed as archives, in addition to
  archive formats like tar, zip, etc. (pulumi/pulumi#240).

* Permit not only assets as elements of an archive's member list, but
  also other archives themselves (pulumi/pulumi#280).
2017-10-22 13:39:21 -07:00
joeduffy 37c7a955d7 Optionally emit stack traces for errors
If --logtostderr is passed, and an unhandled error occurs that
was produced by the github.com/pkg/errors package, we will now
emit the stack trace.  Much easier for debugging purposes.
2017-10-20 19:26:18 -07:00
Matt Ellis 7587bcd7ec Have engine emit "events" instead of writing to streams
Previously, the engine would write to io.Writer's to display output.
When hosted in `pulumi` these writers were tied to os.Stdout and
os.Stderr, but other applications hosting the engine could send them
other places (e.g. a log to be sent to an another application later).

While much better than just using the ambient streams, this was still
not the best. It would be ideal if the engine could just emit strongly
typed events and whatever is hosting the engine could care about
displaying them.

As a first step down that road, we move to a model where operations on
the engine now take a `chan engine.Event` and during the course of the
operation, events are written to this channel. It is the
responsibility of the caller of the method to read from the channel
until it is closed (signifying that the operation is complete).

The events we do emit still intermingle presentation with data,
which is unfortunate, but can be improved over time. Most of the
events today are just colorized in the client and printed to stdout or
stderr without much thought.
2017-10-09 18:24:56 -07:00
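
The calling pattern reduces to draining a channel until it is closed; Event and Deploy below are stand-ins for the real engine types:

    package main

    import "fmt"

    // Event stands in for engine.Event; Deploy stands in for an engine
    // operation that streams events and closes the channel when it is done.
    type Event struct{ Message string }

    func Deploy(events chan<- Event) {
        defer close(events) // closing signifies the operation is complete
        events <- Event{Message: "creating resource A"}
        events <- Event{Message: "creating resource B"}
    }

    func main() {
        events := make(chan Event)
        go Deploy(events)
        for e := range events {
            fmt.Println(e.Message) // the host decides how to present each event
        }
    }
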
Joe Duffy f6e694c72b Rename pulumi-fabric to pulumi
This includes a few changes:

* The repo name -- and hence the Go modules -- changes from pulumi-fabric to pulumi.

* The Node.js SDK package changes from @pulumi/pulumi-fabric to just pulumi.

* The CLI is renamed from lumi to pulumi.
2017-09-21 19:18:21 -07:00
Matt Ellis 25ae463915 Listen only on 127.0.0.1
Instead of binding on 0.0.0.0 (which will listen on every interface)
let's only listen on localhost. On windows, this both makes the
connection Just Work and also prevents the Windows Firewall from
blocking the listen (and displaying UI saying it has blocked an
application and asking if the user should allow it)
2017-09-21 10:56:45 -07:00
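
The change amounts to the bind address passed to net.Listen:

    import "net"

    // listenLoopback binds only the loopback interface rather than every
    // interface (0.0.0.0), avoiding the Windows Firewall prompt.
    func listenLoopback(port string) (net.Listener, error) {
        return net.Listen("tcp", "127.0.0.1:"+port)
    }
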
joeduffy f189c40f35 Wire up Lumi to the new runtime strategy
🔥 🔥 🔥  🔥 🔥 🔥

Getting closer on #311.
2017-09-04 11:35:21 -07:00
joeduffy 35aa6b7559 Rename pulumi/lumi to pulumi/pulumi-fabric
We are renaming Lumi to Pulumi Fabric.  This change simply renames the
pulumi/lumi repo to pulumi/pulumi-fabric, without the CLI tools and other
changes that will follow soon afterwards.
2017-08-02 09:25:22 -07:00
joeduffy 539ccc8f04 Add a --debug option to plan, deploy, and destroy
This change introduces a --debug option to the plan, deploy, and
destroy commands.  Unlike --logtostderr, which merely hooks into the
copious Glogging that we perform (and is therefore meant for developers
of the tools themselves and not end users), --debug hooks into the
user-facing debug stream.  This now includes any debug messages coming
from the resource providers as they perform their tasks.
2017-07-13 17:13:19 -07:00
joeduffy 23045c5792 Simply panic for failfast
The old contract library tried to be glog-friendly in its failfast behavior.
It turns out glog seldom does the right thing when goroutines are involved
(which, as of last sprint, they now are).  We already had issues with stacks
not getting printed when --logtostderr was turned on, and the code tried
to work around this; but this still didn't work for the goroutines case.

All of this seems like way too much cleverness.  Let's just use Go panics.
2017-06-27 11:12:06 -07:00
joeduffy 2daea4c3d8 Clarify aspects of using the DCO 2017-06-26 14:46:34 -07:00
joeduffy 3c1041af49 Update license headers 2017-06-23 14:53:41 -07:00
joeduffy 8b57310854 Tidy up more lint
This change fixes a few things:

* Most importantly, we need to place a leading "." in the paths
  to Gometalinter, otherwise some sub-linters just silently skip
  the directory altogether.  errcheck is one such linter, which
  is a very important one!

* Use an explicit Gometalinter.json file to configure the various
  settings.  This flips on a few additional linters that aren't
  on by default (like line length checking).  Sadly, a few that
  I'd like to enable take waaaay too much time, so in the future
  we may consider a nightly job (this includes code similarity,
  unused parameters, unused functions, and others that generally
  require global analysis).

* Now that we're running more, however, linting takes a while!
  The core Lumi project now takes 26 seconds to lint on my laptop.
  That's not terrible, but it's long enough that we don't want to
  do the silly "run them twice" thing our Makefiles were previously
  doing.  Instead, we shall deploy some $$(($${PIPESTATUS[1]}-1))-fu
  to rely on the fact that grep returns 1 on "zero lines".

* Finally, fix the many issues that this turned up.

I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
2017-06-22 12:09:46 -07:00
joeduffy 7fe8052941 Fix some lint in our lint
After 233c5a8 landed, I noticed there are a few things to be fixed up:

    * Run gometalinter in all the right places.  We need to run both in
      lint and lint_quiet targets.  I've also cleaned up some of the logic
      around what to suppress so there's less repetition.

    * We currently @ meaningful commands, which is unfortunate, since it
      makes debugging Makefiles tough (especially when looking at CI build
      logs).  Going forward, we should only use @ for meaningless commands,
      like @echo.

    * The AWS project wasn't actually running tslint, because it needs to
      say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
      The current script of `tslint lib/aws/pack/...` wasn't actually
      running lint, hence we missed a lot of AWS lint issues.

    * Fix up the issues that these fixes uncovered.  Mostly err shadowing.
2017-06-21 13:24:35 -07:00
joeduffy 97deabb9bd Finish interface for reading configuration
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface.  In summary:

    * Instead of using the NullSource for destructions -- which
      doesn't hook up an interpreter and so any reads of configuration
      variables will fail -- we will enlighten the EvalSource to know
      how to orchestrate destruction interpretation.  The primary
      difference is that we don't actually run the code, but *we do*
      perform all of the necessary configuration and variable init.

    * Associate the active interpreter with the plugin context as
      we are executing, so that the host object can actually read the
      state from the heap as requested to do so by attached plugins.

    * Rename anything "engine" related to use the term "host"; this
      avoids unnecessarily introducing new terminology.

    * Add a new pkg/resource/provider/ package where we can begin
      consolidating helper functionality for resource providers.
      Right now, this includes a wrapper interface atop the gRPC
      machinery necessary to contact the host, in addition to a
      Main function that hides some boilerplate entrypoint code.

    * Add a rpcutil.IsBenignCloseErr routine to let us ignore
      "benign" gRPC errors that are knowingly returned at shutdown.

This commit completes pulumi/lumi#117.
2017-06-21 10:31:06 -07:00
joeduffy d7093188f0 Introduce an interface to read config
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.

The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible.  We will still auto-log any
stdout/stderr writes; however, explicit errors, warnings, informational,
and even debug messages may be written over the Log API.
2017-06-20 19:45:07 -07:00
joeduffy 5f9ed13069 Simplify Check; make it tolerant of computed values
This change simplifies the generated Check interface for providers.
Instead of

    Check(ctx context.Context, obj *T) ([]error, error)

where T is the resource type, we have

    Check(ctx context.Context, obj *T, property string) error

This is done so that we can drive the calls to Check one property
at a time, allowing us to skip any that are computed.  (Otherwise,
we may fail the verification erroneously.)

This has the added advantage that the Check implementations are
simpler and can simply return a single error.  Furthermore, the
generated RPC code handles wrapping the result, so we can just do

    return errors.New("bad");

rather than the previous reflection-laden junk

    return resource.NewFieldError(
        reflect.TypeOf(obj), awsservice.AWSResource_Property,
        errors.New("bad"))
2017-06-16 13:34:11 -07:00
joeduffy d013501e04 Restructure Rendezvous to have a distinct Let vs. Meet
On the first turn, we want to distinguish between a coroutine
running that owns its turn, and a coroutine that knows it doesn't
own the turn and is simply awaiting its turn.  The old Meet logic
wasn't quite right; instead, we'll have the caller tell us this.
2017-06-15 18:20:12 -07:00
joeduffy 9698309f2b Model resource ID and URN as output properties
This change exposes ID and URN properties on resources, as appropriate,
so that they may be read and used in Lumi scripts.
2017-06-14 17:00:13 -07:00
joeduffy 792490e814 Fix the polarity of some asserts 2017-06-14 09:44:58 -07:00
Luke Hoban 282f40d3e3 Merge branch 'master' into bforsyth927-gometalinter 2017-06-13 16:28:12 -07:00
Britton Forsyth 01003ad48b Implemented highlighted edits 2017-06-13 11:01:23 -07:00
joeduffy d1414af321 Fix a few minor things; clean stuff up
* Assert new things in new places.

* Log more interesting tidbits during evaluation.

* Invoke the OnStart hook before triggering initializers.

* Tolerate nil prev snapshots during deletion calculation.

* Handle and serialize missing resource IDs as output props.

* Return "done" flag from Rendezvous.Meet.
2017-06-13 07:10:13 -07:00
joeduffy d044720045 Make more progress on the new deployment model
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.

The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.

The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine.  It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.

There are two other sources, however.  First, a nullSource, which simply
refuses to create new objects.  This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment.  Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.

Boatloads of code is now changed and updated in the various CLI commands.

This further chugs along towards pulumi/lumi#90.  The end is in sight.
2017-06-13 07:10:13 -07:00
joeduffy 6b2408e086 Rewrite plans and deployments
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.

The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
2017-06-13 07:10:13 -07:00
Britton Forsyth 7457cadf58 Fixed various additional linting issues 2017-06-08 10:21:17 -07:00
joeduffy db99092334 Implement mapper.Encode "for real"
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment).  It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.

During this, I took the opportunity to also clean up a few other things
that had been bugging me.  Namely, the presence of `mapper.Object` was
always error prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI.  Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type.  Even better, we
no longer require resource providers to deal with the mapper
infrastructure.  Instead, the `Check` function can simply return an
array of errors.  It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.

As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.

This completes pulumi/lumi#183.
2017-06-05 17:49:00 -07:00
joeduffy 43bcbed23d Tidy up project loading for pack commands
There are a few things that annoyed me about the way our CLI works with
directories when loading packages.  For example, `lumi pack info some/pack/dir/`
never worked correctly.  This is unfortunate when scripting commands.
This change fixes the workspace detection logic to handle these cases.
2017-06-02 12:43:04 -07:00
joeduffy e2cb211d93 Enable parallel tests
This change enables parallelism for our tests.

It also introduces a `test_core` Makefile target to just run the
core engine tests, and not the providers, since they take a long time.
This is intended only as part of the inner developer loop.
2017-06-01 14:01:26 -07:00
joeduffy 08ca40c6c6 Unmap properties on the receive side
This change makes progress on a few things with respect to properly
receiving properties on the engine side, coming from the provider side,
of the RPC boundary.  The issues here are twofold:

    1. Properties need to get unmapped using a JSON-tag-sensitive
       marshaler, so that they are cased properly, etc.  For that, we
       have a new mapper.Unmap function (which is ultra lame -- see
       pulumi/lumi#138).

    2. We have the reverse problem with respect to resource IDs: on
       the send side, we must translate from URNs (which the engine
       knows about) and provider IDs (which the provider knows about);
       similarly, then, on the receive side, we must translate from
       provider IDs back into URNs.

As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
2017-06-01 08:39:48 -07:00
joeduffy 87ad371107 Only flow logging to plugins if --logflow
The change to flow logging to plugins is nice, however, it can be
annoying because all writes to stderr are interpreted on the Lumi
side as errors.  After this change, we will only flow if
--logflow is passed, e.g. as in

    $ lumi --logtostderr --logflow -v=9 deploy ...
2017-06-01 08:37:56 -07:00
joeduffy 0a72d5360a Modify provider creates; use get for outs
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so.  As a result, this change takes an initial
whack at implementing Get on all existing AWS resources.  The Get API needs
to return a fully populated structure containing all inputs and outputs.

Believe it or not, this is actually part of pulumi/lumi#90.

This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.

This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties.  For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
2017-06-01 08:36:43 -07:00
joeduffy ab6e2466c7 Flow logging information to plugins
This change flows --logtostderr and -v=x settings to any dynamically
loaded plugins so that running Lumi's command line with these flags
will also result in the plugins logging at the requested levels.  I've
found this handy for debugging purposes.
2017-05-30 10:19:33 -07:00
joeduffy 4108c51549 Reclassify Lumi under the Apache 2.0 license
This is part of pulumi/lumi#147.
2017-05-18 14:51:52 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy f429bc6a0c Use github.com/pkg/errors for errors
This change moves us over to the github.com/pkg/errors package to
encourage the addition of more context associated with failures.
2017-04-19 14:46:50 -07:00
joeduffy da75f62865 Retry lambda creation until IAM role is available
Per Amazon's own documentation,
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role,
IAM roles may take "several seconds" to propagate.  In the meantime, we
are apt to get the dreaded "role defined for this function cannot be assumed"
error message.  In response, we'll do what the AWS documentation suggests:
wait a bit and retry.
2017-04-18 13:56:19 -07:00
joeduffy 6b4cab557f Refactor glog init swizzle to a shared package 2017-04-13 05:27:45 -07:00
joeduffy ae1e43ce5d Refactor shared command bits into pkg/cmdutil
This paves the way for more Go-based command line tools that can
share some of the common utility functions around diagnostics and
exit codes.
2017-04-12 11:12:25 -07:00
joeduffy 9c1ea1f161 Fix some poor hygiene
A few linty things crept in; this addresses them.
2017-04-08 07:44:02 -07:00
joeduffy 95f59273c8 Update copyright notices from 2016 to 2017 2017-03-14 19:26:14 -07:00
joeduffy 80d19d4f0b Use the object mapper to reduce provider boilerplate
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one.  As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
2017-03-12 14:13:44 -07:00
joeduffy b4b4d26844 Add pkg/util/rpcutil to cut down on some plugin boilerplate
This change eliminates some of the boilerplate required to create a
new plugin; mostly this is gRPC-related code.
2017-03-11 09:23:09 -08:00
joeduffy c633d0ceb0 Add "still waiting" messages to retries 2017-03-02 13:12:40 -08:00
joeduffy 1c43abffec Fix some go vet issues 2017-02-28 11:02:33 -08:00
joeduffy 1bdd24395c Recognize TextUnmarshaler and use it
This change recognizes TextUnmarshaler during object mapping, and
will defer to it when we have a string but are assigning to a
non-string target that implements the interface.
2017-02-26 11:51:38 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy 53cf9f8b60 Tidy up a few things
* Print a pretty message if the plan has nothing to do:

        "info: nothing to do -- resources are up to date"

* Add an extra validation step after reading in a snapshot,
  so that we detect more errors sooner.  For example, I've
  fed in the wrong file several times, and it just chugs
  along as though it were actually a snapshot.

* Skip printing nulls in most plan outputs.  These just
  clutter up the output.
2017-02-24 16:44:46 -08:00
joeduffy 43432babc5 Wait for SecurityGroups to become active 2017-02-23 10:18:37 -08:00
joeduffy d9e6d8a207 Add a simple retry.Until package/function
This change introduces a simple retry package which will be used in
resource providers in order to perform waits, backoffs, etc.  It's
pretty basic right now but leverages the fancy new Golang context
support for deadlines and timeouts.  Soon we will layer AWS-specific
acceptors, etc. atop this more general purpose thing.
2017-02-22 19:16:46 -08:00
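
A sketch of such an Until helper built on context deadlines; the real package's signature may differ:

    import (
        "context"
        "time"
    )

    // Until calls accept at the given interval until it succeeds, it fails,
    // or the context is canceled or times out.
    func Until(ctx context.Context, interval time.Duration, accept func() (bool, error)) error {
        for {
            ok, err := accept()
            if err != nil || ok {
                return err
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(interval):
                // back off, then try again
            }
        }
    }
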
joeduffy f00b146481 Echo resource provider outputs
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins stdout/stderr streams to it.  In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error.  This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs, however I hope we don't have to deviate too much from this.
2017-02-22 18:53:36 -08:00
joeduffy 32960be0fb Use export tables
This change redoes the way module exports are represented.  The old
mechanism -- although laudible for its attempt at consistency -- was
wrong.  For example, consider this case:

    let v = 42;
    export { v };

The old code would silently add *two* members, both with the name "v",
one of which would be dropped since the entries in the map collided.

It would be easy enough just to detect collisions, and update the
above to mark "v" as public, when the export was encountered.  That
doesn't work either, as the following two examples demonstrate:

    let v = 42;
    export { v as w };
    let x = w; // error!

This demonstrates:

    * Exporting "v" with a different name, "w" to consumers of the
      module.  In particular, it should not be possible for module
      consumers to access the member through the name "v".

    * An inability to access the exported name "w" from within the
      module itself.  This is solely for external consumption.

Because of this, we will use an export table approach.  The exports
live alongside the members, and we are smart about when to consult
the export table, versus the member table, during name binding.
2017-02-13 09:56:25 -08:00
joeduffy 6e472f8ab4 Colorize Mu output
This change adds colorization to the core Mu tool's output, similar
to what we added to MuJS in
cf6bbd460d.
2017-02-09 17:26:49 -08:00
joeduffy e14ca5f4ff Manually print the stack-trace when --logtostderr is enabled
Glog doesn't actually print out the stack traces for all goroutines,
when --logtostderr is enabled, the same way it normally does.  This
makes debugging more complex in some cases.  So, we'll manually do it.
2017-02-09 14:50:41 -08:00
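
The manual dump can be done with runtime.Stack, as in this sketch:

    import (
        "os"
        "runtime"
    )

    // dumpGoroutines prints stack traces for all goroutines to stderr;
    // all=true asks runtime.Stack for every goroutine, not just the caller's.
    func dumpGoroutines() {
        buf := make([]byte, 1<<20)
        n := runtime.Stack(buf, true)
        os.Stderr.Write(buf[:n])
    }
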
joeduffy c8044b66ce Fix up a bunch of golint errors 2017-01-27 15:42:39 -08:00
joeduffy 289a0d405d Revive some compiler tests
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.

It also fixes up some bugs uncovered during this:

* Don't claim that a symbol's kind is incorrect in the binder error
  message when it wasn't found.  Instead, say that it was missing.

* Do not attempt to compile if an error was issued during workspace
  resolution and/or loading of the Mufile.  Otherwise we end up trying
  to load an empty path, and badness (a crash) quickly ensues.

* Issue an error if the Mufile wasn't found (this got lost apparently).

* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
  since missing names are now caught by our new fancy decoder that
  understands required versus optional fields.  We still need to guard
  against illegal characters in the name, including the empty string "".

* During decoding, reject !src.IsValid elements.  This represents the
  zero value and should be treated equivalently to a missing field.

* Do not permit empty strings "" as Names or QNames.  The old logic
  accidentally permitted them because regexp.FindString("") == "", no
  matter the regex!  (See the sketch at the end of this message.)

* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
  allowing us to share this common code across multiple package tests.

* Fix up a few messages that needed tidying or to use Infof vs. Info.

The binder tests -- deleted in this change -- are about to come back;
however, I am splitting up the changes, since this represents a
passing fixed point.
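
To illustrate that regexp pitfall: for empty input, FindString returns
"" whether there is no match or an empty match, so a validity check
must reject "" explicitly.  A minimal sketch (the pattern and function
name are illustrative, not the real code):

    package main

    import "regexp"

    // nameRegexp is illustrative; the real Name/QName patterns differ.
    var nameRegexp = regexp.MustCompile(`[A-Za-z_][A-Za-z0-9_]*`)

    // isLegalName guards against "" explicitly, since
    // nameRegexp.FindString("") == "" no matter the pattern, which
    // would make a bare FindString(s) == s check pass vacuously for "".
    func isLegalName(s string) bool {
        return s != "" && nameRegexp.FindString(s) == s
    }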
2017-01-26 15:30:08 -08:00
joeduffy 25632886c8 Begin overhauling semantic phases
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler.  A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.

The changes are too numerous to outline in this commit message;
however, here are the noteworthy ones:

    * Split up the notion of symbols and tokens, resulting in:

        - pkg/symbols for true compiler symbols (bound nodes)
        - pkg/tokens for name-based tokens, identifiers, constants

    * Several packages move underneath pkg/compiler:

        - pkg/ast becomes pkg/compiler/ast
        - pkg/errors becomes pkg/compiler/errors
        - pkg/symbols becomes pkg/compiler/symbols

    * pkg/ast/... becomes pkg/compiler/legacy/ast/...

    * pkg/pack/ast becomes pkg/compiler/ast.

    * pkg/options goes away, merged back into pkg/compiler.

    * All binding functionality moves underneath a dedicated
      package, pkg/compiler/binder.  The legacy.go file contains
      cruft that will eventually go away, while the other files
      represent a halfway point between new and old, but are
      expected to stay roughly in the current shape.

    * All parsing functionality is moved underneath a new
      pkg/compiler/metadata namespace, and we adopt new terminology
      "metadata reading" since real parsing happens in the MetaMu
      compilers.  Hence, Parser has become metadata.Reader.

    * In general, phases of the compiler no longer share access to
      the actual compiler.Compiler object.  Instead, shared state is
      moved to the core.Context object underneath pkg/compiler/core.

    * Dependency resolution during binding has been rewritten to
      the new model, including stashing bound package symbols in the
      context object, and detecting import cycles.

    * Compiler construction does not take a workspace object.  Instead,
      creation of a workspace is entirely hidden inside of the compiler's
      constructor logic.

    * There are three Compile* functions on the Compiler interface, to
      support different styles of invoking compilation (see the sketch
      at the end of this message): Compile() auto-detects a Mu package,
      based on the workspace; CompilePath(string) loads the target as a
      Mu package and compiles it, regardless of the workspace settings;
      and, CompilePackage(*pack.Package) will compile a pre-loaded
      package AST, again regardless of workspace.

    * Delete the _fe, _sema, and parsetree phases.  They are no longer
      relevant and the functionality is largely subsumed by the above.

...and so very much more.  I'm surprised I ever got this to compile again!
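
For reference, a sketch of the Compile* interface shape described in
the list above; signatures are illustrative and elide return values:

    package compiler

    // Package stands in for *pack.Package (the pre-loaded package AST).
    type Package struct{}

    // Compiler supports three styles of invoking compilation.
    type Compiler interface {
        Compile()                    // auto-detect a Mu package via the workspace
        CompilePath(path string)     // load path as a Mu package; ignore workspace
        CompilePackage(pkg *Package) // compile a pre-loaded package AST
    }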
2017-01-18 12:18:37 -08:00
joeduffy 01658d04bb Begin merging MuPackage/MuIL into the compiler
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".

In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output.  Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.

An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.

The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object.  This is more natural anyway and moves some of the detection
logic "outside" of the Compiler.  Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.

Finally, all that templating crap is gone.  This alone is cause
for mass celebration!
2017-01-17 17:04:15 -08:00
joeduffy bbb60799f8 Add a Require family of functions to pkg/util/contract 2017-01-17 15:58:11 -08:00
joeduffy 1948380eb2 Add custom decoders to eliminate boilerplate
This change overhauls the approach to custom decoding.  Instead of decoding
the parts of the struct that are "trivial" in one pass, and then patching up
the structure afterwards with custom decoding, the decoder itself understands
the notion of custom decoder functions.

First, the general purpose logic has moved out of pkg/pack/encoding and into
a new package, pkg/util/mapper.  Most functions are now members of a new top-
level type, Mapper, which may be initialized with custom decoders.  This
is a map from target type to a function that can decode objects into it.

Second, the AST-specific decoding logic is rewritten to use it.  All AST nodes
are now supported, including definitions, statements, and expressions.  The
overall approach here is to simply define a custom decoder for any interface
type that will occur in a node field position.  The mapper, upon encountering
such a type, will consult the custom decoder map; if a decoder is found, it
will be used, otherwise an error results.  This decoder then needs to switch
on the type-discriminating kind field that is present in the metadata, creating
a concrete struct of the right type, and then converting it to the desired
interface type.  Note that, subtly, interface types used only for "marker"
purposes don't require any custom decoding, because they do not appear in
field positions and therefore won't be encountered during the decoding process.
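
A condensed sketch of the mechanism; the types and names here are
illustrative rather than the real pkg/util/mapper API:

    package mapper

    import (
        "fmt"
        "reflect"
    )

    // Decoder turns a raw object (e.g., unmarshaled JSON) into a value
    // of the target type for which it was registered.
    type Decoder func(obj map[string]interface{}) (interface{}, error)

    // Mapper decodes objects into structs, consulting custom decoders
    // for interface types it cannot decode mechanically.
    type Mapper struct {
        customs map[reflect.Type]Decoder // target type => custom decoder
    }

    // decode prefers a registered custom decoder; for interface types
    // without one, an error results, per the description above.
    func (m *Mapper) decode(t reflect.Type, obj map[string]interface{}) (interface{}, error) {
        if d, ok := m.customs[t]; ok {
            return d(obj) // e.g., switch on the "kind" discriminator within
        }
        if t.Kind() == reflect.Interface {
            return nil, fmt.Errorf("no custom decoder for interface type %v", t)
        }
        // ... mechanical field-by-field decoding elided in this sketch ...
        return nil, nil
    }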
2017-01-16 09:41:26 -08:00
joeduffy 5f33292496 Move assertion/failure functions
This change just moves the assertion/failure functions from the pkg/util
package to pkg/util/contract, so things read a bit nicer (i.e.,
`contract.Assert(x)` versus `util.Assert(x)`).
2017-01-15 14:26:48 -08:00
joeduffy 11f7f4963f Go back to glog.Fail
Well, it turns out glog.Fail is slightly better than panic, because it explicitly
dumps the stacks of *all* goroutines.  This is especially good in logging scenarios.
It's really annoying that glog suppresses printing the stack trace
(see https://github.com/golang/glog/blob/master/glog.go#L719), but
this is the better trade-off.
2016-12-01 11:50:19 -08:00
joeduffy 3a7fa9a983 Use panic for fail-fast
This change switches away from using glog.Fatalf, and instead uses panic,
should a fail-fast arise (due to a call to util.Fail or a failed assertion
by way of util.Assert).  This leads to a better debugging experience no
matter what flags have been passed to glog.  For example, glog.Fatal* seems
to suppress printing stack traces when --logtostderr is supplied.
2016-12-01 11:43:41 -08:00
joeduffy 5a8069e2fe Fix a silly message 2016-11-22 11:25:51 -08:00
joeduffy c20c151edf Use assertions in more places
This change mostly replaces explicit if/then/glog.Fatalf calls with
util.Assert calls.  In addition, it adds a companion util.Fail family
of methods that does the same thing as a failed assertion, except that
it is unconditional.
2016-11-19 16:13:13 -08:00
joeduffy 652305686d Add some simple assertion functions
This introduces three assertion functions:

* util.Assert: simply asserts a boolean condition, ripping down the
  process using glog should it fail, with a stock message.

* util.AssertM: asserts a boolean condition, also using glog if it fails,
  but it comes with a message to help with debugging too.

* util.AssertMF: the same, except it formats the message with arguments.
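
Roughly, the shape of these helpers (a sketch; the real ones live in
pkg/util and may differ in detail):

    package util

    import (
        "fmt"

        "github.com/golang/glog"
    )

    // Assert rips down the process via glog if cond is false, with a
    // stock message.
    func Assert(cond bool) {
        if !cond {
            glog.Fatal("An assertion has failed")
        }
    }

    // AssertM is like Assert, but carries a message to aid debugging.
    func AssertM(cond bool, msg string) {
        if !cond {
            glog.Fatalf("An assertion has failed: %v", msg)
        }
    }

    // AssertMF is the same, except it formats the message with arguments.
    func AssertMF(cond bool, msg string, args ...interface{}) {
        if !cond {
            glog.Fatalf("An assertion has failed: %v", fmt.Sprintf(msg, args...))
        }
    }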
2016-11-19 10:17:44 -08:00