Commit graph

23 commits

Author SHA1 Message Date
joeduffy 2daea4c3d8 Clarify aspects of using the DCO 2017-06-26 14:46:34 -07:00
joeduffy 3c1041af49 Update license headers 2017-06-23 14:53:41 -07:00
joeduffy 8b57310854 Tidy up more lint
This change fixes a few things:

* Most importantly, we need to place a leading "." in the paths
  to Gometalinter, otherwise some sub-linters just silently skip
  the directory altogether.  errcheck is one such linter, which
  is a very important one!

* Use an explicit Gometalinter.json file to configure the various
  settings.  This flips on a few additional linters that aren't
  on by default (like line length checking).  Sadly, a few that
  I'd like to enable take waaaay too much time, so in the future
  we may consider a nightly job (this includes code similarity,
  unused parameters, unused functions, and others that generally
  require global analysis).

* Now that we're running more, however, linting takes a while!
  The core Lumi project now takes 26 seconds to lint on my laptop.
  That's not terrible, but it's long enough that we don't want to
  do the silly "run them twice" thing our Makefiles were previously
  doing.  Instead, we shall deploy some $$(($${PIPESTATUS[1]}-1))-fu
  to rely on the fact that grep returns 1 on "zero lines" (see the
  sketch after this list).

* Finally, fix the many issues that this turned up.
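
In sketch form, the trick looks like this (an illustrative Makefile
fragment, not our exact lint target; PIPESTATUS is a bashism, and
recipe lines need real tabs):

    SHELL = /bin/bash
    lint:
        gometalinter ./... | grep . ; exit $$(($${PIPESTATUS[1]}-1))
    # grep . exits 0 when gometalinter printed issues, so $$((0-1))
    # fails the target; it exits 1 when there was no output, so
    # $$((1-1)) turns a clean run into success.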

I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
2017-06-22 12:09:46 -07:00
joeduffy 5f9ed13069 Simplify Check; make it tolerant of computed values
This change simplifies the generated Check interface for providers.
Instead of

    Check(ctx context.Context, obj *T) ([]error, error)

where T is the resource type, we have

    Check(ctx context.Context, obj *T, property string) error

This is done so that we can drive the calls to Check one property
at a time, allowing us to skip any that are computed.  (Otherwise,
we may fail the verification erroneously.)

This has the added advantage that the Check implementations are
simpler and can simply return a single error.  Furthermore, the
generated RPC code handles wrapping the result, so we can just do

    return errors.New("bad");

rather than the previous reflection-laden junk

    return resource.NewFieldError(
        reflect.TypeOf(obj), awsservice.AWSResource_Property,
        errors.New("bad"))
2017-06-16 13:34:11 -07:00
joeduffy 9698309f2b Model resource ID and URN as output properties
This change exposes ID and URN properties on resources, as appropriate,
so that they may be read and used in Lumi scripts.
2017-06-14 17:00:13 -07:00
joeduffy db99092334 Implement mapper.Encode "for real"
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment).  It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
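
In miniature, that trickery looks like the following (a simplified
sketch; the real implementation also handles nesting, optional fields,
and error reporting):

    // encode assumes s is a pointer to a struct with `lumi` field tags.
    func encode(s interface{}) map[string]interface{} {
        out := make(map[string]interface{})
        v := reflect.ValueOf(s).Elem()
        for i := 0; i < v.NumField(); i++ {
            tag := v.Type().Field(i).Tag.Get("lumi")
            if tag == "" {
                continue // untagged fields aren't marshaled
            }
            key := strings.Split(tag, ",")[0] // drop flags like ",optional"
            out[key] = v.Field(i).Interface()
        }
        return out
    }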

During this, I took the opportunity to also clean up a few other things
that had been bugging me.  Namely, the presence of `mapper.Object` was
always error-prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI.  Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type.  Even better, we
no longer require resource providers to deal with the mapper
infrastructure.  Instead, the `Check` function can simply return an
array of errors.  It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.

As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.

This completes pulumi/lumi#183.
2017-06-05 17:49:00 -07:00
joeduffy e2cb211d93 Enable parallel tests
This change enables parallelism for our tests.
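
Opting a test in is a one-liner (a generic example, not a specific test
from this change):

    func TestPlan(t *testing.T) {
        t.Parallel() // run alongside the package's other parallel tests
        // ...
    }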

It also introduces a `test_core` Makefile target to just run the
core engine tests, and not the providers, since they take a long time.
This is intended only as part of the inner developer loop.
2017-06-01 14:01:26 -07:00
joeduffy 08ca40c6c6 Unmap properties on the receive side
This change makes progress on a few things with respect to properly
receiving properties on the engine side, coming from the provider side,
of the RPC boundary.  The issues here are twofold:

    1. Properties need to get unmapped using a JSON-tag-sensitive
       marshaler, so that they are cased properly, etc.  For that, we
       have a new mapper.Unmap function (which is ultra lame -- see
       pulumi/lumi#138, and the sketch after this list).

    2. We have the reverse problem with respect to resource IDs: on
       the send side, we must translate from URNs (which the engine
       knows about) and provider IDs (which the provider knows about);
       similarly, then, on the receive side, we must translate from
       provider IDs back into URNs.
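
The round-trip in (1) is roughly equivalent to the following (a sketch
of the idea, not the actual mapper.Unmap code):

    func unmap(props map[string]interface{}, target interface{}) error {
        b, err := json.Marshal(props)
        if err != nil {
            return err
        }
        // Unmarshaling into the tagged struct matches keys against the
        // struct's tags, restoring the proper casing along the way.
        return json.Unmarshal(b, target)
    }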

As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
2017-06-01 08:39:48 -07:00
joeduffy 0a72d5360a Modify provider creates; use get for outs
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so.  As a result, this change takes an initial
whack at implementing Get on all existing AWS resources.  The Get API needs
to return a fully populated structure containing all inputs and outputs.
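
In sketch form, the surface area shifts as follows (illustrative
signatures with a hypothetical Instance resource, not the generated RPC
code verbatim):

    type Provider interface {
        // Create allocates the resource and now returns only its ID.
        Create(ctx context.Context, obj *Instance) (resource.ID, error)
        // Get returns a fully populated object: inputs, defaults, outputs.
        Get(ctx context.Context, id resource.ID) (*Instance, error)
    }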

Believe it or not, this is actually part of pulumi/lumi#90.

This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.

This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties.  For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
2017-06-01 08:36:43 -07:00
joeduffy 4108c51549 Reclassify Lumi under the Apache 2.0 license
This is part of pulumi/lumi#147.
2017-05-18 14:51:52 -07:00
joeduffy dafeb77dff Rename Coconut to Lumi
This is part of pulumi/coconut#147.

After it has landed, I will rename the repo on GitHub.
2017-05-18 11:38:28 -07:00
joeduffy f429bc6a0c Use github.com/pkg/errors for errors
This change moves us over to the github.com/pkg/errors package to
encourage the addition of more context associated with failures.
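
Typical usage (a generic example rather than a call site from this
change):

    data, err := ioutil.ReadFile(path)
    if err != nil {
        // errors.Wrapf records the cause plus context (and a stack trace).
        return errors.Wrapf(err, "reading %v", path)
    }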
2017-04-19 14:46:50 -07:00
joeduffy 9c1ea1f161 Fix some poor hygiene
A few linty things crept in; this addresses them.
2017-04-08 07:44:02 -07:00
joeduffy 95f59273c8 Update copyright notices from 2016 to 2017 2017-03-14 19:26:14 -07:00
joeduffy 80d19d4f0b Use the object mapper to reduce provider boilerplate
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one.  As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
2017-03-12 14:13:44 -07:00
joeduffy 1bdd24395c Recognize TextUnmarshaler and use it
This change recognizes TextUnmarshaler during object mapping, and
will defer to it when we have a string but are assigning to a
non-string target that implements the interface.
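
Roughly (a sketch of the deferral, not the mapper's exact code):

    // target is an addressable reflect.Value of a non-string type and
    // s is the source string; defer to the type's own parser if any.
    if tu, ok := target.Addr().Interface().(encoding.TextUnmarshaler); ok {
        return tu.UnmarshalText([]byte(s))
    }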
2017-02-26 11:51:38 -08:00
joeduffy fbb56ab5df Coconut! 2017-02-25 07:25:33 -08:00
joeduffy 53cf9f8b60 Tidy up a few things
* Print a pretty message if the plan has nothing to do:

        "info: nothing to do -- resources are up to date"

* Add an extra validation step after reading in a snapshot,
  so that we detect more errors sooner.  For example, I've
  fed in the wrong file several times, and it just chugs
  along as though it were actually a snapshot.

* Skip printing nulls in most plan outputs.  These just
  clutter up the output.
2017-02-24 16:44:46 -08:00
joeduffy c8044b66ce Fix up a bunch of golint errors 2017-01-27 15:42:39 -08:00
joeduffy 289a0d405d Revive some compiler tests
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.

It also fixes up some bugs uncovered during this:

* Don't claim that a symbol's kind is incorrect in the binder error
  message when it wasn't found.  Instead, say that it was missing.

* Do not attempt to compile if an error was issued during workspace
  resolution and/or loading of the Mufile.  This leads to trying to
  load an empty path, and badness quickly ensues (a crash).

* Issue an error if the Mufile wasn't found (this got lost apparently).

* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
  since missing names are now caught by our new fancy decoder that
  understands required versus optional fields.  We still need to guard
  against illegal characters in the name, including the empty string "".

* During decoding, reject !src.IsValid elements.  This represents the
  zero value and should be treated equivalently to a missing field.

* Do not permit empty strings "" as Names or QNames.  The old logic
  accidentally permitted them because regexp.FindString("") == "", no
  matter the regex (see the snippet after this list)!

* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
  allowing us to share this common code across multiple package tests.

* Fix up a few messages that needed tidying or to use Infof vs. Info.
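
The regexp gotcha above, in miniature (FindString returns "" both for
"no match at all" and for a successful empty match):

    re := regexp.MustCompile(`^[A-Za-z]+$`)
    fmt.Println(re.FindString("") == "") // true: "" looked like a valid Name
    fmt.Println(re.MatchString(""))      // false: the unambiguous check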

The binder tests -- deleted in this -- are about to come back, however,
I am splitting up the changes, since this represents a passing fixed point.
2017-01-26 15:30:08 -08:00
joeduffy 25632886c8 Begin overhauling semantic phases
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler.  A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.

The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:

    * Split up the notion of symbols and tokens, resulting in:

        - pkg/symbols for true compiler symbols (bound nodes)
        - pkg/tokens for name-based tokens, identifiers, constants

    * Several packages move underneath pkg/compiler:

        - pkg/ast becomes pkg/compiler/ast
        - pkg/errors becomes pkg/compiler/errors
        - pkg/symbols becomes pkg/compiler/symbols

    * pkg/ast/... becomes pkg/compiler/legacy/ast/...

    * pkg/pack/ast becomes pkg/compiler/ast.

    * pkg/options goes away, merged back into pkg/compiler.

    * All binding functionality moves underneath a dedicated
      package, pkg/compiler/binder.  The legacy.go file contains
      cruft that will eventually go away, while the other files
      represent a halfway point between new and old, but are
      expected to stay roughly in the current shape.

    * All parsing functionality is moved underneath a new
      pkg/compiler/metadata namespace, and we adopt new terminology
      "metadata reading" since real parsing happens in the MetaMu
      compilers.  Hence, Parser has become metadata.Reader.

    * In general phases of the compiler no longer share access to
      the actual compiler.Compiler object.  Instead, shared state is
      moved to the core.Context object underneath pkg/compiler/core.

    * Dependency resolution during binding has been rewritten to
      the new model, including stashing bound package symbols in the
      context object, and detecting import cycles.

    * Compiler construction does not take a workspace object.  Instead,
      creation of a workspace is entirely hidden inside of the compiler's
      constructor logic.

    * There are three Compile* functions on the Compiler interface, to
      support different styles of invoking compilation: Compile() auto-
      detects a Mu package, based on the workspace; CompilePath(string)
      loads the target as a Mu package and compiles it, regardless of
      the workspace settings; and, CompilePackage(*pack.Package) will
      compile a pre-loaded package AST, again regardless of workspace
      (see the sketch after this list).

    * Delete the _fe, _sema, and parsetree phases.  They are no longer
      relevant and the functionality is largely subsumed by the above.
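
Those three entry points, in sketch form (return types are illustrative
only; think "a MuGL graph plus diagnostics"):

    type Compiler interface {
        Compile() (graph.Graph, error)                         // auto-detect via workspace
        CompilePath(path string) (graph.Graph, error)          // compile an explicit path
        CompilePackage(pkg *pack.Package) (graph.Graph, error) // compile a pre-loaded AST
    }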

...and so very much more.  I'm surprised I ever got this to compile again!
2017-01-18 12:18:37 -08:00
joeduffy 01658d04bb Begin merging MuPackage/MuIL into the compiler
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".

In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output.  Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.

An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.

The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object.  This is more natural anyway and moves some of the detection
logic "outside" of the Compiler.  Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.

Finally, all that templating crap is gone.  This alone is cause
for mass celebration!
2017-01-17 17:04:15 -08:00
joeduffy 1948380eb2 Add custom decoders to eliminate boilerplate
This change overhauls the approach to custom decoding.  Instead of decoding
the parts of the struct that are "trivial" in one pass, and then patching up
the structure afterwards with custom decoding, the decoder itself understands
the notion of custom decoder functions.

First, the general purpose logic has moved out of pkg/pack/encoding and into
a new package, pkg/util/mapper.  Most functions are now members of a new top-
level type, Mapper, which may be initialized with custom decoders.  This
is a map from target type to a function that can decode objects into it.
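
In miniature, the shape is a table from target type to decode function,
consulted whenever an interface-typed field is hit (names approximate):

    type Decoder func(m Mapper, obj map[string]interface{}) (interface{}, error)

    type Mapper struct {
        decoders map[reflect.Type]Decoder // target interface type -> decoder
    }

    func (m Mapper) decodeInterface(t reflect.Type, obj map[string]interface{}) (interface{}, error) {
        if d, has := m.decoders[t]; has {
            return d(m, obj) // typically switches on the metadata "kind" field
        }
        return nil, fmt.Errorf("no custom decoder for interface type %v", t)
    }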

Second, the AST-specific decoding logic is rewritten to use it.  All AST nodes
are now supported, including definitions, statements, and expressions.  The
overall approach here is to simply define a custom decoder for any interface
type that will occur in a node field position.  The mapper, upon encountering
such a type, will consult the custom decoder map; if a decoder is found, it
will be used, otherwise an error results.  This decoder then needs to switch
on the type-discriminated kind field that is present in the metadata, creating
a concrete struct of the right type, and then converting it to the desired
interface type.  Note that, subtly, interface types used only for "marker"
purposes don't require any custom decoding, because they do not appear in
field positions and therefore won't be encountered during the decoding process.
2017-01-16 09:41:26 -08:00