This change fixes a few things:
* Most importantly, we need to place a leading "." in the paths
to Gometalinter; otherwise, some sub-linters just silently skip
the directory altogether. errcheck is one such linter, which
is a very important one!
* Use an explicit Gometalinter.json file to configure the various
settings. This flips on a few additional linters that aren't
on by default (like line length checking). Sadly, a few that
I'd like to enable take waaaay too much time, so in the future
we may consider a nightly job (this includes code similarity,
unused parameters, unused functions, and others that generally
require global analysis).
* Now that we're running more linters, however, linting takes a while!
The core Lumi project now takes 26 seconds to lint on my laptop.
That's not terrible, but it's long enough that we don't want to
do the silly "run them twice" thing our Makefiles were previously
doing. Instead, we shall deploy some $$(($${PIPESTATUS[1]}-1))-fu
to rely on the fact that grep returns 1 when it matches zero lines.
* Finally, fix the many issues that this turned up.
I think(?) we are done, except, of course, for needing to drive
down some of the cyclomatic complexity issues (which I'm possibly
going to punt on; see pulumi/lumi#259 for more details).
After 233c5a8 landed, I noticed there are a few things to be fixed up:
* Run gometalinter in all the right places. We need to run both in
lint and lint_quiet targets. I've also cleaned up some of the logic
around what to suppress so there's less repetition.
* We currently @-prefix meaningful commands, which is unfortunate, since it
makes debugging Makefiles tough (especially when looking at CI build
logs). Going forward, we should only use @ for meaningless commands,
like @echo.
* The AWS project wasn't actually running tslint, because it needs to
say `tslint './pack/**/*.ts' --exclude='./pack/node_modules/**'`.
The current script of `tslint lib/aws/pack/...` wasn't actually
running lint, hence we missed a lot of AWS lint issues.
* Fix up the issues that these fixes uncovered. Mostly err shadowing.
This continues the previous commit and establishes the interpreter
context so that we can use the new host interface. In summary:
* Instead of using the NullSource for destructions -- which
doesn't hook up an interpreter and so any reads of configuration
variables will fail -- we will enlighten the EvalSource to know
how to orchestrate destruction interpretation. The primary
difference is that we don't actually run the code, but *we do*
perform all of the necessary configuration and variable init.
* Associate the active interpreter with the plugin context as
we are executing, so that the host object can actually read the
state from the heap when requested to do so by attached plugins.
* Rename anything "engine" related to use the term "host"; this
avoids unnecessarily introducing new terminology.
* Add a new pkg/resource/provider/ package where we can begin
consolidating helper functionality for resource providers.
Right now, this includes a wrapper interface atop the gRPC
machinery necessary to contact the host, in addition to a
Main function that hides some boilerplate entrypoint code.
* Add a rpcutil.IsBenignCloseErr routine to let us ignore
"benign" gRPC errors that are knowingly returned at shutdown.
This commit completes pulumi/lumi#117.
This change adds an engine gRPC interface, and associated implementation,
so that plugins may do interesting things that require "phoning home".
Previously, the engine would fire up plugins and talk to them directly,
but there was no way for a plugin to ask the engine to do anything.
The motivation here is so that plugins can read evaluator state, such
as config information, but this change also allows richer logging
functionality than previously possible. We will still auto-log any
stdout/stderr writes; however, explicit error, warning, informational,
and even debug messages may be written over the Log API.
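For example, a plugin-side call might look roughly like this (the lumirpc
request and severity names here are assumptions for illustration, not the
actual generated API):
    // Hypothetical sketch: a plugin phoning home over the Log API.
    _, err := engine.Log(ctx, &lumirpc.LogRequest{
        Severity: lumirpc.LogSeverity_WARNING, // hypothetical severity enum
        Message:  "creation is taking longer than expected",
    })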
This change simplifies the generated Check interface for providers.
Instead of
Check(ctx context.Context, obj *T) ([]error, error)
where T is the resource type, we have
Check(ctx context.Context, obj *T, property string) error
This is done so that we can drive the calls to Check one property
at a time, allowing us to skip any that are computed. (Otherwise,
we may fail the verification erroneously.)
This has the added advantage that the Check implementations are
simpler and can simply return a single error. Furthermore, the
generated RPC code handles wrapping the result, so we can just do
return errors.New("bad");
rather than the previous reflection-laden junk
return resource.NewFieldError(
reflect.TypeOf(obj), awsservice.AWSResource_Property,
errors.New("bad"))
On the first turn, we want to distinguish between a running coroutine
that owns its turn, and a coroutine that knows it doesn't
own the turn and is simply awaiting its turn. The old Meet logic
wasn't quite right; instead, we'll have the caller tell us this.
* Assert new things in new places.
* Log more interesting tidbits during evaluation.
* Invoke the OnStart hook before triggering initializers.
* Tolerate nil prev snapshots during deletion calculation.
* Handle and serialize missing resource IDs as output props.
* Return "done" flag from Rendezvous.Meet.
This change restructures a lot more pertaining to deployments, snapshots,
environments, and the like.
The most notable change is that the notion of a deploy.Source is introduced,
which splits the responsibility between the deploy.Plan -- which simply
understands how to compute and carry out deployment plans -- and the idea
of something that can produce new objects on-demand during deployment.
The primary such implementation is evalSource, which encapsulates an
interpreter and takes a package, args, and config map, and proceeds to run
the interpreter in a distinct goroutine. It synchronizes as needed to
poke and prod the interpreter along its path to create new resource objects.
There are two other sources, however. First, a nullSource, which simply
refuses to create new objects. This can be handy when writing isolated
tests but is also used to simulate the "empty" environment as necessary to
do a complete teardown of the target environment. Second, a fixedSource,
which takes a pre-computed array of objects, and hands those, in order, to
the planning engine; this is mostly useful as a testing technique.
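To make the shape concrete, here is a hedged sketch of what a source
boils down to (method names and signatures are assumptions, not the
actual interface):
    // Hypothetical sketch: a source produces resource objects on demand.
    type Source interface {
        // Next returns the next object to deploy, or nil when exhausted.
        Next() (*resource.Object, error)
        // Close tears down any associated interpreter or goroutine state.
        Close() error
    }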
Boatloads of code have been changed and updated in the various CLI commands.
This further chugs along towards pulumi/lumi#90. The end is in sight.
This change guts the deployment planning and execution process, a
necessary component of pulumi/lumi#90.
The major effect of this change is that resources are actually
connected to the live objects, instead of being snapshots taken at
inopportune moments in time.
This change implements `mapper.Encode` "for real" (that is, in a way
that isn't a complete embarrassment). It uses the obvious reflection
trickery to encode a tagged struct and its values as a JSON-like
in-memory map and collection of keyed values.
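In rough terms, usage looks like this (the exact signature is an assumption):
    // Hypothetical sketch: encoding a tagged struct into a JSON-like map.
    type Instance struct {
        Name string `lumi:"name"`
        Size int    `lumi:"size,optional"`
    }
    m, err := mapper.Encode(&Instance{Name: "web", Size: 3})
    // on success, m is map[string]interface{}{"name": "web", "size": 3}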
During this, I took the opportunity to also clean up a few other things
that had been bugging me. Namely, the presence of `mapper.Object` was
always error-prone, since it isn't a true "typedef" in the sense that
it carries extra RTTI. Instead of doing that, let's just use the real
`map[string]interface{}` "JSON-map-like" object type. Even better, we
no longer require resource providers to deal with the mapper
infrastructure. Instead, the `Check` function can simply return an
array of errors. It's still best practice to return field-specific errors
to facilitate better diagnostics, but it's no longer required; and I've
added `resource.NewFieldError` to eliminate the need to import mapper.
As of this change, we can also consistently emit RPC structs with `lumi`
tags, rather than `lumi` tags on the way in and `json` on the way out.
This completes pulumi/lumi#183.
There are a few things that annoyed me about the way our CLI works with
directories when loading packages. For example, `lumi pack info some/pack/dir/`
never worked correctly. This is unfortunate when scripting commands.
This change fixes the workspace detection logic to handle these cases.
This change enables parallelism for our tests.
It also introduces a `test_core` Makefile target to just run the
core engine tests, and not the providers, since they take a long time.
This is intended only as part of the inner developer loop.
This change makes progress on a few things with respect to properly
receiving properties on the engine side, coming from the provider side,
of the RPC boundary. The issues here are twofold:
1. Properties need to get unmapped using a JSON-tag-sensitive
marshaler, so that they are cased properly, etc. For that, we
have a new mapper.Unmap function (which is ultra lame -- see
pulumi/lumi#138).
2. We have the reverse problem with respect to resource IDs: on
the send side, we must translate from URNs (which the engine
knows about) to provider IDs (which the provider knows about);
similarly, then, on the receive side, we must translate from
provider IDs back into URNs.
As a result of these getting fixed, we can now properly marshal the
resulting properties back into the resource object during the plan
execution, alongside propagating and memoizing its ID.
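As a hedged sketch of the first fix (the Unmap signature is an assumption):
    // Hypothetical sketch: mapper.Unmap turns a typed struct into an
    // untyped map whose keys come from the JSON tags, preserving casing.
    type VM struct {
        InstanceID string `json:"instanceId"`
    }
    m, err := mapper.Unmap(&VM{InstanceID: "i-123"})
    // on success, m["instanceId"] == "i-123" -- cased per the tag.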
The change to flow logging to plugins is nice; however, it can be
annoying, because all writes to stderr are interpreted on the Lumi
side as errors. After this change, we will only flow if
--logflow is passed, e.g. as in
$ lumi --logtostderr --logflow -v=9 deploy ...
This change modifies the existing resource provider RPC interface slightly.
Instead of the Create API returning the bag of output properties, we will
rely on the Get API to do so. As a result, this change takes an initial
whack at implementing Get on all existing AWS resources. The Get API needs
to return a fully populated structure containing all inputs and outputs.
Believe it or not, this is actually part of pulumi/lumi#90.
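Sketched in the style of the generated interfaces (the exact signatures
are assumptions), the division of labor is now roughly:
    Create(ctx context.Context, obj *T) (resource.ID, error) // allocate; no output bag
    Get(ctx context.Context, id resource.ID) (*T, error)     // full state: inputs, defaults, outputs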
This was done because just returning output properties is insufficient.
Any input properties that weren't supplied may have default values, for
example, and it is wholly reasonable to expect Lumi scripts to depend on
those values in addition to output values.
This isn't fully functional in its current form, because doing this
change turned up many other related changes required to enable output
properties. For instance, at the moment resource properties are defined
in terms of `resource.URN`s, and yet unfortunately the provider side
knows nothing of URNs (instead preferring to deal in `resource.ID`s).
I am going to handle that in a subsequent isolated change, since it will
have far-reaching implications beyond just modifying create and get.
This change flows --logtostderr and -v=x settings to any dynamically
loaded plugins so that running Lumi's command line with these flags
will also result in the plugins logging at the requested levels. I've
found this handy for debugging purposes.
This changes the object mapper infrastructure to offer more fine-grained
reporting of errors, and control over verification, during the mapping from
an untyped payload to a typed one. As a result, we can eliminate a bit of
the explicit unmarshaling goo in the AWS providers (but not all of it; I'm
sure there is more we can, and should, be doing here...)
This change recognizes TextUnmarshaler during object mapping, and
will defer to it when we have a string but are assigning to a
non-string target that implements the interface.
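For instance, given an illustrative type like this (not one from the
codebase), a string payload will now be routed through UnmarshalText:
    // Region is a non-string target implementing encoding.TextUnmarshaler.
    type Region struct{ name string }

    func (r *Region) UnmarshalText(text []byte) error {
        r.name = strings.ToLower(string(text))
        return nil
    }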
* Print a pretty message if the plan has nothing to do:
"info: nothing to do -- resources are up to date"
* Add an extra validation step after reading in a snapshot,
so that we detect more errors sooner. For example, I've
fed in the wrong file several times, and it just chugs
along as though it were actually a snapshot.
* Skip printing nulls in most plan outputs. These just
clutter up the output.
This change introduces a simple retry package which will be used in
resource providers in order to perform waits, backoffs, etc. It's
pretty basic right now but leverages the fancy new Golang context
support for deadlines and timeouts. Soon we will layer AWS-specific
acceptors, etc. atop this more general purpose thing.
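Usage might look roughly like this (the function and callback shape are
assumptions about where the package is headed):
    // Hypothetical sketch: retry until done, bounded by a context deadline.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()
    err := retry.Until(ctx, func() (bool, error) {
        return checkResourceReady() // hypothetical probe: (done, error)
    })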
This change introduces a new informational message category to the
overall diagnostics infrastructure, and then wires up the resource
provider plugins' stdout/stderr streams to it. In particular, a
write to stdout implies an informational message, whereas a write to
stderr implies an error. This is just a very simple and convenient
way for plugins to provide progress reporting; eventually we may
need something more complex, due to parallel evaluation of resource
graphs; however, I hope we don't have to deviate too much from this.
This change redoes the way module exports are represented. The old
mechanism -- although laudable for its attempt at consistency -- was
wrong. For example, consider this case:
let v = 42;
export { v };
The old code would silently add *two* members, both with the name "v",
one of which would be dropped since the entries in the map collided.
It would be easy enough just to detect collisions, and update the
above to mark "v" as public, when the export was encountered. That
doesn't work either, as the following example demonstrates:
let v = 42;
export { v as w };
let x = w; // error!
This demonstrates:
* Exporting "v" with a different name, "w" to consumers of the
module. In particular, it should not be possible for module
consumers to access the member through the name "v".
* An inability to access the exported name "w" from within the
module itself. This is solely for external consumption.
Because of this, we will use an export table approach. The exports
live alongside the members, and we are smart about when to consult
the export table, versus the member table, during name binding.
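A hedged sketch of the binding rule (the types and fields are illustrative):
    // External consumers resolve names against the export table only;
    // code within the module consults the member table instead.
    func (m *Module) Lookup(name string, external bool) Symbol {
        if external {
            return m.exports[name] // "w" resolves here; "v" does not
        }
        return m.members[name] // "v" resolves here; "w" does not
    }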
When --logtostderr is enabled, glog doesn't actually print out the
stack traces for all goroutines the same way it normally does. This
makes debugging more complex in some cases, so we'll do it manually.
This change revives some compiler tests that are still lingering around
from the old architecture, before our latest round of ship burning.
It also fixes up some bugs uncovered during this:
* Don't claim that a symbol's kind is incorrect in the binder error
message when it wasn't found. Instead, say that it was missing.
* Do not attempt to compile if an error was issued during workspace
resolution and/or loading of the Mufile. This leads to trying to
load an empty path, and badness quickly ensues (a crash).
* Issue an error if the Mufile wasn't found (this got lost apparently).
* Rename the ErrorMissingPackageName message to ErrorInvalidPackageName,
since missing names are now caught by our new fancy decoder that
understands required versus optional fields. We still need to guard
against illegal characters in the name, including the empty string "".
* During decoding, reject !src.IsValid elements. This represents the
zero value and should be treated equivalently to a missing field.
* Do not permit empty strings "" as Names or QNames. The old logic
accidentally permitted them because regexp.FindString("") == "", no
matter the regex (see the snippet after this list)!
* Move the TestDiagSink abstraction to a new pkg/util/testutil package,
allowing us to share this common code across multiple package tests.
* Fix up a few messages that needed tidying or to use Infof vs. Info.
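To see why the old validation was fooled, note that Go's regexp
FindString reports "no match" and "matched the empty string" identically:
    // FindString returns "" both when nothing matches and when "" matches,
    // so a check like FindString(name) == name accidentally accepts "".
    re := regexp.MustCompile(`^[A-Za-z][A-Za-z0-9]*$`)
    fmt.Println(re.FindString("") == "") // true, yet "" is not a valid Name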
The binder tests -- deleted in this change -- are about to come back;
however, I am splitting up the changes, since this represents a passing
fixed point.
This change further merges the new AST and MuPack/MuIL formats and
abstractions into the core of the compiler. A good amount of the old
code is gone now; I decided against ripping it all out in one fell
swoop so that I can methodically check that we are preserving all
relevant decisions and/or functionality we had in the old model.
The changes are too numerous to outline in this commit message,
however, here are the noteworthy ones:
* Split up the notion of symbols and tokens, resulting in:
- pkg/symbols for true compiler symbols (bound nodes)
- pkg/tokens for name-based tokens, identifiers, constants
* Several packages move underneath pkg/compiler:
- pkg/ast becomes pkg/compiler/ast
- pkg/errors becomes pkg/compiler/errors
- pkg/symbols becomes pkg/compiler/symbols
* pkg/ast/... becomes pkg/compiler/legacy/ast/...
* pkg/pack/ast becomes pkg/compiler/ast.
* pkg/options goes away, merged back into pkg/compiler.
* All binding functionality moves underneath a dedicated
package, pkg/compiler/binder. The legacy.go file contains
cruft that will eventually go away, while the other files
represent a halfway point between new and old, but are
expected to stay roughly in the current shape.
* All parsing functionality is moved underneath a new
pkg/compiler/metadata namespace, and we adopt new terminology
"metadata reading" since real parsing happens in the MetaMu
compilers. Hence, Parser has become metadata.Reader.
* In general, phases of the compiler no longer share access to
the actual compiler.Compiler object. Instead, shared state is
moved to the core.Context object underneath pkg/compiler/core.
* Dependency resolution during binding has been rewritten to
the new model, including stashing bound package symbols in the
context object, and detecting import cycles.
* Compiler construction does not take a workspace object. Instead,
creation of a workspace is entirely hidden inside of the compiler's
constructor logic.
* There are three Compile* functions on the Compiler interface, to
support different styles of invoking compilation (see the sketch
below): Compile() auto-detects a Mu package, based on the workspace;
CompilePath(string) loads the target as a Mu package and compiles it,
regardless of the workspace settings; and CompilePackage(*pack.Package)
will compile a pre-loaded package AST, again regardless of workspace.
* Delete the _fe, _sema, and parsetree phases. They are no longer
relevant and the functionality is largely subsumed by the above.
...and so very much more. I'm surprised I ever got this to compile again!
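The three entrypoints, sketched as an interface (the return types are
assumptions; per the above, compilation ultimately yields a MuGL graph):
    // Illustrative shape only; the real interface may differ.
    type Compiler interface {
        Compile() (graph.Graph, error)                         // auto-detect via workspace
        CompilePath(path string) (graph.Graph, error)          // load package from a path
        CompilePackage(pkg *pack.Package) (graph.Graph, error) // compile a pre-loaded AST
    }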
This is the first change of many to merge the MuPack/MuIL formats
into the heart of the "compiler".
In fact, the entire meaning of the compiler has changed, from
something that took metadata and produced CloudFormation, into
something that takes MuPack/MuIL as input, and produces a MuGL
graph as output. Although this process is distinctly different,
there are several aspects we can reuse, like workspace management,
dependency resolution, and some amount of name binding and symbol
resolution, just as a few examples.
An overview of the compilation process is available as a comment
inside of the compiler.Compile function, although it is currently
unimplemented.
The relationship between Workspace and Compiler has been semi-
inverted, such that all Compiler instances require a Workspace
object. This is more natural anyway and moves some of the detection
logic "outside" of the Compiler. Similarly, Options has moved to
a top-level package, so that Workspace and Compiler may share
access to it without causing package import cycles.
Finally, all that templating crap is gone. This alone is cause
for mass celebration!
This change overhauls the approach to custom decoding. Instead of decoding
the parts of the struct that are "trivial" in one pass, and then patching up
the structure afterwards with custom decoding, the decoder itself understands
the notion of custom decoder functions.
First, the general purpose logic has moved out of pkg/pack/encoding and into
a new package, pkg/util/mapper. Most functions are now members of a new top-
level type, Mapper, which may be initialized with custom decoders. This
is a map from target type to a function that can decode objects into it.
Second, the AST-specific decoding logic is rewritten to use it. All AST nodes
are now supported, including definitions, statements, and expressions. The
overall approach here is to simply define a custom decoder for any interface
type that will occur in a node field position. The mapper, upon encountering
such a type, will consult the custom decoder map; if a decoder is found, it
will be used, otherwise an error results. This decoder then needs to switch
on the type-discriminating kind field that is present in the metadata, creating
a concrete struct of the right type, and then converting it to the desired
interface type. Note that, subtly, interface types used only for "marker"
purposes don't require any custom decoding, because they do not appear in
field positions and therefore won't be encountered during the decoding process.
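A condensed sketch of the mechanism (names and field layouts are
illustrative, not the actual mapper API):
    // A custom decoder converts an untyped map into a given target type.
    type Decoder func(m map[string]interface{}) (interface{}, error)

    // The mapper consults this table upon encountering an interface type
    // in a field position; a missing entry produces an error.
    var custom = map[reflect.Type]Decoder{
        reflect.TypeOf((*ast.Expression)(nil)).Elem(): decodeExpression,
    }

    // decodeExpression switches on the discriminating kind field, creates
    // the concrete node, and returns it as the interface type.
    func decodeExpression(m map[string]interface{}) (interface{}, error) {
        switch m["kind"] {
        case "StringLiteral":
            v, _ := m["value"].(string)
            return &ast.StringLiteral{Value: v}, nil
        default:
            return nil, fmt.Errorf("unrecognized expression kind: %v", m["kind"])
        }
    }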
This change just moves the assertion/failure functions from the pkg/util
package to pkg/util/contract, so things read a bit nicer (i.e.,
`contract.Assert(x)` versus `util.Assert(x)`).
Well, it turns out glog.Fail is slightly better than panic, because it explicitly
dumps the stacks of *all* goroutines. This is especially good in logging scenarios.
It's really annoying that glog suppresses printing the stack trace (see
https://github.com/golang/glog/blob/master/glog.go#L719); however, this is better.
This change switches away from using glog.Fatalf, and instead uses panic,
should a fail-fast arise (due to a call to util.Fail or a failed assertion
by way of util.Assert). This leads to a better debugging experience no
matter what flags have been passed to glog. For example, glog.Fatal* seems
to suppress printing stack traces when --logtostderr is supplied.
This change mostly replaces explicit if/then/glog.Fatalf calls with
util.Assert calls. In addition, it adds a companion util.Fail family
of methods that does the same thing as a failed assertion, except that
it is unconditional.
This introduces three assertion functions:
* util.Assert: simply asserts a boolean condition, ripping the process using
glog should it fail, with a stock message.
* util.AssertM: asserts a boolean condition, also using glog if it fails,
but it comes with a message to help with debugging too.
* util.AssertMF: the same, except it formats the message with arguments.
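A hedged sketch of their shape (the real implementations live in pkg/util
and may differ in details):
    func Assert(cond bool) {
        if !cond {
            glog.Fatal("an assertion failure has occurred")
        }
    }

    func AssertM(cond bool, msg string) {
        if !cond {
            glog.Fatalf("an assertion failure has occurred: %v", msg)
        }
    }

    func AssertMF(cond bool, msg string, args ...interface{}) {
        AssertM(cond, fmt.Sprintf(msg, args...))
    }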