pulumi/pkg/resource/plugin/plugin.go

// Copyright 2016-2018, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package plugin
import (
	"bufio"
	"io"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"sync/atomic"
	"time"

	multierror "github.com/hashicorp/go-multierror"
	"github.com/pkg/errors"
	"golang.org/x/net/context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/connectivity"
	"google.golang.org/grpc/status"

	"github.com/pulumi/pulumi/pkg/diag"
	"github.com/pulumi/pulumi/pkg/util/cmdutil"
	"github.com/pulumi/pulumi/pkg/util/contract"
	"github.com/pulumi/pulumi/pkg/util/logging"
	"github.com/pulumi/pulumi/pkg/util/rpcutil"
)
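
// plugin is a handle to a loaded plugin: the child OS process that was launched, its standard I/O streams, and the
// gRPC client connection used to communicate with it.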
type plugin struct {
	stdoutDone <-chan bool
	stderrDone <-chan bool

	Bin    string
	Args   []string
	Conn   *grpc.ClientConn
	Proc   *os.Process
	Stdin  io.WriteCloser
	Stdout io.ReadCloser
	Stderr io.ReadCloser
}
// pluginRPCConnectionTimeout dictates how long we wait for the plugin's RPC to become available.
var pluginRPCConnectionTimeout = time.Second * 10
// A unique ID provided to the output stream of each plugin. This allows the output of the plugin
// to be streamed to the display, while still allowing that output to be sent a small piece at a
// time.
var nextStreamID int32
func newPlugin(ctx *Context, bin string, prefix string, args []string) (*plugin, error) {
	if logging.V(9) {
		var argstr string
		for i, arg := range args {
			if i > 0 {
				argstr += ","
			}
			argstr += arg
		}
		logging.V(9).Infof("Launching plugin '%v' from '%v' with args: %v", prefix, bin, argstr)
	}

	// Try to execute the binary.
	plug, err := execPlugin(bin, args, ctx.Pwd)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to load plugin %s", bin)
	}
	contract.Assert(plug != nil)

	// If we did not successfully launch the plugin, we still need to wait for stderr and stdout to drain.
	defer func() {
		if plug.Conn == nil {
			contract.IgnoreError(plug.Close())
		}
	}()
	outStreamID := atomic.AddInt32(&nextStreamID, 1)
	errStreamID := atomic.AddInt32(&nextStreamID, 1)

	// For now, we will spawn goroutines that will spew STDOUT/STDERR to the relevant diag streams.
	runtrace := func(t io.Reader, stderr bool, done chan<- bool) {
		reader := bufio.NewReader(t)

		for {
			msg, readerr := reader.ReadString('\n')
			if readerr != nil {
				break
			}

			if strings.TrimSpace(msg) != "" {
				if stderr {
					ctx.Diag.Infoerrf(diag.StreamMessage("" /*urn*/, msg, errStreamID))
				} else {
					ctx.Diag.Infof(diag.StreamMessage("" /*urn*/, msg, outStreamID))
				}
			}
		}

		close(done)
	}

	// Set up a tracer on stderr before going any further, since important errors might get communicated this way.
	stderrDone := make(chan bool)
	plug.stderrDone = stderrDone
	go runtrace(plug.Stderr, true, stderrDone)
	// Now that we have a process, we expect it to write a single line to STDOUT: the port it's listening on. We only
	// read a byte at a time so that STDOUT contains everything after the first newline.
	var port string
	b := make([]byte, 1)
	for {
		n, readerr := plug.Stdout.Read(b)
		if readerr != nil {
			killerr := plug.Proc.Kill()
			contract.IgnoreError(killerr) // we are ignoring because the readerr trumps it.
			if port == "" {
				return nil, errors.Wrapf(readerr, "could not read plugin [%v] stdout", bin)
			}
			return nil, errors.Wrapf(readerr, "failure reading plugin [%v] stdout (read '%v')", bin, port)
		}
		if n > 0 && b[0] == '\n' {
			break
		}
		port += string(b[:n])
	}
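
	// For reference (illustrative sketch only, not part of this package): a plugin's half of this handshake is
	// typically just printing the port it bound, terminated by a newline, as the very first thing on stdout, e.g.:
	//
	//     lis, _ := net.Listen("tcp", "127.0.0.1:0")
	//     fmt.Printf("%d\n", lis.Addr().(*net.TCPAddr).Port)
	//     // ... then serve gRPC on lis; anything written to stdout afterwards is simply streamed as diagnostics ...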

	// Parse the output line (minus the '\n') to ensure it's a numeric port.
	if _, err = strconv.Atoi(port); err != nil {
		killerr := plug.Proc.Kill()
		contract.IgnoreError(killerr) // ignoring the error because the existing one trumps it.
		return nil, errors.Wrapf(
			err, "%v plugin [%v] wrote a non-numeric port to stdout ('%v')", prefix, bin, port)
	}

	// After reading the port number, set up a tracer on stdout just so other output doesn't disappear.
	stdoutDone := make(chan bool)
	plug.stdoutDone = stdoutDone
	go runtrace(plug.Stdout, false, stdoutDone)
	// Now that we have the port, go ahead and create a gRPC client connection to it.
	conn, err := grpc.Dial("127.0.0.1:"+port, grpc.WithInsecure(), grpc.WithUnaryInterceptor(
		rpcutil.OpenTracingClientInterceptor(),
	))
	if err != nil {
		return nil, errors.Wrapf(err, "could not dial plugin [%v] over RPC", bin)
	}

	// Now wait for the gRPC connection to the plugin to become ready.
	// TODO[pulumi/pulumi#337]: in theory, this should be unnecessary. gRPC's default WaitForReady behavior
	// should auto-retry appropriately. On Linux, however, we are observing different behavior. In the meantime
	// while this bug exists, we'll simply do a bit of waiting of our own up front.
	timeout, _ := context.WithTimeout(context.Background(), pluginRPCConnectionTimeout)
	for {
		s := conn.GetState()
		if s == connectivity.Ready {
			// The connection is supposedly ready; but we will make sure it is *actually* ready by sending a dummy
			// method invocation to the server. Until it responds successfully, we can't safely proceed.
		outer:
			for {
				err = grpc.Invoke(timeout, "", nil, nil, conn)
				if err == nil {
					break // successful connect
				} else {
					// We have an error; see if it's a known status and, if so, react appropriately.
					status, ok := status.FromError(err)
					if ok {
						switch status.Code() {
						case codes.Unavailable:
							// The server is unavailable. This is the Linux bug. Wait a little and retry.
							time.Sleep(time.Millisecond * 10)
							continue // keep retrying
						case codes.Unimplemented, codes.ResourceExhausted:
							// Since we sent "" as the method above, this is the expected response. Ready to go.
							break outer
						}
					}

					// Unexpected error; get outta dodge.
					return nil, errors.Wrapf(err, "%v plugin [%v] did not come alive", prefix, bin)
				}
			}

			break
		}

		// Not ready yet; ask the gRPC client APIs to block until the state transitions again so we can retry.
		if !conn.WaitForStateChange(timeout, s) {
			return nil, errors.Errorf("%v plugin [%v] did not begin responding to RPC connections", prefix, bin)
		}
	}

	// Done; store the connection and return the plugin info.
	plug.Conn = conn
	return plug, nil
}
func execPlugin(bin string, pluginArgs []string, pwd string) (*plugin, error) {
	var args []string

	// Flow the logging information if set.
	if logging.LogFlow {
		if logging.LogToStderr {
			args = append(args, "-logtostderr")
		}
		if logging.Verbose > 0 {
			args = append(args, "-v="+strconv.Itoa(logging.Verbose))
		}
	}

	// Always flow tracing settings.
	if cmdutil.TracingEndpoint != "" {
		args = append(args, "--tracing", cmdutil.TracingEndpoint)
	}

	args = append(args, pluginArgs...)
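
	// For example (illustrative only): with logging flowed to stderr at -v=9 and a tracing endpoint configured,
	// the resulting invocation looks roughly like: <bin> -logtostderr -v=9 --tracing <endpoint> <pluginArgs...>.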

	cmd := exec.Command(bin, args...)
	cmdutil.RegisterProcessGroup(cmd)
	cmd.Dir = pwd
	in, _ := cmd.StdinPipe()
	out, _ := cmd.StdoutPipe()
	err, _ := cmd.StderrPipe()
	if err := cmd.Start(); err != nil {
		return nil, err
	}

	return &plugin{
		Bin:    bin,
		Args:   args,
		Proc:   cmd.Process,
		Stdin:  in,
		Stdout: out,
		Stderr: err,
	}, nil
}
func (p *plugin) Close() error {
	if p.Conn != nil {
		closerr := p.Conn.Close()
		contract.IgnoreError(closerr)
	}
	var result error

	// On each platform, plugins are not loaded directly; instead, a shell launches each plugin as a child process.
	// We therefore need to kill all of the children of the PID we have recorded as well; otherwise, we would block
	// waiting for those child processes to close.
	if err := cmdutil.KillChildren(p.Proc.Pid); err != nil {
		result = multierror.Append(result, err)
	}

	// IDEA: consider a more graceful termination than just SIGKILL.
	if err := p.Proc.Kill(); err != nil {
		result = multierror.Append(result, err)
	}

	// Wait for stdout and stderr to drain.
	if p.stdoutDone != nil {
		<-p.stdoutDone
	}
	if p.stderrDone != nil {
		<-p.stderrDone
	}

	return result
}