// Copyright 2016-2018, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package engine

import (
	"context"
	"sync"

	"github.com/opentracing/opentracing-go"

	"github.com/pulumi/pulumi/pkg/diag"
	"github.com/pulumi/pulumi/pkg/resource"
	"github.com/pulumi/pulumi/pkg/resource/deploy"
	"github.com/pulumi/pulumi/pkg/resource/deploy/providers"
	"github.com/pulumi/pulumi/pkg/resource/plugin"
	"github.com/pulumi/pulumi/pkg/tokens"
	"github.com/pulumi/pulumi/pkg/util/contract"
	"github.com/pulumi/pulumi/pkg/util/fsutil"
	"github.com/pulumi/pulumi/pkg/util/result"
	"github.com/pulumi/pulumi/pkg/workspace"
)

// ProjectInfoContext returns information about the current project, including its pwd, main, and plugin context.
func ProjectInfoContext(projinfo *Projinfo, host plugin.Host, config plugin.ConfigSource,
	diag, statusDiag diag.Sink, tracingSpan opentracing.Span) (string, string, *plugin.Context, error) {
	contract.Require(projinfo != nil, "projinfo")

	// If the package contains an override for the main entrypoint, use it.
	pwd, main, err := projinfo.GetPwdMain()
	if err != nil {
		return "", "", nil, err
	}

	// Create a context for plugins.
	ctx, err := plugin.NewContext(diag, statusDiag, host, config, pwd,
		projinfo.Proj.Runtime.Options(), tracingSpan)
	if err != nil {
		return "", "", nil, err
	}

	return pwd, main, ctx, nil
}

// newPlanContext creates a context for a subsequent planning operation. Callers must call Close on the
// resulting context object once they have completed the associated planning operation.
func newPlanContext(u UpdateInfo, opName string, parentSpan opentracing.SpanContext) (*planContext, error) {
	contract.Require(u != nil, "u")

	// Create a root span for the operation.
	opts := []opentracing.StartSpanOption{}
	if opName != "" {
		opts = append(opts, opentracing.Tag{Key: "operation", Value: opName})
	}
	if parentSpan != nil {
		opts = append(opts, opentracing.ChildOf(parentSpan))
	}
	tracingSpan := opentracing.StartSpan("pulumi-plan", opts...)

	return &planContext{
		Update:      u,
		TracingSpan: tracingSpan,
	}, nil
}

type planContext struct {
	Update      UpdateInfo       // The update being processed.
	TracingSpan opentracing.Span // An OpenTracing span to parent plan operations within.
}

func (ctx *planContext) Close() {
	ctx.TracingSpan.Finish()
}

// planOptions includes a full suite of options for performing a plan and/or deploy operation.
type planOptions struct {
	UpdateOptions

	// SourceFunc is a factory that returns an EvalSource to use during planning. This is the thing that
	// creates resources to compare against the current checkpoint state (e.g., by evaluating a program, etc).
	SourceFunc planSourceFunc

	DOT        bool         // true if we should print the DOT file for this plan.
	Events     eventEmitter // the channel to write events from the engine to.
	Diag       diag.Sink    // the sink to use for diag'ing.
	StatusDiag diag.Sink    // the sink to use for diag'ing status messages.

	// true if we're planning a refresh.
	isRefresh bool

	// true if we should trust the dependency graph reported by the language host. Not all Pulumi-supported languages
	// correctly report their dependencies, in which case this will be false.
	trustDependencies bool
}

// planSourceFunc is a callback that will be used to prepare for, and evaluate, the "new" state for a stack.
type planSourceFunc func(
	client deploy.BackendClient, opts planOptions, proj *workspace.Project, pwd, main string,
	target *deploy.Target, plugctx *plugin.Context, dryRun bool) (deploy.Source, error)

// plan just uses the standard logic to parse arguments, options, and to create a snapshot and plan.
func plan(ctx *Context, info *planContext, opts planOptions, dryRun bool) (*planResult, error) {
	contract.Assert(info != nil)
	contract.Assert(info.Update != nil)
	contract.Assert(opts.SourceFunc != nil)

	// First, load the package metadata and the deployment target in preparation for executing the package's program
	// and creating resources. This includes fetching its pwd and main overrides.
	proj, target := info.Update.GetProject(), info.Update.GetTarget()
	contract.Assert(proj != nil)
	contract.Assert(target != nil)
	projinfo := &Projinfo{Proj: proj, Root: info.Update.GetRoot()}
	pwd, main, plugctx, err := ProjectInfoContext(projinfo, opts.host, target,
		opts.Diag, opts.StatusDiag, info.TracingSpan)
	if err != nil {
		return nil, err
	}

	opts.trustDependencies = proj.TrustResourceDependencies()

	// Now create the state source. This may issue an error if it can't create the source. This entails,
	// for example, loading any plugins which will be required to execute a program, among other things.
	source, err := opts.SourceFunc(ctx.BackendClient, opts, proj, pwd, main, target, plugctx, dryRun)
	if err != nil {
		contract.IgnoreClose(plugctx)
		return nil, err
	}

	// If there are any analyzers in the project file, add them.
	var analyzers []tokens.QName
	if as := projinfo.Proj.Analyzers; as != nil {
		for _, a := range *as {
			analyzers = append(analyzers, a)
		}
	}

	// Append any analyzers from the command line.
	for _, a := range opts.Analyzers {
		analyzers = append(analyzers, tokens.QName(a))
	}

	// Generate a plan; this API handles all interesting cases (create, update, delete).
	plan, err := deploy.NewPlan(plugctx, target, target.Snapshot, source, analyzers, dryRun, ctx.BackendClient)
	if err != nil {
		contract.IgnoreClose(plugctx)
		return nil, err
	}
	return &planResult{
		Ctx:     info,
		Plugctx: plugctx,
		Plan:    plan,
		Options: opts,
	}, nil
}

type planResult struct {
	Ctx     *planContext    // plan context information.
	Plugctx *plugin.Context // the context containing plugins and their state.
	Plan    *deploy.Plan    // the plan created by this command.
	Options planOptions     // the options used during planning.
}

// Chdir changes the directory so that all operations from now on are relative to the project we are working with.
// It returns a function that, when run, restores the old working directory.
func (planResult *planResult) Chdir() (func(), error) {
	return fsutil.Chdir(planResult.Plugctx.Pwd)
}

// Walk enumerates all steps in the plan, calling out to the provided events at each step. It returns nil on success,
// or a non-nil result.Result if the walk failed or was canceled.
func (planResult *planResult) Walk(cancelCtx *Context, events deploy.Events, preview bool) result.Result {
	ctx, cancelFunc := context.WithCancel(context.Background())

	done := make(chan bool)
	var walkResult result.Result
	go func() {
		opts := deploy.Options{
			Events:            events,
			Parallel:          planResult.Options.Parallel,
			Refresh:           planResult.Options.Refresh,
			RefreshOnly:       planResult.Options.isRefresh,
			TrustDependencies: planResult.Options.trustDependencies,
			UseLegacyDiff:     planResult.Options.UseLegacyDiff,
		}
		walkResult = planResult.Plan.Execute(ctx, opts, preview)
		close(done)
	}()

	// Asynchronously listen for cancellation, and deliver that signal to the plan.
	go func() {
		select {
		case <-cancelCtx.Cancel.Canceled():
			// Cancel the plan's execution context, so it begins to shut down.
			cancelFunc()
		case <-done:
			return
		}
	}()

	select {
	case <-cancelCtx.Cancel.Terminated():
		return result.WrapIfNonNil(cancelCtx.Cancel.TerminateErr())

	case <-done:
		return walkResult
	}
}

func (planResult *planResult) Close() error {
	return planResult.Plugctx.Close()
}

// printPlan prints the plan's result to the plan's Options.Events stream.
func printPlan(ctx *Context, planResult *planResult, dryRun bool) (ResourceChanges, result.Result) {
	planResult.Options.Events.preludeEvent(dryRun, planResult.Ctx.Update.GetTarget().Config)

	// Walk the plan's steps and pretty-print them out.
	actions := newPlanActions(planResult.Options)
	if res := planResult.Walk(ctx, actions, true); res != nil {
		if res.IsBail() {
			return nil, res
		}

		return nil, result.Error("an error occurred while advancing the preview")
	}

	// Emit an event with a summary of operation counts.
	changes := ResourceChanges(actions.Ops)
	planResult.Options.Events.previewSummaryEvent(changes)
	return changes, nil
}

type planActions struct {
	Ops     map[deploy.StepOp]int
	Opts    planOptions
	Seen    map[resource.URN]deploy.Step
	MapLock sync.Mutex
}

func shouldReportStep(step deploy.Step, opts planOptions) bool {
	return step.Op() != deploy.OpRemovePendingReplace &&
		(opts.reportDefaultProviderSteps || !isDefaultProviderStep(step))
}

func newPlanActions(opts planOptions) *planActions {
	return &planActions{
		Ops:  make(map[deploy.StepOp]int),
		Opts: opts,
		Seen: make(map[resource.URN]deploy.Step),
	}
}

func (acts *planActions) OnResourceStepPre(step deploy.Step) (interface{}, error) {
	acts.MapLock.Lock()
	acts.Seen[step.URN()] = step
	acts.MapLock.Unlock()

	// Skip reporting if necessary.
	if !shouldReportStep(step, acts.Opts) {
		return nil, nil
	}

	acts.Opts.Events.resourcePreEvent(step, true /*planning*/, acts.Opts.Debug)

	return nil, nil
}

func (acts *planActions) OnResourceStepPost(ctx interface{},
	step deploy.Step, status resource.Status, err error) error {
	acts.MapLock.Lock()
	assertSeen(acts.Seen, step)
	acts.MapLock.Unlock()

	reportStep := shouldReportStep(step, acts.Opts)

	if err != nil {
		// We always want to report a failure. If we intend to elide this step overall, though, we report it as a
		// global message.
		reportedURN := resource.URN("")
		if reportStep {
			reportedURN = step.URN()
		}

		acts.Opts.Diag.Errorf(diag.GetPreviewFailedError(reportedURN), err)
	} else if reportStep {
		op, record := step.Op(), step.Logical()
		if acts.Opts.isRefresh && op == deploy.OpRefresh {
			// Refreshes are handled specially.
			op, record = step.(*deploy.RefreshStep).ResultOp(), true
		}

		if step.Op() == deploy.OpRead {
			record = ShouldRecordReadStep(step)
		}

		// Track the operation if shown and/or if it is a logically meaningful operation.
		if record {
			acts.MapLock.Lock()
			acts.Ops[op]++
			acts.MapLock.Unlock()
		}

		acts.Opts.Events.resourceOutputsEvent(op, step, true /*planning*/, acts.Opts.Debug)
	}

	return nil
}

func ShouldRecordReadStep(step deploy.Step) bool {
	contract.Assertf(step.Op() == deploy.OpRead, "Only call this on a Read step")

	// If reading a resource didn't result in any change to the resource, we want to record
	// it as a 'same'. That way, when things haven't actually changed but a user's app
	// performed some 'reads', those reads don't show up in the resource summary at the end.
	return step.Old() != nil &&
		step.New() != nil &&
		step.Old().Outputs != nil &&
		step.New().Outputs != nil &&
		step.Old().Outputs.Diff(step.New().Outputs) != nil
}

func (acts *planActions) OnResourceOutputs(step deploy.Step) error {
	acts.MapLock.Lock()
	assertSeen(acts.Seen, step)
	acts.MapLock.Unlock()

	// Skip reporting if necessary.
	if !shouldReportStep(step, acts.Opts) {
		return nil
	}

	// Print the resource outputs separately.
	acts.Opts.Events.resourceOutputsEvent(step.Op(), step, true /*planning*/, acts.Opts.Debug)

	return nil
}

func (acts *planActions) OnPolicyViolation(urn resource.URN, d plugin.AnalyzeDiagnostic) {
	acts.Opts.Events.policyViolationEvent(urn, d)
}

// assertSeen asserts that the given step's URN has already been recorded in the seen map.
func assertSeen(seen map[resource.URN]deploy.Step, step deploy.Step) {
	_, has := seen[step.URN()]
	contract.Assertf(has, "URN '%v' had not been marked as seen", step.URN())
}

// isDefaultProviderStep reports whether the step operates on a default provider resource.
func isDefaultProviderStep(step deploy.Step) bool {
	return providers.IsDefaultProvider(step.URN())
}