// Copyright 2016-2018, Pulumi Corporation. All rights reserved.

package deploy

import (
	"reflect"
	"time"

	"github.com/golang/glog"
	"github.com/pkg/errors"

	"github.com/pulumi/pulumi/pkg/diag"
	"github.com/pulumi/pulumi/pkg/resource"
	"github.com/pulumi/pulumi/pkg/resource/plugin"
	"github.com/pulumi/pulumi/pkg/tokens"
	"github.com/pulumi/pulumi/pkg/util/contract"
	"github.com/pulumi/pulumi/pkg/version"
	"github.com/pulumi/pulumi/pkg/workspace"
)

// Options controls the planning and deployment process.
type Options struct {
	Events   Events // an optional events callback interface.
	Parallel int    // the degree of parallelism for resource operations (<=1 for serial).
}

// Events is an interface that can be used to hook interesting engine/planning events.
type Events interface {
	OnResourceStepPre(step Step) (interface{}, error)
	OnResourceStepPost(ctx interface{}, step Step, status resource.Status, err error) error
	OnResourceOutputs(step Step) error
}
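The Events hooks follow a pre/post pattern: OnResourceStepPre returns an arbitrary context value that the engine later threads back into OnResourceStepPost for the same step. A minimal, self-contained sketch of that pattern, using standin Step and Status types rather than the real deploy/resource types:

```go
package main

import "fmt"

// Standin types for illustration only; the real package uses deploy.Step
// (an interface) and resource.Status.
type Step struct{ Op, URN string }
type Status int

// Events mirrors the hook interface with the standin types.
type Events interface {
	OnResourceStepPre(step Step) (interface{}, error)
	OnResourceStepPost(ctx interface{}, step Step, status Status, err error) error
}

// loggingEvents is a hypothetical implementation that carries the URN from
// the pre-event through to the post-event as the context value.
type loggingEvents struct{}

func (loggingEvents) OnResourceStepPre(step Step) (interface{}, error) {
	fmt.Printf("pre: %s %s\n", step.Op, step.URN)
	return step.URN, nil // handed back to OnResourceStepPost as ctx.
}

func (loggingEvents) OnResourceStepPost(ctx interface{}, step Step, status Status, err error) error {
	fmt.Printf("post: %v status=%d err=%v\n", ctx, status, err)
	return nil
}

func main() {
	var e Events = loggingEvents{}
	step := Step{Op: "create", URN: "urn:pulumi:demo::res"}
	ctx, _ := e.OnResourceStepPre(step)
	_ = e.OnResourceStepPost(ctx, step, 0, nil)
}
```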

// Start initializes and returns an iterator that can be used to step through a plan's individual steps.
func (p *Plan) Start(opts Options) (*PlanIterator, error) {
	// Ask the source for its iterator.
	src, err := p.source.Iterate(opts)
	if err != nil {
		return nil, err
	}

	// Create an iterator that can be used to perform the planning process.
	return &PlanIterator{
		p:           p,
		opts:        opts,
		src:         src,
		urns:        make(map[resource.URN]bool),
		creates:     make(map[resource.URN]bool),
		updates:     make(map[resource.URN]bool),
		replaces:    make(map[resource.URN]bool),
		deletes:     make(map[resource.URN]bool),
		sames:       make(map[resource.URN]bool),
		pendingNews: make(map[resource.URN]Step),
		dones:       make(map[*resource.State]bool),
	}, nil
}

// PlanSummary is an interface for summarizing the progress of a plan.
type PlanSummary interface {
	Steps() int
	Creates() map[resource.URN]bool
	Updates() map[resource.URN]bool
	Replaces() map[resource.URN]bool
	Deletes() map[resource.URN]bool
	Sames() map[resource.URN]bool
	Resources() []*resource.State
	Snap() *Snapshot
}

// PlanIterator can be used to step through and/or execute a plan's proposed actions.
type PlanIterator struct {
	p    *Plan          // the plan to which this iterator belongs.
	opts Options        // the options this iterator was created with.
	src  SourceIterator // the iterator that fetches source resources.

	urns     map[resource.URN]bool // URNs discovered.
	creates  map[resource.URN]bool // URNs discovered to be created.
	updates  map[resource.URN]bool // URNs discovered to be updated.
	replaces map[resource.URN]bool // URNs discovered to be replaced.
	deletes  map[resource.URN]bool // URNs discovered to be deleted.
	sames    map[resource.URN]bool // URNs discovered to be the same.

	pendingNews map[resource.URN]Step // a map of logical steps currently active.

	stepqueue []Step                   // a queue of steps to drain.
	delqueue  []Step                   // a queue of deletes left to perform.
	resources []*resource.State        // the resulting ordered resource states.
	dones     map[*resource.State]bool // true for each old state we're done with.

	srcdone bool // true if the source interpreter has been run to completion.
	done    bool // true if the planning and associated iteration has finished.
}

func (iter *PlanIterator) Plan() *Plan { return iter.p }

func (iter *PlanIterator) Steps() int {
	return len(iter.creates) + len(iter.updates) + len(iter.replaces) + len(iter.deletes)
}

func (iter *PlanIterator) Creates() map[resource.URN]bool  { return iter.creates }
func (iter *PlanIterator) Updates() map[resource.URN]bool  { return iter.updates }
func (iter *PlanIterator) Replaces() map[resource.URN]bool { return iter.replaces }
func (iter *PlanIterator) Deletes() map[resource.URN]bool  { return iter.deletes }
func (iter *PlanIterator) Sames() map[resource.URN]bool    { return iter.sames }
func (iter *PlanIterator) Resources() []*resource.State    { return iter.resources }
func (iter *PlanIterator) Dones() map[*resource.State]bool { return iter.dones }
func (iter *PlanIterator) Done() bool                      { return iter.done }
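The Steps tally counts only mutating operations; URNs recorded as "same" are deliberately excluded from the count. A sketch of that arithmetic with plain string URNs as standins:

```go
package main

import "fmt"

// steps mirrors PlanIterator.Steps: only creates, updates, replaces, and
// deletes count toward the step total; sames are excluded. URNs are plain
// strings here for illustration.
func steps(creates, updates, replaces, deletes, sames map[string]bool) int {
	return len(creates) + len(updates) + len(replaces) + len(deletes)
}

func main() {
	creates := map[string]bool{"urn:a": true}
	updates := map[string]bool{"urn:b": true, "urn:c": true}
	sames := map[string]bool{"urn:d": true} // not counted
	fmt.Println(steps(creates, updates, nil, nil, sames)) // 3
}
```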
2017-11-21 02:38:09 +01:00
|
|
|
// Apply performs a plan's step and records its result in the iterator's state.
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
func (iter *PlanIterator) Apply(step Step, preview bool) (resource.Status, error) {
|
2017-11-21 02:38:09 +01:00
|
|
|
urn := step.URN()
|
|
|
|
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
// If there is a pre-event, raise it.
|
|
|
|
var eventctx interface{}
|
|
|
|
if e := iter.opts.Events; e != nil {
|
|
|
|
var eventerr error
|
|
|
|
eventctx, eventerr = e.OnResourceStepPre(step)
|
|
|
|
if eventerr != nil {
|
2018-02-02 05:23:26 +01:00
|
|
|
return resource.StatusOK, errors.Wrapf(eventerr, "pre-step event returned an error")
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-11-21 02:38:09 +01:00
|
|
|
// Apply the step.
|
2017-11-29 17:36:04 +01:00
|
|
|
glog.V(9).Infof("Applying step %v on %v (preview %v)", step.Op(), urn, preview)
|
2017-11-30 00:05:58 +01:00
|
|
|
status, err := step.Apply(preview)
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
|
2017-11-29 17:36:04 +01:00
|
|
|
// If there is no error, proceed to save the state; otherwise, go straight to the exit codepath.
|
|
|
|
if err == nil {
|
2017-12-07 18:44:38 +01:00
|
|
|
// If we have a state object, and this is a create or update, remember it, as we may need to update it later.
|
|
|
|
if step.Logical() && step.New() != nil {
|
2017-12-10 17:37:22 +01:00
|
|
|
if prior, has := iter.pendingNews[urn]; has {
|
2017-12-07 18:44:38 +01:00
|
|
|
return resource.StatusOK,
|
2018-02-02 05:23:26 +01:00
|
|
|
errors.Errorf("resource '%s' registered twice (%s and %s)", urn, prior.Op(), step.Op())
|
2017-11-29 17:36:04 +01:00
|
|
|
}
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
|
2017-12-10 17:37:22 +01:00
|
|
|
iter.pendingNews[urn] = step
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
}
|
2017-11-21 02:38:09 +01:00
|
|
|
}
|
Bring back component outputs
This change brings back component outputs to the overall system again.
In doing so, it generally overhauls the way we do resource RPCs a bit:
* Instead of RegisterResource and CompleteResource, we call these
BeginRegisterResource and EndRegisterResource, which begins to model
these as effectively "asynchronous" resource requests. This should also
help with parallelism (https://github.com/pulumi/pulumi/issues/106).
* Flip the CLI/engine a little on its head. Rather than it driving the
planning and deployment process, we move more to a model where it
simply observes it. This is done by implementing an event handler
interface with three events: OnResourceStepPre, OnResourceStepPost,
and OnResourceComplete. The first two are invoked immediately before
and after any step operation, and the latter is invoked whenever a
EndRegisterResource comes in. The reason for the asymmetry here is
that the checkpointing logic in the deployment engine is largely
untouched (intentionally, as this is a sensitive part of the system),
and so the "begin"/"end" nature doesn't flow through faithfully.
* Also make the engine more event-oriented in its terminology and the
way it handles the incoming BeginRegisterResource and
EndRegisterResource events from the language host. This is the first
step down a long road of incrementally refactoring the engine to work
this way, a necessary prerequisite for parallelism.
2017-11-29 16:42:14 +01:00
|
|
|
|
2017-11-29 17:36:04 +01:00
|
|
|
// If there is a post-event, raise it, and in any case, return the results.
|
|
|
|
if e := iter.opts.Events; e != nil {
|
2017-11-30 00:05:58 +01:00
|
|
|
if eventerr := e.OnResourceStepPost(eventctx, step, status, err); eventerr != nil {
|
2018-02-02 05:23:26 +01:00
|
|
|
return status, errors.Wrapf(eventerr, "post-step event returned an error")
|
2017-11-29 17:36:04 +01:00
|
|
|
}
|
|
|
|
}

	// At this point, if err is not nil, we've already issued an error message through our
	// diag subsystem and we need to bail.
	//
	// This error message is ultimately what's going to be presented to the user at the top
	// level, so the message here is intentionally vague; we should have already presented
	// a more specific error message.
	if err != nil {
		if preview {
			return status, errors.New("preview failed")
		}
		return status, errors.New("update failed")
	}

	return status, nil
}
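The post-event raised above follows a small observer pattern: the engine invokes the handler after each step and aborts if the handler returns an error. A minimal self-contained sketch of that shape, using stub `Step` and `Status` types and a hypothetical `Events` interface — the names and signatures here are illustrative stand-ins, not the engine's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// Stub stand-ins for the engine's real Step and Status types; illustrative only.
type Step struct{ Op string }

type Status int

// Events mirrors the observer hook raised above: the engine invokes the
// post-step handler and treats a non-nil error from it as fatal.
type Events interface {
	OnResourceStepPost(step Step, status Status, err error) error
}

type loggingEvents struct{ failPost bool }

func (l loggingEvents) OnResourceStepPost(step Step, status Status, err error) error {
	fmt.Printf("post-step: op=%s status=%d err=%v\n", step.Op, status, err)
	if l.failPost {
		return errors.New("handler rejected step")
	}
	return nil
}

// applyStep mimics the control flow above: raise the post-event if an
// observer is registered, and wrap any handler error for the caller.
func applyStep(e Events, step Step, status Status, stepErr error) (Status, error) {
	if e != nil {
		if eventerr := e.OnResourceStepPost(step, status, stepErr); eventerr != nil {
			return status, fmt.Errorf("post-step event returned an error: %w", eventerr)
		}
	}
	return status, stepErr
}

func main() {
	if _, err := applyStep(loggingEvents{}, Step{Op: "create"}, 0, nil); err != nil {
		panic(err)
	}
	if _, err := applyStep(loggingEvents{failPost: true}, Step{Op: "update"}, 0, nil); err == nil {
		panic("expected the failing observer to abort the step")
	}
}
```

The asymmetry noted in the surrounding code (a post hook that can fail the step, versus checkpointing logic that stays untouched) is exactly what the error wrapping here preserves: the step's own error still flows back even when the observer succeeds.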

// Close terminates the iteration of this plan.
func (iter *PlanIterator) Close() error {
	return iter.src.Close()
}

// Next advances the plan by a single step, and returns the next step to be performed. In doing so, it will perform
// evaluation of the program as much as necessary to determine the next step. If there is no further action to be
// taken, Next will return a nil step pointer.
func (iter *PlanIterator) Next() (Step, error) {
outer:
	for !iter.done {
		if len(iter.stepqueue) > 0 {
			// If any steps remain queued up from a prior event, deliver them first, one at a time.
			step := iter.stepqueue[0]
			iter.stepqueue = iter.stepqueue[1:]
			return step, nil
		} else if !iter.srcdone {
			// Otherwise, if the source has not been exhausted, ask it for its next event.
			event, err := iter.src.Next()
			if err != nil {
				return nil, err
			} else if event != nil {
				// If we have an event, drive the behavior based on which kind it is.
				switch e := event.(type) {
				case RegisterResourceEvent:
					// If the intent is to register a resource, compute the plan steps necessary to do so.
					steps, steperr := iter.makeRegisterResouceSteps(e)
					if steperr != nil {
						return nil, steperr
					}
					contract.Assert(len(steps) > 0)
					if len(steps) > 1 {
						// Queue up any extra steps so subsequent calls to Next deliver them.
						iter.stepqueue = steps[1:]
					}
					return steps[0], nil
				case RegisterResourceOutputsEvent:
					// If the intent is to complete a prior resource registration, do so. We do this by just
					// processing the request from the existing state, and do not expose our callers to it.
					if err := iter.registerResourceOutputs(e); err != nil {
						return nil, err
					}
					continue outer
				default:
					contract.Failf("Unrecognized intent from source iterator: %v", reflect.TypeOf(event))
				}
			}

			// If all returns are nil, the source is done, note it, and don't go back for more. Add any deletions to be
			// performed, and then keep going 'round the next iteration of the loop so we can wrap up the planning.
			iter.srcdone = true
			iter.delqueue = iter.computeDeletes()
		} else {
			// The interpreter has finished, so we need to now drain any deletions that piled up.
			if step := iter.nextDeleteStep(); step != nil {
				return step, nil
			}

			// Otherwise, if the deletes have quiesced, there is nothing remaining in this plan; leave.
			iter.done = true
			break
		}
	}

	return nil, nil
}
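Next hands back one step at a time and signals completion with a nil step, so a caller drains the plan in a simple loop and then closes the iterator. A minimal sketch of that contract, using a toy iterator as a stand-in for PlanIterator — all names here are illustrative, not the engine's real types:

```go
package main

import "fmt"

// toyIterator mimics the PlanIterator contract sketched by Next above:
// Next returns the next step (nil once the plan is exhausted) and Close
// releases the underlying source.
type toyIterator struct {
	steps  []string
	closed bool
}

func (it *toyIterator) Next() (*string, error) {
	if len(it.steps) == 0 {
		return nil, nil // nil step: no further action to be taken
	}
	step := it.steps[0]
	it.steps = it.steps[1:]
	return &step, nil
}

func (it *toyIterator) Close() error {
	it.closed = true
	return nil
}

// drain drives the iterator to completion the way a caller of Next would:
// loop until a nil step comes back, then Close.
func drain(it *toyIterator) ([]string, error) {
	defer it.Close()
	var applied []string
	for {
		step, err := it.Next()
		if err != nil {
			return applied, err
		}
		if step == nil {
			return applied, nil
		}
		applied = append(applied, *step)
	}
}

func main() {
	it := &toyIterator{steps: []string{"create", "update", "delete"}}
	applied, err := drain(it)
	fmt.Println(applied, err, it.closed)
	// → [create update delete] <nil> true
}
```

This one-step-at-a-time shape is also what makes the step queue above work: a single event may yield several steps, but callers only ever see one per call.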

// makeRegisterResouceSteps produces one or more steps required to achieve the desired resource goal state, or nil if
// there aren't any steps to perform (in other words, the actual known state is equivalent to the goal state). It is
// possible to return multiple steps if the current resource state necessitates it (e.g., replacements).
func (iter *PlanIterator) makeRegisterResouceSteps(e RegisterResourceEvent) ([]Step, error) {
	var invalid bool // will be set to true if this object fails validation.

	// Use the resource goal state name to produce a globally unique URN.
	goal := e.Goal()
	parentType := tokens.Type("")
	if p := goal.Parent; p != "" && p.Type() != resource.RootStackType {
		// Skip empty parents and don't use the root stack type; otherwise, use the fully qualified type.
		parentType = p.QualifiedType()
	}
	urn := resource.NewURN(iter.p.Target().Name, iter.p.source.Project(), parentType, goal.Type, goal.Name)
	if iter.urns[urn] {
		invalid = true
		// TODO[pulumi/pulumi-framework#19]: improve this error message!
		iter.p.Diag().Errorf(diag.GetDuplicateResourceURNError(urn), urn)
	}
	iter.urns[urn] = true

	// Check for an old resource so that we can figure out if this is a create, delete, etc., and/or to diff.
	old, hasOld := iter.p.Olds()[urn]
	var oldInputs resource.PropertyMap
	var oldOutputs resource.PropertyMap
	if hasOld {
		oldInputs = old.Inputs
		oldOutputs = old.Outputs
	}

	// Produce a new state object that we'll build up as operations are performed. Ultimately, this is what will
	// get serialized into the checkpoint file. Normally there are no outputs, unless this is a refresh.
	props, inputs, outputs, new := iter.getResourcePropertyStates(urn, goal)

	// Fetch the provider for this resource type, assuming it isn't just a logical one.
	var prov plugin.Provider
	var err error
	if goal.Custom {
		if prov, err = iter.Provider(goal.Type); err != nil {
			return nil, err
		}
	}
|
|
|
|
|
2018-04-18 20:12:02 +02:00
|
|
|
// See if we're performing a refresh update, which takes slightly different code-paths.
refresh := iter.p.IsRefresh()

// We only allow unknown property values to be exposed to the provider if we are performing an update preview.
allowUnknowns := iter.p.preview && !refresh

// If this isn't a refresh, ensure the provider is okay with this resource and fetch the inputs to pass to
// subsequent methods. If these are not inputs, we are just going to blindly store the outputs, so skip this.
if prov != nil && !refresh {
	var failures []plugin.CheckFailure
	inputs, failures, err = prov.Check(urn, oldInputs, inputs, allowUnknowns)
	if err != nil {
		return nil, err
	} else if iter.issueCheckErrors(new, urn, failures) {
		invalid = true
	}
	props = inputs
	new.Inputs = inputs
}

// Next, give each analyzer -- if any -- a chance to inspect the resource too.
for _, a := range iter.p.analyzers {
	var analyzer plugin.Analyzer
	analyzer, err = iter.p.ctx.Host.Analyzer(a)
	if err != nil {
		return nil, err
	} else if analyzer == nil {
		return nil, errors.Errorf("analyzer '%v' could not be loaded from your $PATH", a)
	}

	var failures []plugin.AnalyzeFailure
	failures, err = analyzer.Analyze(new.Type, props)
	if err != nil {
		return nil, err
	}
	for _, failure := range failures {
		invalid = true
		iter.p.Diag().Errorf(
			diag.GetAnalyzeResourceFailureError(urn), a, urn, failure.Property, failure.Reason)
	}
}

// If the resource isn't valid, don't proceed any further.
if invalid {
	return nil, errors.New("One or more resource validation errors occurred; refusing to proceed")
}

// Now decide what to do, step-wise:
//
//     * If the URN exists in the old snapshot, and it has been updated,
//         - Check whether the update requires replacement.
//         - If yes, create a new copy, and mark it as having been replaced.
//         - If no, simply update the existing resource in place.
//
//     * If the URN does not exist in the old snapshot, create the resource anew.
//
if hasOld {
	contract.Assert(old != nil && old.Type == new.Type)

	// The resource exists in both new and old; it could be an update. This constitutes an update if the old
	// and new properties don't match exactly. It is also possible we'll need to replace the resource if the
	// update impact assessment says so. In this case, the resource's ID will change, which might have a
	// cascading impact on subsequent updates too, since those IDs must trigger recreations, etc.
	var diff plugin.DiffResult
	if prov != nil {
		if diff, err = prov.Diff(urn, old.ID, oldOutputs, props, allowUnknowns); err != nil {
			return nil, err
		}
	}

	// Determine whether the change resulted in a diff. Our legacy behavior here entailed actually performing
	// diffs of state on the Pulumi side, whereas our new behavior is to defer to the provider to decide.
	var hasChanges bool
	switch diff.Changes {
	case plugin.DiffSome:
		hasChanges = true
	case plugin.DiffNone:
		hasChanges = false
	case plugin.DiffUnknown:
		// This is legacy behavior; just use the DeepEquals function to diff on the Pulumi side.
		if refresh {
			hasChanges = !oldOutputs.DeepEquals(outputs)
		} else {
			hasChanges = !oldInputs.DeepEquals(inputs)
		}
	default:
		return nil, errors.Errorf(
			"resource provider for %s replied with unrecognized diff state: %d", urn, diff.Changes)
	}

	// If this is an update, create the necessary step; otherwise, it's the same.
	if hasChanges {
		if diff.Replace() {
			iter.replaces[urn] = true

			// If we are going to perform a replacement, we need to recompute the default values. The above logic
			// had assumed that we were going to carry them over from the old resource, which is no longer true.
			if prov != nil && !refresh {
				var failures []plugin.CheckFailure
				inputs, failures, err = prov.Check(urn, nil, goal.Properties, allowUnknowns)
				if err != nil {
					return nil, err
				} else if iter.issueCheckErrors(new, urn, failures) {
					return nil, errors.New("One or more resource validation errors occurred; refusing to proceed")
				}
2017-10-14 23:18:43 +02:00
|
|
|
}
|
2017-12-03 01:34:16 +01:00
|
|
|
new.Inputs = inputs
|
2017-08-01 03:26:15 +02:00
|
|
|
}
			if glog.V(7) {
				glog.V(7).Infof("Planner decided to replace '%v' (oldprops=%v inputs=%v)",
					urn, oldInputs, new.Inputs)
			}

			// We have two approaches to performing replacements:
			//
			//     * CreateBeforeDelete: the default mode first creates a new instance of the resource, then
			//       updates all dependent resources to point to the new one, and finally, after all of that,
			//       deletes the old resource.  This ensures minimal downtime.
			//
			//     * DeleteBeforeCreate: this mode can be used for resources that cannot tolerate having
			//       side-by-side old and new instances alive at once.  This first deletes the resource and
			//       then creates the new one.  This may result in downtime, so it is less preferred.  Note that
			//       until pulumi/pulumi#624 is resolved, we cannot safely perform this operation on resources
			//       that have dependent resources (we would try to delete the resource while they still refer to it).
			//
			// The provider is responsible for requesting which of these two modes to use.

			if diff.DeleteBeforeReplace {
				return []Step{
					NewDeleteReplacementStep(iter, old, false),
					NewReplaceStep(iter, old, new, diff.ReplaceKeys, false),
					NewCreateReplacementStep(iter, e, old, new, diff.ReplaceKeys, false),
				}, nil
			}

			return []Step{
				NewCreateReplacementStep(iter, e, old, new, diff.ReplaceKeys, true),
				NewReplaceStep(iter, old, new, diff.ReplaceKeys, true),
				// Note that the delete step is generated "later" on, after all creates/updates finish.
			}, nil
		}

		// If we fell through, it's an update.
		iter.updates[urn] = true
		if glog.V(7) {
			glog.V(7).Infof("Planner decided to update '%v' (oldprops=%v inputs=%v)", urn, oldInputs, new.Inputs)
		}
		return []Step{NewUpdateStep(iter, e, old, new, diff.StableKeys)}, nil
	}

		// No need to update anything, the properties didn't change.
		iter.sames[urn] = true
		if glog.V(7) {
			glog.V(7).Infof("Planner decided not to update '%v' (same) (inputs=%v)", urn, new.Inputs)
		}
		return []Step{NewSameStep(iter, e, old, new)}, nil
	}

	// Otherwise, the resource isn't in the old map, so it must be a resource creation.
	iter.creates[urn] = true
	glog.V(7).Infof("Planner decided to create '%v' (inputs=%v)", urn, new.Inputs)
	return []Step{NewCreateStep(iter, e, new)}, nil
}

// getResourcePropertyStates returns the properties, inputs, outputs, and new resource state, given a goal state.
func (iter *PlanIterator) getResourcePropertyStates(urn resource.URN, goal *resource.Goal) (resource.PropertyMap,
	resource.PropertyMap, resource.PropertyMap, *resource.State) {
	props := goal.Properties
	var inputs resource.PropertyMap
	var outputs resource.PropertyMap
	if iter.p.IsRefresh() {
		// In the case of a refresh, we will preserve the old inputs (since we won't have any new ones).  Note
		// that this can lead to a state in which the inputs could not possibly have produced the outputs; this
		// will need to be reconciled manually by the programmer updating the program accordingly.
		if old, ok := iter.p.Olds()[urn]; ok {
			inputs = old.Inputs
		}
		outputs = props
	} else {
		// In the case of non-refreshes, outputs remain empty (they will be computed), but inputs are present.
		inputs = props
	}
	return props, inputs, outputs,
		resource.NewState(goal.Type, urn, goal.Custom, false, "",
			inputs, outputs, goal.Parent, goal.Protect, goal.Dependencies)
}
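The refresh/non-refresh asymmetry can be seen in isolation with plain maps; this hypothetical map-based analogue (not the engine's `resource.PropertyMap` type) shows how the same goal properties flow into outputs on a refresh but into inputs otherwise:

```go
package main

import "fmt"

// propertyStates mirrors getResourcePropertyStates with plain maps: on a
// refresh, the goal properties are treated as *outputs* and the old inputs
// (if any) are preserved; otherwise the goal properties become the inputs and
// outputs start empty, to be computed later.
func propertyStates(isRefresh bool, props, oldInputs map[string]string) (inputs, outputs map[string]string) {
	if isRefresh {
		inputs = oldInputs // may be nil if the resource wasn't in the old snapshot
		outputs = props
	} else {
		inputs = props
	}
	return inputs, outputs
}

func main() {
	props := map[string]string{"size": "large"}
	old := map[string]string{"size": "small"}

	in, out := propertyStates(true, props, old)
	fmt.Println(in["size"], out["size"]) // small large

	in, out = propertyStates(false, props, old)
	fmt.Println(in["size"], len(out)) // large 0
}
```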

// issueCheckErrors prints any check errors to the diagnostics sink.
func (iter *PlanIterator) issueCheckErrors(new *resource.State, urn resource.URN,
	failures []plugin.CheckFailure) bool {
	if len(failures) == 0 {
		return false
	}
	inputs := new.Inputs
	for _, failure := range failures {
		if failure.Property != "" {
			iter.p.Diag().Errorf(diag.GetResourcePropertyInvalidValueError(urn),
				new.Type, urn.Name(), failure.Property, inputs[failure.Property], failure.Reason)
		} else {
			iter.p.Diag().Errorf(
				diag.GetResourceInvalidError(urn), new.Type, urn.Name(), failure.Reason)
		}
	}
	return true
}
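The two diagnostic shapes — property-level failures that name the offending property and its value, versus resource-level failures that only carry a reason — can be sketched without the diagnostics sink. Here `checkFailure` and `formatCheckErrors` are simplified stand-ins that return the messages instead of emitting them:

```go
package main

import "fmt"

// checkFailure is a simplified stand-in for plugin.CheckFailure.
type checkFailure struct {
	Property string // empty means the failure applies to the whole resource
	Reason   string
}

// formatCheckErrors mirrors issueCheckErrors: property-level failures name the
// offending property and its current input value; resource-level failures just
// carry the reason.
func formatCheckErrors(typ, name string, inputs map[string]string, failures []checkFailure) []string {
	var msgs []string
	for _, f := range failures {
		if f.Property != "" {
			msgs = append(msgs, fmt.Sprintf("%s '%s': property %q has invalid value %q: %s",
				typ, name, f.Property, inputs[f.Property], f.Reason))
		} else {
			msgs = append(msgs, fmt.Sprintf("%s '%s' is invalid: %s", typ, name, f.Reason))
		}
	}
	return msgs
}

func main() {
	msgs := formatCheckErrors("aws:s3/bucket:Bucket", "mybucket",
		map[string]string{"acl": "banana"},
		[]checkFailure{{Property: "acl", Reason: "unknown ACL"}, {Reason: "missing region"}})
	for _, m := range msgs {
		fmt.Println(m)
	}
}
```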

// registerResourceOutputs records the final outputs for a resource whose registration is pending completion.
func (iter *PlanIterator) registerResourceOutputs(e RegisterResourceOutputsEvent) error {
	// Look up the final state in the pending registration list.
	urn := e.URN()
	reg, has := iter.pendingNews[urn]
	contract.Assertf(has, "cannot complete a resource '%v' whose registration isn't pending", urn)
	contract.Assertf(reg != nil, "expected a non-nil resource step ('%v')", urn)
	delete(iter.pendingNews, urn)

	// Unconditionally set the resource's outputs to what was provided.  This intentionally overwrites whatever
	// might already be there, since otherwise "deleting" outputs would have no effect.
	outs := e.Outputs()
	glog.V(7).Infof("Registered resource outputs %s: old=#%d, new=#%d", urn, len(reg.New().Outputs), len(outs))
	reg.New().Outputs = outs

	// If there is an event subscription for finishing the resource, execute it.
	if events := iter.opts.Events; events != nil {
		if eventerr := events.OnResourceOutputs(reg); eventerr != nil {
			return errors.Wrap(eventerr, "resource complete event returned an error")
		}
	}

	// Finally, let the language provider know that we're done processing the event.
	e.Done()
	return nil
}

// computeDeletes creates a list of deletes to perform.  This will include any resources in the snapshot that were
// not encountered in the input, along with any resources that were replaced.
func (iter *PlanIterator) computeDeletes() []Step {
	// To compute the deletion list, we must walk the list of old resources *backwards*.  This is because the list is
	// stored in dependency order, and earlier elements are possibly leaf nodes for later elements.  We must not delete
	// dependencies prior to their dependent nodes.
	var dels []Step
	if prev := iter.p.prev; prev != nil {
		for i := len(prev.Resources) - 1; i >= 0; i-- {
			// If this resource is explicitly marked for deletion or wasn't seen at all, delete it.
			res := prev.Resources[i]
			if res.Delete {
				glog.V(7).Infof("Planner decided to delete '%v' due to replacement", res.URN)
				iter.deletes[res.URN] = true
				dels = append(dels, NewDeleteReplacementStep(iter, res, true))
			} else if !iter.sames[res.URN] && !iter.updates[res.URN] && !iter.replaces[res.URN] {
				glog.V(7).Infof("Planner decided to delete '%v'", res.URN)
				iter.deletes[res.URN] = true
				dels = append(dels, NewDeleteStep(iter, res))
			}
		}
	}
	return dels
}

// nextDeleteStep produces a new step that deletes a resource if necessary.
func (iter *PlanIterator) nextDeleteStep() Step {
	if len(iter.delqueue) > 0 {
		del := iter.delqueue[0]
		iter.delqueue = iter.delqueue[1:]
		return del
	}
	return nil
}

// Snap returns a fresh snapshot that takes into account everything that has happened up till this point.  Namely, if a
// failure happens partway through, the untouched snapshot elements will be retained, while any updates will be
// preserved.  If no failure happens, the snapshot naturally reflects the final state of all resources.
func (iter *PlanIterator) Snap() *Snapshot {
	// At this point we have two resource DAGs. One of these is the base DAG for this plan; the other is the current
	// DAG for this plan. Any resource r may be present in both DAGs. In order to produce a snapshot, we need to merge
	// these DAGs such that all resource dependencies are correctly preserved. Conceptually, the merge proceeds as
	// follows:
	//
	// - Begin with an empty merged DAG.
	// - For each resource r in the current DAG, insert r and its outgoing edges into the merged DAG.
	// - For each resource r in the base DAG:
	//     - If r is in the merged DAG, we are done: if the resource is in the merged DAG, it must have been in the
	//       current DAG, which accurately captures its current dependencies.
	//     - If r is not in the merged DAG, insert it and its outgoing edges into the merged DAG.
	//
	// Physically, however, each DAG is represented as a list of resources without explicit dependency edges. In place
	// of edges, it is assumed that the list represents a valid topological sort of its source DAG. Thus, any resource
	// r at index i in a list L must be assumed to be dependent on all resources in L with index j s.t. j < i. Due to
	// this representation, we implement the algorithm above as follows to produce a merged list that represents a
	// valid topological sort of the merged DAG:
	//
	// - Begin with an empty merged list.
	// - For each resource r in the current list, append r to the merged list. r must be in a correct location in the
	//   merged list, as its position relative to its assumed dependencies has not changed.
	// - For each resource r in the base list:
	//     - If r is in the merged list, we are done by the logic given in the original algorithm.
	//     - If r is not in the merged list, append r to the merged list. r must be in a correct location in the
	//       merged list:
	//         - If any of r's dependencies were in the current list, they must already be in the merged list and
	//           their relative order w.r.t. r has not changed.
	//         - If any of r's dependencies were not in the current list, they must already be in the merged list, as
	//           they would have been appended to the list before r.

	// Start with a copy of the resources produced during the evaluation of the current plan.
	resources := make([]*resource.State, len(iter.resources))
	copy(resources, iter.resources)

	// If the plan has not finished executing, append any resources from the base plan that were not produced by the
	// current plan.
	if !iter.done {
		if prev := iter.p.prev; prev != nil {
			for _, res := range prev.Resources {
				if !iter.dones[res] {
					resources = append(resources, res)
				}
			}
		}
	}

	// Now produce a manifest and snapshot.
	v, plugs := iter.SnapVersions()
	manifest := Manifest{
		Time:    time.Now(),
		Version: v,
		Plugins: plugs,
	}
	manifest.Magic = manifest.NewMagic()
	return NewSnapshot(iter.p.Target().Name, manifest, resources)
}

// SnapVersions returns all versions used in the generation of this snapshot.  Note that no attempt is made to
// "merge" with old version information.  So, if a checkpoint doesn't end up loading all of the possible plugins
// it could ever load -- e.g., due to a failure -- there will be some resources in the checkpoint snapshot that
// were loaded by plugins that never got loaded this time around.  In other words, this list is not stable.
func (iter *PlanIterator) SnapVersions() (string, []workspace.PluginInfo) {
	return version.Version, iter.p.ctx.Host.ListPlugins()
}

// MarkStateSnapshot marks an old state snapshot as being processed.  This is done to recover from failures partway
// through the application of a deployment plan.  Any old state that has not yet been recovered needs to be kept.
func (iter *PlanIterator) MarkStateSnapshot(state *resource.State) {
	contract.Assert(state != nil)
	iter.dones[state] = true
	glog.V(9).Infof("Marked old state snapshot as done: %v", state.URN)
}

// AppendStateSnapshot appends a resource's state to the current snapshot.
func (iter *PlanIterator) AppendStateSnapshot(state *resource.State) {
	contract.Assert(state != nil)
	iter.resources = append(iter.resources, state)
	glog.V(9).Infof("Appended new state snapshot to be written: %v", state.URN)
}

// Provider fetches the provider for a given resource type, possibly lazily allocating the plugins for it.  If a
// provider could not be found, or an error occurred while creating it, a non-nil error is returned.
func (iter *PlanIterator) Provider(t tokens.Type) (plugin.Provider, error) {
	pkg := t.Package()
	prov, err := iter.p.Provider(pkg)
	if err != nil {
		return nil, err
	} else if prov == nil {
		return nil, errors.Errorf("could not load resource provider for package '%v' from $PATH", pkg)
	}
	return prov, nil
}
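The lookup-and-validate pattern here — propagate a lookup error, and turn a nil result into a descriptive error rather than returning it — generalizes to any registry. A toy analogue (a plain string registry standing in for the plugin host; names here are illustrative, not the engine's API):

```go
package main

import "fmt"

// getProvider mirrors Provider's pattern against a toy registry: a missing
// entry becomes a descriptive error, and only a non-empty provider is
// returned successfully.
func getProvider(registry map[string]string, pkg string) (string, error) {
	prov, ok := registry[pkg]
	if !ok {
		return "", fmt.Errorf("could not load resource provider for package '%v'", pkg)
	}
	return prov, nil
}

func main() {
	registry := map[string]string{"aws": "pulumi-provider-aws"}

	p, err := getProvider(registry, "aws")
	fmt.Println(p, err) // pulumi-provider-aws <nil>

	_, err = getProvider(registry, "azure")
	fmt.Println(err != nil) // true
}
```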