pulumi/pkg/resource/deploy/plan.go
pat@pulumi.com 6b66437fae Track resources that are pending deletion in checkpoints.
During the course of a `pulumi update`, it is possible for a resource to
become slated for deletion. In the case that this deletion is part of a
replacement, another resource with the same URN as the to-be-deleted
resource will have been created earlier. If the `update` fails after the
replacement resource is created but before the original resource has been
deleted, the snapshot must capture that the original resource still exists
and should be deleted in a future update without losing track of the order
in which the deletion must occur relative to other deletes. Currently, we
are unable to track this information because our checkpoints require
that no two resources have the same URN.

To fix this, these changes introduce to the update engine the notion of a
resource that is pending deletion and change checkpoint serialization to
use an array of resources rather than a map. The meaning of the former is
straightforward: a resource that is pending deletion should be deleted
during the next update.
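
To make the ordering concrete, here is a minimal, self-contained sketch of
how an array-based snapshot can hold two resources with the same URN. The
`state` type is an illustrative stand-in for `resource.State`; this is not
the engine's actual serialization code:

    package main

    import "fmt"

    // state stands in for resource.State: a URN plus the new pending-delete flag.
    type state struct {
        URN    string
        Delete bool
    }

    func main() {
        // After a failed replacement, the new and old resources coexist. An
        // array keeps both, in order; a map keyed by URN could not.
        resources := []state{
            {URN: "urn:pulumi:prod::proj::bucket::site"},               // replacement, created first
            {URN: "urn:pulumi:prod::proj::bucket::site", Delete: true}, // original, pending deletion
        }
        for _, r := range resources {
            fmt.Println(r.URN, "delete:", r.Delete)
        }
    }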

This is a fairly major breaking change to our checkpoint files, as the
map of resources is no more. Happily, though, it makes our checkpoint
files a bit more "obvious" to any tooling that might want to grovel
or rewrite them.

Fixes #432, #387.
2017-10-18 17:09:00 -07:00

// Copyright 2016-2017, Pulumi Corporation. All rights reserved.

package deploy

import (
	"github.com/pulumi/pulumi/pkg/diag"
	"github.com/pulumi/pulumi/pkg/resource"
	"github.com/pulumi/pulumi/pkg/resource/plugin"
	"github.com/pulumi/pulumi/pkg/tokens"
	"github.com/pulumi/pulumi/pkg/util/contract"
)

// TODO[pulumi/pulumi#106]: plan parallelism.

// Plan is the output of analyzing resource graphs and contains the steps necessary to perform an infrastructure
// deployment. A plan can be generated out of whole cloth from a resource graph -- in the case of new deployments --
// however, it can alternatively be generated by diffing two resource graphs -- in the case of updates to existing
// stacks (presumably more common). The plan contains step objects that can be used to drive a deployment.
type Plan struct {
	ctx       *plugin.Context                  // the plugin context (for provider operations).
	target    *Target                          // the deployment target.
	prev      *Snapshot                        // the old resource snapshot for comparison.
	olds      map[resource.URN]*resource.State // a map of all old resources.
	source    Source                           // the source of new resources.
	analyzers []tokens.QName                   // the analyzers to run during this plan's generation.
}
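
// Conceptually (a sketch: actual step generation lives elsewhere in the
// engine, and `changed` is a hypothetical predicate), the diff against
// `olds` drives the choice of operation for each resource in the new state:
//
//	if old, has := p.olds[res.URN]; !has {
//		// create: res exists in the new state but not the old.
//	} else if changed(old, res) {
//		// update (or replace): it exists in both, but differs.
//	}
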
// NewPlan creates a new deployment plan from a resource snapshot plus a package to evaluate.
//
// From the old and new states, it understands how to orchestrate an evaluation and analyze the resulting resources.
// The plan may be used to simply inspect a series of operations, or actually perform them; these operations are
// generated based on analysis of the old and new states. If a resource exists in new, but not old, for example, it
// results in a create; if it exists in both, but is different, it results in an update; and so on and so forth.
//
// Note that a plan uses internal concurrency and parallelism in various ways, so it must be closed if for some reason
// a plan isn't carried out to its final conclusion. This will result in cancelation and reclamation of OS resources.
func NewPlan(ctx *plugin.Context, target *Target, prev *Snapshot, source Source, analyzers []tokens.QName) *Plan {
	contract.Assert(ctx != nil)
	contract.Assert(target != nil)
	contract.Assert(source != nil)

	// Produce a map of all old resources for fast lookups.
	olds := make(map[resource.URN]*resource.State)
	if prev != nil {
		for _, oldres := range prev.Resources {
			// Ignore resources that are pending deletion; they must not be recorded in the lookup
			// table, because the URN now belongs to the resource that replaced them.
			if oldres.Delete {
				continue
			}
			urn := oldres.URN
			contract.Assert(olds[urn] == nil)
			olds[urn] = oldres
		}
	}

	return &Plan{
		ctx:       ctx,
		target:    target,
		prev:      prev,
		olds:      olds,
		source:    source,
		analyzers: analyzers,
	}
}
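
// A hypothetical caller (a sketch only: how a plan is ultimately driven or
// canceled lives elsewhere in the engine) observes that pending-delete
// resources never appear in the lookup map:
//
//	plan := NewPlan(ctx, target, prev, source, analyzers)
//	for urn, old := range plan.Olds() {
//		contract.Assert(!old.Delete) // pending-delete states were skipped above.
//		_ = urn
//	}
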
func (p *Plan) Ctx() *plugin.Context                    { return p.ctx }
func (p *Plan) Target() *Target                         { return p.target }
func (p *Plan) Diag() diag.Sink                         { return p.ctx.Diag }
func (p *Plan) Prev() *Snapshot                         { return p.prev }
func (p *Plan) Olds() map[resource.URN]*resource.State { return p.olds }
func (p *Plan) Source() Source                          { return p.source }

// Provider fetches the provider for a given resource type, possibly lazily allocating the plugins for it. If a
// provider could not be found, or an error occurred while creating it, a non-nil error is returned.
func (p *Plan) Provider(pkg tokens.Package) (plugin.Provider, error) {
	return p.ctx.Host.Provider(pkg)
}
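
// For example (illustrative only; error handling elided), a step executor
// might fetch the provider for a resource's package before applying an
// operation:
//
//	prov, err := p.Provider(pkg)
//	if err != nil {
//		return err
//	}
//	// ... use prov to perform the step's operation against the cloud.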