// Copyright 2016-2018, Pulumi Corporation. All rights reserved.
package cloud
import (
	"bytes"
	"context"
	"encoding/base64"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"os"
	"path"
	"runtime"
	"strconv"
	"strings"
	"time"

	"github.com/cheggaaa/pb"
	"github.com/hashicorp/go-multierror"
	"github.com/opentracing/opentracing-go"
	"github.com/pkg/errors"
	survey "gopkg.in/AlecAivazis/survey.v1"
	surveycore "gopkg.in/AlecAivazis/survey.v1/core"

	"github.com/pulumi/pulumi/pkg/apitype"
	"github.com/pulumi/pulumi/pkg/backend"
	"github.com/pulumi/pulumi/pkg/backend/cloud/client"
	"github.com/pulumi/pulumi/pkg/backend/local"
	"github.com/pulumi/pulumi/pkg/diag"
	"github.com/pulumi/pulumi/pkg/diag/colors"
	"github.com/pulumi/pulumi/pkg/engine"
	"github.com/pulumi/pulumi/pkg/operations"
	"github.com/pulumi/pulumi/pkg/resource"
	"github.com/pulumi/pulumi/pkg/resource/config"
	"github.com/pulumi/pulumi/pkg/resource/deploy"
	"github.com/pulumi/pulumi/pkg/tokens"
	"github.com/pulumi/pulumi/pkg/util/archive"
	"github.com/pulumi/pulumi/pkg/util/cmdutil"
	"github.com/pulumi/pulumi/pkg/util/contract"
	"github.com/pulumi/pulumi/pkg/util/logging"
	"github.com/pulumi/pulumi/pkg/util/retry"
	"github.com/pulumi/pulumi/pkg/workspace"
)
const (
	// PulumiCloudURL is the Cloud URL used if no environment or explicit cloud is chosen.
	PulumiCloudURL = "https://" + defaultAPIURLPrefix + "pulumi.com"

	// defaultAPIURLPrefix is the assumed Cloud URL prefix for typical Pulumi Cloud API endpoints.
	defaultAPIURLPrefix = "api."

	// defaultURLEnvVar can be set to override the default cloud chosen, if `--cloud` is not present.
	defaultURLEnvVar = "PULUMI_API"

	// AccessTokenEnvVar is the environment variable used to bypass a prompt on login.
	AccessTokenEnvVar = "PULUMI_ACCESS_TOKEN"
)

// DefaultURL returns the default cloud URL. This may be overridden using the PULUMI_API environment
// variable. If no override is found, and we are authenticated with a cloud, choose that. Otherwise,
// we will default to the https://api.pulumi.com/ endpoint.
func DefaultURL() string {
	return ValueOrDefaultURL("")
}

// ValueOrDefaultURL returns the value if specified, or the default cloud URL otherwise.
func ValueOrDefaultURL(cloudURL string) string {
	// If we have a cloud URL, just return it.
	if cloudURL != "" {
		return cloudURL
	}

	// Otherwise, respect the PULUMI_API override.
	if cloudURL := os.Getenv(defaultURLEnvVar); cloudURL != "" {
		return cloudURL
	}

	// If that didn't work, see if we have a current cloud, and use that. Note we need to be careful
	// to ignore the local cloud.
	if creds, err := workspace.GetStoredCredentials(); err == nil {
		if creds.Current != "" && !local.IsLocalBackendURL(creds.Current) {
			return creds.Current
		}
	}

	// If none of those led to a cloud URL, simply return the default.
	return PulumiCloudURL
}

// barCloser is an implementation of io.Closer that finishes a progress bar upon Close() as well as
// closing its underlying readCloser.
type barCloser struct {
	bar        *pb.ProgressBar
	readCloser io.ReadCloser
}

func (bc *barCloser) Read(dest []byte) (int, error) {
	return bc.readCloser.Read(dest)
}

func (bc *barCloser) Close() error {
	bc.bar.Finish()
	return bc.readCloser.Close()
}

func newBarProxyReadCloser(bar *pb.ProgressBar, r io.Reader) io.ReadCloser {
	return &barCloser{
		bar:        bar,
		readCloser: bar.NewProxyReader(r),
	}
}
// Backend extends the base backend interface with specific information about cloud backends.
type Backend interface {
	backend.Backend

	CloudURL() string

	DownloadPlugin(ctx context.Context, info workspace.PluginInfo, progress bool) (io.ReadCloser, error)
	DownloadTemplate(ctx context.Context, name string, progress bool) (io.ReadCloser, error)
	ListTemplates(ctx context.Context) ([]workspace.Template, error)

	CancelCurrentUpdate(ctx context.Context, stackRef backend.StackReference) error
	StackConsoleURL(stackRef backend.StackReference) (string, error)
}
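The interface-extension pattern used here — a specialized interface embedding the general one, so callers holding the general type can type-assert down to the cloud-specific surface — can be sketched with stdlib only. The names `Base`, `Cloud`, and `impl` below are illustrative, not the real pulumi types.

```go
package main

import "fmt"

// Base stands in for backend.Backend: the general interface all backends satisfy.
type Base interface {
	Name() string
}

// Cloud stands in for cloud.Backend: it embeds Base and adds cloud-only methods.
type Cloud interface {
	Base
	CloudURL() string
}

type impl struct{ url string }

func (i *impl) Name() string     { return "cloud" }
func (i *impl) CloudURL() string { return i.url }

func main() {
	var b Base = &impl{url: "https://api.pulumi.com"}
	// Dynamic type test: does this backend also offer the cloud surface?
	if c, ok := b.(Cloud); ok {
		fmt.Println(c.CloudURL())
	}
}
```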
|
|
|
|
|
Improve the overall cloud CLI experience
This improves the overall cloud CLI experience workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️
https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT CLOUD URL
my-cloud-stack 2017-12-01 ... 3 https://pulumi.com/
my-faf-stack n/a 0 n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is resonsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface can be implemented to substitute
a new backend. This has operations to get and list stacks,
perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
type cloudBackend struct {
	d      diag.Sink
	url    string
	client *client.Client
}
// New creates a new Pulumi backend for the given cloud API URL and token.
func New(d diag.Sink, cloudURL string) (Backend, error) {
	cloudURL = ValueOrDefaultURL(cloudURL)
	apiToken, err := workspace.GetAccessToken(cloudURL)
	if err != nil {
		return nil, errors.Wrap(err, "getting stored credentials")
	}

	return &cloudBackend{
		d:      d,
		url:    cloudURL,
		client: client.NewClient(cloudURL, apiToken),
	}, nil
}
// Login logs into the target cloud URL and returns the cloud backend for it.
func Login(ctx context.Context, d diag.Sink, cloudURL string) (Backend, error) {
	cloudURL = ValueOrDefaultURL(cloudURL)

	// If we have a saved access token, and it is valid, use it.
	existingToken, err := workspace.GetAccessToken(cloudURL)
	if err == nil && existingToken != "" {
		if valid, _ := IsValidAccessToken(ctx, cloudURL, existingToken); valid {
			// Save the token. While it hasn't changed, this updates the current cloud we are logged into as well.
			if err = workspace.StoreAccessToken(cloudURL, existingToken, true); err != nil {
				return nil, err
			}
			return New(d, cloudURL)
		}
	}

	// We intentionally don't accept command-line args for the user's access token. Having it in
	// .bash_history is not great, and specifying it via flag isn't of much use.
	accessToken := os.Getenv(AccessTokenEnvVar)
	if accessToken != "" {
		fmt.Printf("Using access token from %s\n", AccessTokenEnvVar)
	} else {
		token, readerr := cmdutil.ReadConsoleNoEcho(
			fmt.Sprintf("Enter your Pulumi access token from %s", cloudConsoleURL(cloudURL, "account")))
		if readerr != nil {
			return nil, readerr
		}
		accessToken = token
	}

	// Try to use the credentials to see if they are valid.
	valid, err := IsValidAccessToken(ctx, cloudURL, accessToken)
	if err != nil {
		return nil, err
	} else if !valid {
		return nil, errors.Errorf("invalid access token")
	}

	// Save them.
	if err = workspace.StoreAccessToken(cloudURL, accessToken, true); err != nil {
		return nil, err
	}

	return New(d, cloudURL)
}
func (b *cloudBackend) StackConsoleURL(stackRef backend.StackReference) (string, error) {
	stackID, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return "", err
	}

	return b.cloudConsoleStackPath(stackID), nil
}

func (b *cloudBackend) Name() string {
	if b.url == PulumiCloudURL {
		return "pulumi.com"
	}

	return b.url
}

func (b *cloudBackend) CloudURL() string { return b.url }
func (b *cloudBackend) ParseStackReference(s string) (backend.StackReference, error) {
	split := strings.Split(s, "/")
	var owner string
	var stackName string

	if len(split) == 1 {
		stackName = split[0]
	} else if len(split) == 2 {
		owner = split[0]
		stackName = split[1]
	} else {
		return nil, errors.Errorf("could not parse stack name '%s'", s)
	}

	if owner == "" {
		currentUser, userErr := b.client.GetPulumiAccountName(context.Background())
		if userErr != nil {
			return nil, userErr
		}
		owner = currentUser
	}

	return cloudBackendReference{
		owner: owner,
		name:  tokens.QName(stackName),
		b:     b,
	}, nil
}
// CloudConsoleURL returns a link to the cloud console with the given path elements. If a console link cannot be
// created, we return the empty string instead (this can happen if the endpoint isn't a recognized pattern).
func (b *cloudBackend) CloudConsoleURL(paths ...string) string {
	return cloudConsoleURL(b.CloudURL(), paths...)
}

func cloudConsoleURL(cloudURL string, paths ...string) string {
	// To produce a cloud console URL, we assume that the URL is of the form `api.xx.yy`, and simply strip off the
	// `api.` part. If that is not the case, we will return an empty string because we don't recognize the pattern.
	ix := strings.Index(cloudURL, defaultAPIURLPrefix)
	if ix == -1 {
		return ""
	}
	return cloudURL[:ix] + path.Join(append([]string{cloudURL[ix+len(defaultAPIURLPrefix):]}, paths...)...)
}

// cloudConsoleStackPath returns the stack path components for getting to a stack in the cloud console. This path
// must, of course, be combined with the actual console base URL by way of the CloudConsoleURL function above.
func (b *cloudBackend) cloudConsoleStackPath(stackID client.StackIdentifier) string {
	return path.Join(stackID.Owner, stackID.Stack)
}

// Logout logs out of the target cloud URL.
func (b *cloudBackend) Logout() error {
	return workspace.DeleteAccessToken(b.CloudURL())
}
Implement basic plugin management
This change implements basic plugin management, but we do not yet
actually use the plugins for anything (that comes next).
Plugins are stored in `~/.pulumi/plugins`, and are expected to be
in the format `pulumi-<KIND>-<NAME>-v<VERSION>[.exe]`. The KIND is
one of `analyzer`, `language`, or `resource`, the NAME is a hyphen-
delimited name (e.g., `aws` or `foo-bar`), and VERSION is the
plugin's semantic version (e.g., `0.9.11`, `1.3.7-beta.a736cf`, etc).
This commit includes four new CLI commands:
* `pulumi plugin` is the top-level plugin command. It does nothing
but show the help text for associated child commands.
* `pulumi plugin install` can be used to install plugins manually.
If run with no additional arguments, it will compute the set of
plugins used by the current project, and download them all. It
may be run to explicitly download a single plugin, however, by
invoking it as `pulumi plugin install KIND NAME VERSION`. For
example, `pulumi plugin install resource aws v0.9.11`. By default,
this command uses the cloud backend in the usual way to perform the
download, although a separate URL may be given with --cloud-url,
just like all other commands that interact with our backend service.
* `pulumi plugin ls` lists all plugins currently installed in the
plugin cache. It displays some useful statistics, like the size
of the plugin, when it was installed, when it was last used, and
so on. It sorts the display alphabetically by plugin name, and
for plugins with multiple versions, it shows the newest at the top.
The command also summarizes how much disk space is currently being
consumed by the plugin cache. There are no filtering capabilities yet.
* `pulumi plugin prune` will delete plugins from the cache. By
default, when run with no arguments, it will delete everything.
It may be run with additional arguments, KIND, NAME, and VERSION,
each one getting more specific about what it will delete. For
instance, `pulumi plugin prune resource aws` will delete all AWS
plugin versions, while `pulumi plugin prune resource aws <0.9`
will delete all AWS plugins before version 0.9. Unless --yes is
passed, the command will confirm the deletion with a count of how
many plugins will be affected by the command.
We do not yet actually download plugins on demand. That will
come in a subsequent change.
// DownloadPlugin downloads a plugin as a tarball from the release endpoint. The returned reader is a stream
// that reads the tar.gz file, which should be expanded and closed after the download completes. If progress
// is true, the download will display a progress bar using stdout.
func (b *cloudBackend) DownloadPlugin(ctx context.Context, info workspace.PluginInfo,
	progress bool) (io.ReadCloser, error) {

	// Figure out the OS/ARCH pair for the download URL.
	var os string
	switch runtime.GOOS {
	case "darwin", "linux", "windows":
		os = runtime.GOOS
	default:
		return nil, errors.Errorf("unsupported plugin OS: %s", runtime.GOOS)
	}
	var arch string
	switch runtime.GOARCH {
	case "amd64":
		arch = runtime.GOARCH
	default:
		return nil, errors.Errorf("unsupported plugin architecture: %s", runtime.GOARCH)
	}

	// Now make the client request.
	result, size, err := b.client.DownloadPlugin(ctx, info, os, arch)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to download plugin")
	}

	// If progress is requested, and we know the length, show a little animated ASCII progress bar.
	if progress && size != -1 {
		bar := pb.New(int(size))
		result = newBarProxyReadCloser(bar, result)
		bar.Prefix(colors.ColorizeText(colors.SpecUnimportant + "Downloading plugin: "))
		bar.Postfix(colors.ColorizeText(colors.Reset))
		bar.SetMaxWidth(80)
		bar.SetUnits(pb.U_BYTES)
		bar.Start()
	}

	return result, nil
}
func (b *cloudBackend) ListTemplates(ctx context.Context) ([]workspace.Template, error) {
	return b.client.ListTemplates(ctx)
}

func (b *cloudBackend) DownloadTemplate(ctx context.Context, name string, progress bool) (io.ReadCloser, error) {
	result, size, err := b.client.DownloadTemplate(ctx, name)
	if err != nil {
		return nil, errors.Wrap(err, "failed to download template")
	}

	// If progress is requested, and we know the length, show a little animated ASCII progress bar.
	if progress && size != -1 {
		bar := pb.New(int(size))
		result = newBarProxyReadCloser(bar, result)
		bar.Prefix(colors.ColorizeText(colors.SpecUnimportant + "Downloading template: "))
		bar.Postfix(colors.ColorizeText(colors.Reset))
		bar.SetMaxWidth(80)
		bar.SetUnits(pb.U_BYTES)
		bar.Start()
	}

	return result, nil
}
2018-05-08 03:23:03 +02:00
|
|
|
func (b *cloudBackend) GetStack(ctx context.Context, stackRef backend.StackReference) (backend.Stack, error) {
|
2018-04-18 12:19:13 +02:00
|
|
|
stackID, err := b.getCloudStackIdentifier(stackRef)
|
Improve the overall cloud CLI experience
This improves the overall cloud CLI experience workflow.
Now whether a stack is local or cloud is inherent to the stack
itself. If you interact with a cloud stack, we transparently talk
to the cloud; if you interact with a local stack, we just do the
right thing, and perform all operations locally. Aside from sometimes
seeing a cloud emoji pop-up ☁️, the experience is quite similar.
For example, to initialize a new cloud stack, simply:
$ pulumi login
Logging into Pulumi Cloud: https://pulumi.com/
Enter Pulumi access token: <enter your token>
$ pulumi stack init my-cloud-stack
Note that you may log into a specific cloud if you'd like. For
now, this is just for our own testing purposes, but someday when we
support custom clouds (e.g., Enterprise), you can just say:
$ pulumi login --cloud-url https://corp.acme.my-ppc.net:9873
The cloud is now the default. If you instead prefer a "fire and
forget" style of stack, you can skip the login and pass `--local`:
$ pulumi stack init my-faf-stack --local
If you are logged in and run `pulumi`, we tell you as much:
$ pulumi
Usage:
pulumi [command]
// as before...
Currently logged into the Pulumi Cloud ☁️
https://pulumi.com/
And if you list your stacks, we tell you which one is local or not:
$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT CLOUD URL
my-cloud-stack 2017-12-01 ... 3 https://pulumi.com/
my-faf-stack n/a 0 n/a
And `pulumi stack` by itself prints information like your cloud org,
PPC name, and so on, in addition to the usuals.
I shall write up more details and make sure to document these changes.
This change also fairly significantly refactors the layout of cloud
versus local logic, so that the cmd/ package is responsible for CLI
things, and the new pkg/backend/ package is responsible for the
backends. The following is the overall resulting package architecture:
* The backend.Backend interface can be implemented to substitute
a new backend. This has operations to get and list stacks,
perform updates, and so on.
* The backend.Stack struct is a wrapper around a stack that has
or is being manipulated by a Backend. It resembles our existing
Stack notions in the engine, but carries additional metadata
about its source. Notably, it offers functions that allow
operations like updating and deleting on the Backend from which
it came.
* There is very little else in the pkg/backend/ package.
* A new package, pkg/backend/local/, encapsulates all local state
management for "fire and forget" scenarios. It simply implements
the above logic and contains anything specific to the local
experience.
* A peer package, pkg/backend/cloud/, encapsulates all logic
required for the cloud experience. This includes its subpackage
apitype/ which contains JSON schema descriptions required for
REST calls against the cloud backend. It also contains handy
functions to list which clouds we have authenticated with.
* A subpackage here, pkg/backend/state/, is not a provider at all.
Instead, it contains all of the state management functions that
are currently shared between local and cloud backends. This
includes configuration logic -- including encryption -- as well
as logic pertaining to which stacks are known to the workspace.
This addresses pulumi/pulumi#629 and pulumi/pulumi#494.
2017-12-02 16:29:46 +01:00
	if err != nil {
		return nil, err
	}

	stack, err := b.client.GetStack(ctx, stackID)
	if err != nil {
		// If this was a 404, return nil, nil as per this method's contract.
		if errResp, ok := err.(*apitype.ErrorResponse); ok && errResp.Code == http.StatusNotFound {
			return nil, nil
		}
		return nil, err
	}

	return newStack(stack, b), nil
}

Make some updates based on CR feedback
This change implements some feedback from @ellismg.
* Make backend.Stack an interface and let backends implement it,
enabling dynamic type testing/casting to access information
specific to that backend. For instance, the cloud.Stack conveys
the cloud URL, org name, and PPC name, for each stack.
* Similarly expose specialized backend.Backend interfaces,
local.Backend and cloud.Backend, to convey specific information.
* Redo a bunch of the commands in terms of these.
* Keeping with this theme, turn the CreateStack options into an
opaque interface{}, and let the specific backends expose their
own structures with their own settings (like PPC name in cloud).
* Show both the org and PPC names in the cloud column printed in
the stack ls command, in addition to the Pulumi Cloud URL.
Unrelated, but useful:
* Special case the 401 HTTP response and make a friendly error,
to tell the developer they must use `pulumi login`. This is
better than tossing raw "401: Unauthorized" errors in their face.
* Change the "Updating stack '..' in the Pulumi Cloud" message to
use the correct action verb ("Previewing", "Destroying", etc).
2017-12-03 16:51:18 +01:00
// CreateStackOptions is an optional bag of options specific to creating cloud stacks.
type CreateStackOptions struct {
	// CloudName is the optional PPC name to create the stack in. If omitted, the organization's default PPC is used.
	CloudName string
}

func (b *cloudBackend) CreateStack(ctx context.Context, stackRef backend.StackReference,
	opts interface{}) (backend.Stack, error) {
	if opts == nil {
		opts = CreateStackOptions{}
	}

	cloudOpts, ok := opts.(CreateStackOptions)
	if !ok {
		return nil, errors.New("expected a CreateStackOptions value for opts parameter")
	}

	stackID, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return nil, err
	}
	tags, err := backend.GetStackTags()
	if err != nil {
		return nil, errors.Wrap(err, "error determining initial tags")
	}

	apistack, err := b.client.CreateStack(ctx, stackID, cloudOpts.CloudName, tags)
	if err != nil {
		// If the status is 409 Conflict (stack already exists), return StackAlreadyExistsError.
		if errResp, ok := err.(*apitype.ErrorResponse); ok && errResp.Code == http.StatusConflict {
			return nil, &backend.StackAlreadyExistsError{StackName: stackID.Stack}
		}
Make some stack-related CLI improvements (#947)
This change includes a handful of stack-related CLI formatting
improvements that I've been noodling on in the background for a while,
based on things that tend to trip up demos and the inner loop workflow.
This includes:
* If `pulumi stack select` is run by itself, use an interactive
CLI menu to let the user select an existing stack, or choose to
create a new one. This looks as follows
$ pulumi stack select
Please choose a stack, or choose to create a new one:
abcdef
babblabblabble
> currentlyselected
defcon
<create a new stack>
and is navigated in the usual way (key up, down, enter).
* If a stack name is passed that does not exist, prompt the user
to ask whether s/he wants to create one on-demand. This hooks
interesting moments in time, like `pulumi stack select foo`,
and cuts down on the need to run additional commands.
* If a current stack is required, but none is currently selected,
then pop the same interactive menu shown above to select one.
Depending on the command being run, we may or may not show the
option to create a new stack (e.g., that doesn't make much sense
when you're running `pulumi destroy`, but might when you're
running `pulumi stack`). This again lets you do with a single
command what would have otherwise entailed an error with multiple
commands to recover from it.
* If you run `pulumi stack init` without any additional arguments,
we interactively prompt for the stack name. Before, we would
error and you'd then need to run `pulumi stack init <name>`.
* Colorize some things nicely; for example, now all prompts will
by default become bright white.
2018-02-17 00:03:54 +01:00
		return nil, err
	}

	stack := newStack(apistack, b)
	fmt.Printf("Created stack '%s'", stack.Name())
	if !stack.RunLocally() {
		fmt.Printf(" in PPC %s", stack.CloudName())
	}
	fmt.Println(".")

	return stack, nil
}

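CreateStack's `opts interface{}` dance — default a nil value, then type-assert to the backend-specific struct — generalizes to any backend that wants its own options bag. A self-contained sketch of that normalization step (the `cloudCreateOpts` type and `normalizeOpts` helper are illustrative stand-ins, not this package's API):

```go
package main

import (
	"errors"
	"fmt"
)

// cloudCreateOpts stands in for CreateStackOptions: a backend-specific bag
// of settings passed through an opaque interface{} parameter, so the shared
// backend.Backend interface needs no knowledge of cloud-only fields.
type cloudCreateOpts struct {
	CloudName string
}

// normalizeOpts defaults a nil opts to the zero value and rejects values of
// the wrong concrete type, mirroring the checks at the top of CreateStack.
func normalizeOpts(opts interface{}) (cloudCreateOpts, error) {
	if opts == nil {
		opts = cloudCreateOpts{}
	}
	v, ok := opts.(cloudCreateOpts)
	if !ok {
		return cloudCreateOpts{}, errors.New("expected a cloudCreateOpts value for opts parameter")
	}
	return v, nil
}

func main() {
	v, err := normalizeOpts(cloudCreateOpts{CloudName: "ppc-1"})
	fmt.Println(v.CloudName, err == nil)
}
```

The trade-off is that a type mismatch surfaces at runtime rather than compile time, which is why the assertion failure returns a descriptive error instead of panicking.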
func (b *cloudBackend) ListStacks(ctx context.Context, projectFilter *tokens.PackageName) ([]backend.Stack, error) {
	stacks, err := b.client.ListStacks(ctx, projectFilter)
	if err != nil {
		return nil, err
	}

	// Map to a summary slice.
	var results []backend.Stack
	for _, stack := range stacks {
		results = append(results, newStack(stack, b))
	}

	return results, nil
}

func (b *cloudBackend) RemoveStack(ctx context.Context, stackRef backend.StackReference, force bool) (bool, error) {
	stack, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return false, err
	}

	return b.client.DeleteStack(ctx, stack, force)
}

// cloudCrypter is an encrypter/decrypter that uses the Pulumi cloud to encrypt/decrypt a stack's secrets.
type cloudCrypter struct {
	backend *cloudBackend
	stack   client.StackIdentifier
}

func (c *cloudCrypter) EncryptValue(plaintext string) (string, error) {
	ciphertext, err := c.backend.client.EncryptValue(context.Background(), c.stack, []byte(plaintext))
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(ciphertext), nil
}

func (c *cloudCrypter) DecryptValue(cipherstring string) (string, error) {
	ciphertext, err := base64.StdEncoding.DecodeString(cipherstring)
	if err != nil {
		return "", err
	}
	plaintext, err := c.backend.client.DecryptValue(context.Background(), c.stack, ciphertext)
	if err != nil {
		return "", err
	}
	return string(plaintext), nil
}

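cloudCrypter leaves the actual cryptography to the service and only handles base64 framing, so config files stay plain text. A self-contained sketch of that split, with a toy XOR "service" standing in for the Pulumi API (the XOR cipher and all names here are illustrative only — the real service does not work this way):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// fakeService stands in for the service-side encrypt/decrypt endpoints. It
// XORs with a fixed key purely so the round trip is observable; the real
// backend performs genuine encryption server-side.
type fakeService struct{ key byte }

func (s fakeService) transform(in []byte) []byte {
	out := make([]byte, len(in))
	for i, b := range in {
		out[i] = b ^ s.key
	}
	return out
}

// encryptValue mirrors cloudCrypter.EncryptValue: the service returns raw
// ciphertext bytes, and the crypter base64-encodes them for config storage.
func encryptValue(s fakeService, plaintext string) string {
	return base64.StdEncoding.EncodeToString(s.transform([]byte(plaintext)))
}

// decryptValue mirrors cloudCrypter.DecryptValue: base64-decode first, then
// hand the raw ciphertext back to the service.
func decryptValue(s fakeService, cipherstring string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(cipherstring)
	if err != nil {
		return "", err
	}
	return string(s.transform(raw)), nil
}

func main() {
	svc := fakeService{key: 0x5a}
	enc := encryptValue(svc, "s3cr3t")
	dec, err := decryptValue(svc, enc)
	fmt.Println(dec, err == nil)
}
```

Note that a malformed base64 string fails fast in the crypter itself, before any round trip to the service.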
func (b *cloudBackend) GetStackCrypter(stackRef backend.StackReference) (config.Crypter, error) {
	stack, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return nil, err
	}

	return &cloudCrypter{backend: b, stack: stack}, nil
}

2018-04-26 01:49:58 +02:00
|
|
|
var (
|
|
|
|
updateTextMap = map[string]struct {
|
|
|
|
previewText string
|
|
|
|
text string
|
|
|
|
}{
|
2018-05-05 20:57:09 +02:00
|
|
|
string(client.UpdateKindPreview): {"update of", "Previewing"},
|
2018-04-26 01:49:58 +02:00
|
|
|
string(client.UpdateKindUpdate): {"update of", "Updating"},
|
|
|
|
string(client.UpdateKindRefresh): {"refresh of", "Refreshing"},
|
|
|
|
string(client.UpdateKindDestroy): {"destroy of", "Destroying"},
|
|
|
|
string(client.UpdateKindImport): {"import to", "Importing into"},
|
2018-04-14 07:26:01 +02:00
|
|
|
}
|
2018-04-26 01:49:58 +02:00
|
|
|
)

func getActionLabel(key string, dryRun bool) string {
	v := updateTextMap[key]
	contract.Assert(v.previewText != "")
	contract.Assert(v.text != "")

	if dryRun {
		return "Previewing " + v.previewText
	}

	return v.text
}
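The label table pairs a gerund for the real operation with a noun phrase for its preview, so a dry run of any kind reads naturally ("Previewing update of ..."). A standalone sketch of that lookup, with a local copy of the table (names here are local to this sketch, not the real `updateTextMap`):

```go
package main

import "fmt"

// updateText mirrors the previewText/text pairs used by getActionLabel.
var updateText = map[string]struct{ previewText, text string }{
	"update":  {"update of", "Updating"},
	"refresh": {"refresh of", "Refreshing"},
	"destroy": {"destroy of", "Destroying"},
}

// actionLabel picks the preview phrasing for dry runs and the active
// phrasing otherwise.
func actionLabel(key string, dryRun bool) string {
	v := updateText[key]
	if dryRun {
		return "Previewing " + v.previewText
	}
	return v.text
}

func main() {
	fmt.Println(actionLabel("update", true))  // Previewing update of
	fmt.Println(actionLabel("update", false)) // Updating
	fmt.Println(actionLabel("destroy", true)) // Previewing destroy of
}
```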

type response string

const (
	yes     response = "yes"
	no      response = "no"
	details response = "details"
)

// getStack fetches the indicated stack, returning an error if it cannot be found.
func getStack(ctx context.Context, b *cloudBackend, stackRef backend.StackReference) (backend.Stack, error) {
	stack, err := b.GetStack(ctx, stackRef)
	if err != nil {
		return nil, err
	} else if stack == nil {
		return nil, errors.New("stack not found")
	}

	return stack, nil
}

// createDiff renders the given engine events into a single summary diff string.
func createDiff(events []engine.Event, displayOpts backend.DisplayOptions) string {
	buff := &bytes.Buffer{}

	seen := make(map[resource.URN]engine.StepEventMetadata)
	displayOpts.SummaryDiff = true

	for _, e := range events {
		msg := local.RenderDiffEvent(e, seen, displayOpts)
		if msg != "" {
			if e.Type == engine.SummaryEvent {
				msg = "\n" + msg
			}

			_, err := buff.WriteString(msg)
			contract.IgnoreError(err)
		}
	}

	return strings.TrimSpace(buff.String())
}
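The shape of `createDiff` — render each event, set the summary apart with a blank line, and trim the accumulated buffer — can be sketched without the engine types; `event` and `renderAll` below are local stand-ins for `engine.Event` and the render loop, not real API:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// event is a minimal stand-in for engine.Event in this sketch.
type event struct {
	kind, text string
}

// renderAll mirrors createDiff's buffering: skip empty renderings, prefix the
// summary with a blank line, and trim surrounding whitespace at the end.
func renderAll(events []event) string {
	buff := &bytes.Buffer{}
	for _, e := range events {
		msg := e.text
		if msg == "" {
			continue
		}
		if e.kind == "summary" {
			msg = "\n" + msg
		}
		buff.WriteString(msg)
	}
	return strings.TrimSpace(buff.String())
}

func main() {
	out := renderAll([]event{
		{"pre", "+ aws:s3:Bucket my-bucket\n"},
		{"summary", "1 resource to create\n"},
	})
	fmt.Println(out)
}
```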

func (b *cloudBackend) PreviewThenPrompt(
	ctx context.Context, updateKind client.UpdateKind, stack backend.Stack, pkg *workspace.Project, root string,
	m backend.UpdateMetadata, opts backend.UpdateOptions,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, bool, error) {

	// Create a channel to hear about the update events from the engine. This will be used so that
	// we can build up the diff display in case the user asks to see the details of the diff.
	eventsChannel := make(chan engine.Event)
	defer func() {
		close(eventsChannel)
	}()

	events := []engine.Event{}
	go func() {
		// Pull the events from the channel and store them locally.
		for e := range eventsChannel {
			if e.Type == engine.ResourcePreEvent ||
				e.Type == engine.ResourceOutputsEvent ||
				e.Type == engine.SummaryEvent {

				events = append(events, e)
			}
		}
	}()

	// Perform the update operations, passing true for dryRun, so that we get a preview.
	changes, hasChanges := engine.ResourceChanges(nil), true
	if !opts.SkipPreview {
		c, err := b.updateStack(
			ctx, updateKind, stack, pkg, root, m, opts, eventsChannel, true /*dryRun*/, scopes)
		if err != nil {
			return c, false, err
		}

		// TODO(ellismg)[pulumi/pulumi#1347]: Work around 1347 by forcing a choice when running a preview against a PPC.
		changes, hasChanges = c, c.HasChanges() || !stack.(Stack).RunLocally()
	}

	// If there are no changes, or we're auto-approving or just previewing, we can skip the confirmation prompt.
	if !hasChanges || opts.AutoApprove || updateKind == client.UpdateKindPreview {
		return changes, hasChanges, nil
	}

	// Otherwise, ensure the user wants to proceed.
	return changes, hasChanges, confirmBeforeUpdating(updateKind, stack, events, opts)
}

// confirmBeforeUpdating asks the user whether to proceed. A nil error means yes.
func confirmBeforeUpdating(updateKind client.UpdateKind, stack backend.Stack,
	events []engine.Event, opts backend.UpdateOptions) error {
	for {
		var response string

		surveycore.DisableColor = true
		surveycore.QuestionIcon = ""
		surveycore.SelectFocusIcon = colors.ColorizeText(colors.BrightGreen + ">" + colors.Reset)

		choices := []string{string(yes), string(no)}

		// If this is a managed stack, then we can offer the details of the operation, as we will
		// have been able to collect them while the preview ran. For PPC stacks, we don't have
		// that information, since all the PPC does is forward stdout events to us.
		if stack.(Stack).RunLocally() && !opts.SkipPreview {
			choices = append(choices, string(details))
		}

		var previewWarning string
		if opts.SkipPreview {
			previewWarning = colors.SpecWarning + " without a preview" + colors.BrightWhite
		}

		if err := survey.AskOne(&survey.Select{
			Message: "\b" + colors.ColorizeText(
				colors.BrightWhite+fmt.Sprintf("Do you want to perform this %s%s?",
					updateKind, previewWarning)+colors.Reset),
			Options: choices,
			Default: string(no),
		}, &response, nil); err != nil {
			return errors.Wrapf(err, "confirmation cancelled, not proceeding with the %s", updateKind)
		}

		if response == string(no) {
			return errors.Errorf("confirmation declined, not proceeding with the %s", updateKind)
		}

		if response == string(yes) {
			return nil
		}

		if response == string(details) {
			diff := createDiff(events, opts.Display)
			_, err := os.Stdout.WriteString(diff + "\n\n")
			contract.IgnoreError(err)
			continue
		}
	}
}

func (b *cloudBackend) PreviewThenPromptThenExecute(
	ctx context.Context, updateKind client.UpdateKind, stackRef backend.StackReference, pkg *workspace.Project,
	root string, m backend.UpdateMetadata, opts backend.UpdateOptions,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {

	// First get the stack.
	stack, err := getStack(ctx, b, stackRef)
	if err != nil {
		return nil, err
	}

	if !stack.(Stack).RunLocally() &&
		(updateKind == client.UpdateKindDestroy || updateKind == client.UpdateKindRefresh) {
		// The service does not support previews for PPC stacks, other than for updates. So skip the preview.
		opts.SkipPreview = true
	}

	// Preview the operation to the user and ask them if they want to proceed.
	changes, hasChanges, err := b.PreviewThenPrompt(ctx, updateKind, stack, pkg, root, m, opts, scopes)
	if err != nil || !hasChanges || updateKind == client.UpdateKindPreview {
		return changes, err
	}

	// Now do the real operation. We don't care about the events it issues, so just pass a nil channel along.
	return b.updateStack(ctx, updateKind, stack, pkg, root, m, opts, nil, false /*dryRun*/, scopes)
}

func (b *cloudBackend) Preview(ctx context.Context, stackRef backend.StackReference, pkg *workspace.Project,
	root string, m backend.UpdateMetadata, opts backend.UpdateOptions,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {
	return b.PreviewThenPromptThenExecute(ctx, client.UpdateKindPreview, stackRef, pkg, root, m, opts, scopes)
}

func (b *cloudBackend) Update(ctx context.Context, stackRef backend.StackReference, pkg *workspace.Project,
	root string, m backend.UpdateMetadata, opts backend.UpdateOptions,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {
	return b.PreviewThenPromptThenExecute(ctx, client.UpdateKindUpdate, stackRef, pkg, root, m, opts, scopes)
}

func (b *cloudBackend) Refresh(ctx context.Context, stackRef backend.StackReference, pkg *workspace.Project,
	root string, m backend.UpdateMetadata, opts backend.UpdateOptions,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {
	return b.PreviewThenPromptThenExecute(ctx, client.UpdateKindRefresh, stackRef, pkg, root, m, opts, scopes)
}

func (b *cloudBackend) Destroy(ctx context.Context, stackRef backend.StackReference, pkg *workspace.Project,
	root string, m backend.UpdateMetadata, opts backend.UpdateOptions,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {
	return b.PreviewThenPromptThenExecute(ctx, client.UpdateKindDestroy, stackRef, pkg, root, m, opts, scopes)
}

func (b *cloudBackend) createAndStartUpdate(
	ctx context.Context, action client.UpdateKind, stackRef backend.StackReference,
	pkg *workspace.Project, root string, m backend.UpdateMetadata,
	opts backend.UpdateOptions, dryRun bool) (client.UpdateIdentifier, int, string, error) {

	stack, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return client.UpdateIdentifier{}, 0, "", err
	}
	programContext, main, err := getContextAndMain(pkg, root)
	if err != nil {
		return client.UpdateIdentifier{}, 0, "", err
	}
	workspaceStack, err := workspace.DetectProjectStack(stackRef.StackName())
	if err != nil {
		return client.UpdateIdentifier{}, 0, "", errors.Wrap(err, "getting configuration")
	}
	metadata := apitype.UpdateMetadata{
		Message:     m.Message,
		Environment: m.Environment,
	}
	getContents := func() (io.ReadCloser, int64, error) {
		const showProgress = true
		return getUpdateContents(programContext, pkg.UseDefaultIgnores(), showProgress)
	}
	update, err := b.client.CreateUpdate(
		ctx, action, stack, pkg, workspaceStack.Config, main, metadata, opts.Engine, dryRun, getContents)
	if err != nil {
		return client.UpdateIdentifier{}, 0, "", err
	}

	// Start the update. We use this opportunity to pass new tags to the service, to pick up any
	// metadata changes.
	tags, err := backend.GetStackTags()
	if err != nil {
		return client.UpdateIdentifier{}, 0, "", errors.Wrap(err, "getting stack tags")
	}
	version, token, err := b.client.StartUpdate(ctx, update, tags)
	if err != nil {
		return client.UpdateIdentifier{}, 0, "", err
	}
	if action == client.UpdateKindUpdate {
		logging.V(7).Infof("Stack %s being updated to version %d", stackRef, version)
	}

	return update, version, token, nil
}

// updateStack performs the provided type of update on a stack hosted in the Pulumi Cloud.
func (b *cloudBackend) updateStack(
	ctx context.Context, action client.UpdateKind, stack backend.Stack, pkg *workspace.Project,
	root string, m backend.UpdateMetadata, opts backend.UpdateOptions,
	callerEventsOpt chan<- engine.Event, dryRun bool,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {

	// Print a banner so it's clear this is going to the cloud.
	actionLabel := getActionLabel(string(action), dryRun)
	fmt.Printf(
		colors.ColorizeText(colors.BrightMagenta+"%s stack '%s'"+colors.Reset+"\n"),
		actionLabel, stack.Name())

	// Create an update object (except if this won't yield an update; i.e., doing a local preview).
	var update client.UpdateIdentifier
	var version int
	var token string
	var err error
	if !stack.(Stack).RunLocally() || !dryRun {
		update, version, token, err = b.createAndStartUpdate(ctx, action, stack.Name(), pkg, root, m, opts, dryRun)
	}
	if err != nil {
		return nil, err
	}

	if version != 0 {
		// Print a permalink to this version of the update afterwards.
		base := b.cloudConsoleStackPath(update.StackIdentifier)
		if link := b.CloudConsoleURL(base, "updates", strconv.Itoa(version)); link != "" {
			defer func() {
				fmt.Printf(
					colors.ColorizeText(
						colors.BrightMagenta+"Permalink: %s"+colors.Reset+"\n"), link)
			}()
		}
	}

	// If we are targeting a stack that uses local operations, run the appropriate engine action locally.
	if stack.(Stack).RunLocally() {
		return b.runEngineAction(
			ctx, action, stack.Name(), pkg, root, opts, update, token, callerEventsOpt, dryRun, scopes)
	}

	// Otherwise, wait for the update to complete while rendering its events to stdout/stderr.
	status, err := b.waitForUpdate(ctx, actionLabel, update, opts.Display)
	if err != nil {
		return nil, errors.Wrapf(err, "waiting for %s", action)
	} else if status != apitype.StatusSucceeded {
		return nil, errors.Errorf("%s unsuccessful: status %v", action, status)
	}

	return nil, nil
}
|
|
|
|
|
2018-01-31 02:57:48 +01:00

// getUpdateContents archives the "current" Pulumi program, returning a reader over its contents
// along with its size. "current" means whatever Pulumi program is found in the CWD or a parent
// directory. If progress is set, an animated ASCII progress bar shows the upload as it is read.
func getUpdateContents(context string, useDefaultIgnores bool, progress bool) (io.ReadCloser, int64, error) {
	archiveContents, err := archive.Process(context, useDefaultIgnores)
	if err != nil {
		return nil, 0, errors.Wrap(err, "creating archive")
	}

	archiveReader := ioutil.NopCloser(archiveContents)

	// If progress is requested, show a little animated ASCII progress bar.
	if progress {
		bar := pb.New(archiveContents.Len())
		archiveReader = newBarProxyReadCloser(bar, archiveReader)
		bar.Prefix(colors.ColorizeText(colors.SpecUnimportant + "Uploading program: "))
		bar.Postfix(colors.ColorizeText(colors.Reset))
		bar.SetMaxWidth(80)
		bar.SetUnits(pb.U_BYTES)
		bar.Start()
	}

	return archiveReader, int64(archiveContents.Len()), nil
}

func (b *cloudBackend) runEngineAction(
	ctx context.Context, action client.UpdateKind, stackRef backend.StackReference, pkg *workspace.Project,
	root string, opts backend.UpdateOptions, update client.UpdateIdentifier, token string,
	callerEventsOpt chan<- engine.Event, dryRun bool,
	scopes backend.CancellationScopeSource) (engine.ResourceChanges, error) {

	contract.Assertf(dryRun || token != "", "expected a non-empty token when doing a non-dryrun update")

	u, err := b.newUpdate(ctx, stackRef, pkg, root, update, token)
	if err != nil {
		return nil, err
	}

	persister := b.newSnapshotPersister(ctx, u.update, u.tokenSource)
	manager := backend.NewSnapshotManager(persister, u.GetTarget().Snapshot)
	displayEvents := make(chan engine.Event)
	displayDone := make(chan bool)

	go u.RecordAndDisplayEvents(getActionLabel(string(action), dryRun), displayEvents, displayDone, opts.Display)

	engineEvents := make(chan engine.Event)

	scope := scopes.NewScope(engineEvents, dryRun)
	defer scope.Close()

	eventsDone := make(chan bool)
	go func() {
		// Pull in all events from the engine and send them to the two listeners.
		for e := range engineEvents {
			displayEvents <- e

			if callerEventsOpt != nil {
				callerEventsOpt <- e
			}
		}

		close(eventsDone)
	}()
Implement a refresh command
This change implements a `pulumi refresh` command. It operates a bit
like `pulumi update`, and friends, in that it supports `--preview` and
`--diff`, along with the usual flags, and will update your checkpoint.
It works through substitution of the deploy.Source abstraction, which
generates a sequence of resource registration events. This new
deploy.RefreshSource takes in a prior checkpoint and will walk it,
refreshing the state via the associated resource providers by invoking
Read for each resource encountered, and merging the resulting state with
the prior checkpoint, to yield a new resource.Goal state. This state is
then fed through the engine in the usual ways with a few minor caveats:
namely, although the engine must generate steps for the logical
operations (permitting us to get nice summaries, progress, and diffs),
it mustn't actually carry them out because the state being imported
already reflects reality (a deleted resource has *already* been deleted,
so of course the engine need not perform the deletion). The diffing
logic also needs to know how to treat the case of refresh slightly
differently, because we are going to be diffing outputs and not inputs.
Note that support for managed stacks is not yet complete, since that
requires updates to the service to support a refresh endpoint. That
will be coming soon ...
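The merge step described above — combining freshly read provider state with the prior checkpoint — can be illustrated with a toy property map, where live values win. This is a simplified sketch, not the engine's actual merge logic:

```go
package main

import "fmt"

// mergeState is a simplified sketch: refresh reads live resource state and
// merges it over the prior checkpoint, with freshly read values winning.
func mergeState(prior, live map[string]string) map[string]string {
	merged := make(map[string]string, len(prior))
	for k, v := range prior {
		merged[k] = v
	}
	for k, v := range live {
		merged[k] = v // live state overrides the checkpoint's record
	}
	return merged
}

func main() {
	prior := map[string]string{"instanceType": "t2.micro", "tag": "v1"}
	live := map[string]string{"instanceType": "t2.small"} // changed out-of-band
	fmt.Println(mergeState(prior, live))
}
```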

	// Depending on the action, kick off the relevant engine activity. Note that we don't immediately check and
	// return error conditions, because we will do so below after waiting for the display channels to close.
	var changes engine.ResourceChanges
	engineCtx := &engine.Context{Cancel: scope.Context(), Events: engineEvents, SnapshotManager: manager}
	if parentSpan := opentracing.SpanFromContext(ctx); parentSpan != nil {
		engineCtx.ParentSpan = parentSpan.Context()
	}

	switch action {
	case client.UpdateKindPreview:
		changes, err = engine.Update(u, engineCtx, opts.Engine, true)
	case client.UpdateKindUpdate:
		changes, err = engine.Update(u, engineCtx, opts.Engine, dryRun)
	case client.UpdateKindRefresh:
		changes, err = engine.Refresh(u, engineCtx, opts.Engine, dryRun)
	case client.UpdateKindDestroy:
		changes, err = engine.Destroy(u, engineCtx, opts.Engine, dryRun)
	default:
		contract.Failf("Unrecognized action type: %s", action)
	}

	// Wait for the display to finish showing all the events.
	<-displayDone
	close(engineEvents)
	close(displayEvents)
	close(displayDone)
	contract.IgnoreClose(manager)

	// Make sure that the goroutine writing to displayEvents and callerEventsOpt
	// has exited before proceeding.
	<-eventsDone

	if !dryRun {
		status := apitype.UpdateStatusSucceeded
		if err != nil {
			status = apitype.UpdateStatusFailed
		}

		completeErr := u.Complete(status)
		if completeErr != nil {
			err = multierror.Append(err, completeErr)
		}
	}

	return changes, err
}
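The event-forwarding goroutine in runEngineAction is an instance of a simple channel fan-out: drain one source and forward each item to every listener. A self-contained sketch of the pattern (fanOut is a hypothetical helper, simplified to strings):

```go
package main

import "fmt"

// fanOut drains src and forwards every event to each destination channel,
// closing the destinations when src is exhausted.
func fanOut(src <-chan string, dsts ...chan<- string) {
	for e := range src {
		for _, d := range dsts {
			d <- e
		}
	}
	for _, d := range dsts {
		close(d)
	}
}

func main() {
	src := make(chan string, 2)
	display := make(chan string, 2)
	caller := make(chan string, 2)
	src <- "update-start"
	src <- "update-complete"
	close(src)

	fanOut(src, display, caller)
	for e := range display {
		fmt.Println("display:", e)
	}
	for e := range caller {
		fmt.Println("caller:", e)
	}
}
```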

func (b *cloudBackend) CancelCurrentUpdate(ctx context.Context, stackRef backend.StackReference) error {
	stackID, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return err
	}
	stack, err := b.client.GetStack(ctx, stackID)
	if err != nil {
		return err
	}

	// Compute the update identifier and attempt to cancel the update.
	//
	// NOTE: the update kind is not relevant; the same endpoint will work for updates of all kinds.
	updateID := client.UpdateIdentifier{
		StackIdentifier: stackID,
		UpdateKind:      client.UpdateKindUpdate,
		UpdateID:        stack.ActiveUpdate,
	}
	return b.client.CancelUpdate(ctx, updateID)
}

func (b *cloudBackend) GetHistory(ctx context.Context, stackRef backend.StackReference) ([]backend.UpdateInfo, error) {
	stack, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return nil, err
	}

	updates, err := b.client.GetStackUpdates(ctx, stack)
	if err != nil {
		return nil, err
	}

	// Convert apitype.UpdateInfo objects to the backend type.
	var beUpdates []backend.UpdateInfo
	for _, update := range updates {
		// Convert types from the apitype package into their internal counterparts.
		cfg, err := convertConfig(update.Config)
		if err != nil {
			return nil, errors.Wrap(err, "converting configuration")
		}

		beUpdates = append(beUpdates, backend.UpdateInfo{
			Kind:            backend.UpdateKind(update.Kind),
			Message:         update.Message,
			Environment:     update.Environment,
			Config:          cfg,
			Result:          backend.UpdateResult(update.Result),
			StartTime:       update.StartTime,
			EndTime:         update.EndTime,
			ResourceChanges: convertResourceChanges(update.ResourceChanges),
		})
	}

	return beUpdates, nil
}

func (b *cloudBackend) GetLatestConfiguration(ctx context.Context,
	stackRef backend.StackReference) (config.Map, error) {

	stackID, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return nil, err
	}

	return b.client.GetLatestConfiguration(ctx, stackID)
}

// convertResourceChanges converts the apitype version of engine.ResourceChanges into the internal version.
func convertResourceChanges(changes map[apitype.OpType]int) engine.ResourceChanges {
	b := make(engine.ResourceChanges)
	for k, v := range changes {
		b[deploy.StepOp(k)] = v
	}
	return b
}

// convertConfig converts the apitype version of config.Map into the internal version.
func convertConfig(apiConfig map[string]apitype.ConfigValue) (config.Map, error) {
	c := make(config.Map)
	for rawK, rawV := range apiConfig {
		k, err := config.ParseKey(rawK)
		if err != nil {
			return nil, err
		}
		if rawV.Secret {
			c[k] = config.NewSecureValue(rawV.String)
		} else {
			c[k] = config.NewValue(rawV.String)
		}
	}
	return c, nil
}

func (b *cloudBackend) GetLogs(ctx context.Context, stackRef backend.StackReference,
	logQuery operations.LogQuery) ([]operations.LogEntry, error) {

	stack, err := b.GetStack(ctx, stackRef)
	if err != nil {
		return nil, err
	}
	if stack == nil {
		return nil, errors.New("stack not found")
	}

	// If we're dealing with a stack that runs its operations locally, get the stack's target and fetch
	// the logs directly.
	if stack.(Stack).RunLocally() {
		target, targetErr := b.getTarget(ctx, stackRef)
		if targetErr != nil {
			return nil, targetErr
		}
		return local.GetLogsForTarget(target, logQuery)
	}

	// Otherwise, fetch the logs from the service.
	stackID, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return nil, err
	}
	return b.client.GetStackLogs(ctx, stackID, logQuery)
}

func (b *cloudBackend) ExportDeployment(ctx context.Context,
	stackRef backend.StackReference) (*apitype.UntypedDeployment, error) {

	stack, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return nil, err
	}

	deployment, err := b.client.ExportStackDeployment(ctx, stack)
	if err != nil {
		return nil, err
	}

	return &deployment, nil
}

func (b *cloudBackend) ImportDeployment(ctx context.Context, stackRef backend.StackReference,
	deployment *apitype.UntypedDeployment) error {

	stack, err := b.getCloudStackIdentifier(stackRef)
	if err != nil {
		return err
	}

	update, err := b.client.ImportStackDeployment(ctx, stack, deployment.Deployment)
	if err != nil {
		return err
	}

	// Wait for the import to complete, which also polls and renders event output to STDOUT.
	status, err := b.waitForUpdate(
		ctx, getActionLabel("import", false /*dryRun*/), update,
		backend.DisplayOptions{Color: colors.Always})
	if err != nil {
		return errors.Wrap(err, "waiting for import")
	} else if status != apitype.StatusSucceeded {
		return errors.Errorf("import unsuccessful: status %v", status)
	}
	return nil
}

// getCloudStackIdentifier returns the client.StackIdentifier for the given stack reference, filling in
// the currently logged-in account as the owner when the reference does not specify one.
func (b *cloudBackend) getCloudStackIdentifier(stackRef backend.StackReference) (client.StackIdentifier, error) {
	owner := stackRef.(cloudBackendReference).owner
	var err error
Remove the need to `pulumi init` for the local backend
This change removes the need to `pulumi init` when targeting the local
backend. A fair amount of the change lays the foundation that the next
set of changes to stop having `pulumi init` be used for cloud stacks
as well.
Previously, `pulumi init` logically did two things:
1. It created the bookkeeping directory for local stacks, this was
stored in `<repository-root>/.pulumi`, where `<repository-root>` was
the path to what we belived the "root" of your project was. In the
case of git repositories, this was the directory that contained your
`.git` folder.
2. It recorded repository information in
`<repository-root>/.pulumi/repository.json`. This was used by the
cloud backend when computing what project to interact with on
Pulumi.com
The new identity model will remove the need for (2), since we only
need an owner and stack name to fully qualify a stack on
pulumi.com, so it's easy enough to stop creating a folder just for
that.
However, for the local backend, we need to continue to retain some
information about stacks (e.g. checkpoints, history, etc). In
addition, we need to store our workspace settings (which today just
contains the selected stack) somewhere.
For state stored by the local backend, we change the URL scheme from
`local://` to `local://<optional-root-path>`. When
`<optional-root-path>` is unset, it defaults to `$HOME`. We create our
`.pulumi` folder in that directory. This is important because stack
names now must be unique within the backend, but we have some tests
using local stacks which use fixed stack names, so each integration
test really wants its own "view" of the world.
For the workspace settings, we introduce a new `workspaces` directory
in `~/.pulumi`. In this folder we write the workspace settings file
for each project. The file name is the name of the project, combined
with the SHA1 of the path of the project file on disk, to ensure that
multiple pulumi programs with the same project name have different
workspace settings.
This does mean that moving a project's location on disk will cause the
CLI to "forget" what the selected stack was, which is unfortunate, but
not the end of the world. If this ends up being a big pain point, we
can certainly try to play games in the future (for example, if we saw
a .git folder in a parent folder, we could store data in there).
With respect to compatibility, we don't attempt to migrate older files
to their newer locations. For long lived stacks managed using the
local backend, we can provide information on where to move things
to. For all stacks (regardless of backend) we'll require the user to
`pulumi stack select` their stack again, but that seems like the
correct trade-off vs writing complicated upgrade code.
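The workspace-settings naming scheme described above — project name plus a SHA1 of the project file's path — might look like this sketch (settingsFileName is a hypothetical helper; the real file layout may differ):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"path/filepath"
)

// settingsFileName combines the project name with the SHA1 of the project
// file's path, so projects with the same name in different directories get
// distinct workspace settings files.
func settingsFileName(projectName, projectPath string) string {
	sum := sha1.Sum([]byte(filepath.Clean(projectPath)))
	return fmt.Sprintf("%s-%x-workspace.json", projectName, sum)
}

func main() {
	a := settingsFileName("myproj", "/home/alice/app/Pulumi.yaml")
	b := settingsFileName("myproj", "/home/bob/app/Pulumi.yaml")
	fmt.Println(a != b) // same project name, different paths, distinct files
}
```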

	if owner == "" {
		owner, err = b.client.GetPulumiAccountName(context.Background())
		if err != nil {
			return client.StackIdentifier{}, err
		}
	}

	return client.StackIdentifier{
		Owner: owner,
		Stack: string(stackRef.StackName()),
	}, nil
}

type DisplayEventType string

const (
	UpdateEvent   DisplayEventType = "UpdateEvent"
	ShutdownEvent DisplayEventType = "Shutdown"
)

type displayEvent struct {
	Kind    DisplayEventType
	Payload interface{}
}

// waitForUpdate waits for the given update of a Pulumi program to reach a terminal state, polling the
// service for events along the way, and returns the final status.
func (b *cloudBackend) waitForUpdate(ctx context.Context, actionLabel string, update client.UpdateIdentifier,
	displayOpts backend.DisplayOptions) (apitype.UpdateStatus, error) {

	events, done := make(chan displayEvent), make(chan bool)
	defer func() {
		events <- displayEvent{Kind: ShutdownEvent, Payload: nil}
		<-done
		close(events)
		close(done)
	}()
	go displayEvents(strings.ToLower(actionLabel), events, done, displayOpts)

	// The UpdateEvents API returns a continuation token to only get events after the previous call.
	var continuationToken *string
	for {
Make the CLI's waitForUpdates more resilient to transient failure
We saw an issue where a user was mid-update, and got a networking
error stating `read: operation timed out`. We believe this was simply
a local client error, due to a flaky network. We should be resilient
to such things during updates, particularly when there's no way to
"reattach" to an in-progress update (see pulumi/pulumi#762).
This change accomplishes this by changing our retry logic in the
cloud backend's waitForUpdates function. Namely:
* We recognize three types of failure, and react differently:
- Expected HTTP errors. For instance, the 504 Gateway Timeouts
that we already retried in the face of. In these cases, we will
silently retry up to 10 times. After 10 times, we begin warning
the user just in case this is a persistent condition.
- Unexpected HTTP errors. The CLI will quit immediately and issue
an error to the user, in the usual ways. This covers
Unauthorized among other things. Over time, we may find that we
want to intentionally move some HTTP errors into the above.
- Anything else. This covers the transient networking errors case
that we have just seen. I'll admit, it's a wide net, but any
instance of this error issues a warning and it's up to the user
to ^C out of it. We also log the error so that we'll see it if
the user shares their logs with us.
* We implement backoff logic so that we retry very quickly (100ms)
on the first failure, and more slowly thereafter (1.5x, up to a max
of 5 seconds). This helps to avoid accidentally DoSing our service.
		// Query for the latest update results, including log entries so we can provide active status updates.
		_, results, err := retry.Until(context.Background(), retry.Acceptor{
			Accept: func(try int, nextRetryTime time.Duration) (bool, interface{}, error) {
				return b.tryNextUpdate(ctx, update, continuationToken, try, nextRetryTime)
			},
		})
		if err != nil {
			return apitype.StatusFailed, err
		}

		// We got a result, print it out.
		updateResults := results.(apitype.UpdateResults)
		for _, event := range updateResults.Events {
			events <- displayEvent{Kind: UpdateEvent, Payload: event}
		}

		continuationToken = updateResults.ContinuationToken
		// A nil continuation token means there are no more events to read and the update has finished.
		if continuationToken == nil {
			return updateResults.Status, nil
		}
	}
}
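The continuation-token loop in waitForUpdate follows a standard pagination pattern: keep fetching with the last token until the service returns a nil token. A self-contained sketch with a stubbed fetch standing in for the UpdateEvents API:

```go
package main

import "fmt"

// page and fetch are stand-ins for the UpdateEvents API: each call returns a
// batch of events plus a continuation token; a nil token means the update has
// finished and there is nothing more to read.
type page struct {
	events []string
	next   *int
}

func fetch(token *int) page {
	batches := [][]string{{"creating"}, {"created"}, {"done"}}
	i := 0
	if token != nil {
		i = *token
	}
	p := page{events: batches[i]}
	if i+1 < len(batches) {
		n := i + 1
		p.next = &n
	}
	return p
}

func main() {
	var all []string
	var token *int
	for {
		p := fetch(token)
		all = append(all, p.events...)
		if token = p.next; token == nil {
			break
		}
	}
	fmt.Println(all) // every batch, in order
}
```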

func displayEvents(
	action string, events <-chan displayEvent, done chan<- bool, opts backend.DisplayOptions) {

	prefix := fmt.Sprintf("%s%s...", cmdutil.EmojiOr("✨ ", "@ "), action)
	spinner, ticker := cmdutil.NewSpinnerAndTicker(prefix, nil, 8 /*timesPerSecond*/)

	defer func() {
		spinner.Reset()
		ticker.Stop()
		done <- true
	}()

	for {
		select {
		case <-ticker.C:
			spinner.Tick()
		case event := <-events:
			if event.Kind == ShutdownEvent {
				return
			}

			// Pluck out the string.
			payload := event.Payload.(apitype.UpdateEvent)
			if raw, ok := payload.Fields["text"]; ok && raw != nil {
				if text, ok := raw.(string); ok {
					text = opts.Color.Colorize(text)

					// Choose the stream to write to (by default stdout).
					var stream io.Writer
					if payload.Kind == apitype.StderrEvent {
						stream = os.Stderr
					} else {
						stream = os.Stdout
					}

					if text != "" {
						spinner.Reset()
						fmt.Fprint(stream, text)
					}
				}
			}
		}
	}
}

// tryNextUpdate tries to get the next update for a Pulumi program. This may time out or error out, which results in
// false being returned in the first return value. If a non-nil error is returned, this operation should fail.
func (b *cloudBackend) tryNextUpdate(ctx context.Context, update client.UpdateIdentifier, continuationToken *string,
	try int, nextRetryTime time.Duration) (bool, interface{}, error) {

	// If there is no error, we're done.
	results, err := b.client.GetUpdateEvents(ctx, update, continuationToken)
	if err == nil {
		return true, results, nil
	}

	// There are three kinds of errors we might see:
	//     1) Expected HTTP errors (like timeouts); silently retry.
	//     2) Unexpected HTTP errors (like Unauthorized, etc.); exit with an error.
	//     3) Anything else; this could be any number of things, including transient errors (flaky network).
	//        In this case, we warn the user and keep retrying; they can ^C if it's not transient.
	warn := true
	if errResp, ok := err.(*apitype.ErrorResponse); ok {
		if errResp.Code == 504 {
			// If our request to the Pulumi Service returned a 504 (Gateway Timeout), ignore it and keep
			// continuing. The sole exception is if we've done this 10 times. At that point, we will have
			// been waiting for many seconds, and want to let the user know something might be wrong.
			// TODO(pulumi/pulumi-ppc/issues/60): Eliminate these timeouts altogether.
			if try < 10 {
				warn = false
			}
			logging.V(3).Infof("Expected %s HTTP %d error after %d retries (retrying): %v",
				b.CloudURL(), errResp.Code, try, err)
		} else {
			// Otherwise, we will issue an error.
			logging.V(3).Infof("Unexpected %s HTTP %d error after %d retries (erroring): %v",
				b.CloudURL(), errResp.Code, try, err)
			return false, nil, err
		}
	} else {
		logging.V(3).Infof("Unexpected %s error after %d retries (retrying): %v", b.CloudURL(), try, err)
	}

	// Issue a warning if appropriate.
	if warn {
		b.d.Warningf(diag.Message("" /*urn*/, "error querying update status: %v"), err)
		b.d.Warningf(diag.Message("" /*urn*/, "retrying in %vs... ^C to stop (this will not cancel the update)"),
			nextRetryTime.Seconds())
	}

	return false, nil, nil
}

// IsValidAccessToken tries to use the provided Pulumi access token and returns whether it is accepted
// or not. Returns an error on any unexpected error.
func IsValidAccessToken(ctx context.Context, cloudURL, accessToken string) (bool, error) {
	// Make a request to get the authenticated user. If it returns a successful response,
	// we know the access token is legit. We also parse the response as JSON and confirm
	// it has a githubLogin field that is non-empty (like the Pulumi Service would return).
	_, err := client.NewClient(cloudURL, accessToken).GetPulumiAccountName(ctx)
	if err != nil {
		if errResp, ok := err.(*apitype.ErrorResponse); ok && errResp.Code == 401 {
			return false, nil
		}
		return false, errors.Wrapf(err, "getting user info from %v", cloudURL)
	}
	return true, nil
}