pulumi/pkg/workspace/workspace.go

// Copyright 2016-2018, Pulumi Corporation. All rights reserved.

package workspace

import (
"crypto/sha1"
"encoding/hex"
"encoding/json"
"io/ioutil"
"os"
"os/user"
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/pulumi/pulumi/pkg/resource/config"
"github.com/pulumi/pulumi/pkg/tokens"
"github.com/pulumi/pulumi/pkg/util/contract"
)
// W offers functionality for interacting with Pulumi workspaces.
type W interface {
	Settings() *Settings                        // returns a mutable pointer to the optional workspace settings info.
	Repository() *Repository                    // returns the repository this project belongs to.
	StackPath(stack tokens.QName) string        // returns the path to store stack information.
	BackupDirectory() (string, error)           // returns the directory to store backup stack files.
	HistoryDirectory(stack tokens.QName) string // returns the directory to store a stack's history information.
	Save() error                                // saves any modifications to the workspace.
}

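// Illustrative usage sketch (the stack name "production" is hypothetical):
// obtain a workspace for the current directory, then resolve where that
// stack's state and history live on disk.
//
//	w, err := New()
//	if err != nil {
//		// no .pulumi repository or Pulumi.yaml project was found
//	}
//	stateFile := w.StackPath("production")         // JSON file holding stack state
//	historyDir := w.HistoryDirectory("production") // directory of past updates
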
type projectWorkspace struct {
	name     tokens.PackageName // the package this workspace is associated with.
	project  string             // the path to the Pulumi.[yaml|json] file for this project.
	settings *Settings          // settings for this workspace.
	repo     *Repository        // the repo this workspace is associated with.
}

// New creates a new workspace using the current working directory.
func New() (W, error) {
	cwd, err := os.Getwd()
	if err != nil {
		return nil, err
	}
	return NewFrom(cwd)
}

// NewFrom creates a new Pulumi workspace for the given directory. It requires that a Pulumi.yaml project file be
// present somewhere in the folder hierarchy between dir and the .pulumi folder.
func NewFrom(dir string) (W, error) {
	repo, err := GetRepository(dir)
	if err != nil {
		return nil, err
	}

	path, err := DetectProjectPathFrom(dir)
	if err != nil {
		return nil, err
	} else if path == "" {
		return nil, errors.New("no Pulumi.yaml project file found")
	}

	proj, err := LoadProject(path)
	if err != nil {
		return nil, err
	}

	w := projectWorkspace{
		name:    proj.Name,
		project: path,
		repo:    repo,
	}

	err = w.readSettings()
	if err != nil {
		return nil, err
	}

	if w.settings.ConfigDeprecated == nil {
		w.settings.ConfigDeprecated = make(map[tokens.QName]config.Map)
	}

	return &w, nil
}

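// Illustrative sketch (the directory path is hypothetical): since project and
// repository detection search the folder hierarchy between dir and the
// .pulumi folder, a workspace can be opened from a subdirectory of a project.
//
//	w, err := NewFrom("/home/alice/myproj/src/component")
//	if err != nil {
//		// dir is not inside a Pulumi project
//	}
//	_ = w
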
func (pw *projectWorkspace) Settings() *Settings {
	return pw.settings
}

func (pw *projectWorkspace) Repository() *Repository {
	return pw.repo
}

func (pw *projectWorkspace) Save() error {
	// Remove any empty entries from the deprecated config map before saving.
	for k, v := range pw.settings.ConfigDeprecated {
		if len(v) == 0 {
			delete(pw.settings.ConfigDeprecated, k)
		}
	}

	settingsFile := pw.settingsPath()
	// Ensure the directory for the settings file exists before writing it.
	err := os.MkdirAll(filepath.Dir(settingsFile), 0700)
	if err != nil {
		return err
	}

	b, err := json.MarshalIndent(pw.settings, "", "    ")
	if err != nil {
		return err
	}

	return ioutil.WriteFile(settingsFile, b, 0600)
}

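// Illustrative sketch of the mutate-then-Save flow (which fields to mutate
// depends on the Settings type, defined elsewhere in this package):
//
//	settings := w.Settings() // mutable pointer; edits are picked up by Save
//	// ...mutate settings as needed...
//	if err := w.Save(); err != nil {
//		// failed to write the workspace settings file
//	}
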
func (pw *projectWorkspace) StackPath(stack tokens.QName) string {
	path := filepath.Join(pw.Repository().Root, StackDir, pw.name.String())
	if stack != "" {
		path = filepath.Join(path, qnamePath(stack)+".json")
	}
	return path
}

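// Illustrative sketch (project and stack names hypothetical): for a project
// named "myproj", StackPath composes paths of the shape
//
//	pw.StackPath("production") // <repo root>/<StackDir>/myproj/production.json
//	pw.StackPath("")           // <repo root>/<StackDir>/myproj (the directory itself)
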
func (pw *projectWorkspace) BackupDirectory() (string, error) {
	user, err := user.Current()
	if user == nil || err != nil {
		return "", errors.New("failed to get current user")
	}

	projectDir := filepath.Dir(pw.project)
	projectBackupDirName := filepath.Base(projectDir) + "-" + sha1HexString(projectDir)
	return filepath.Join(user.HomeDir, BookkeepingDir, BackupDir, projectBackupDirName), nil
}

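// Illustrative sketch (home directory and project path hypothetical): for a
// project file at /home/alice/myproj/Pulumi.yaml, backups land under
//
//	/home/alice/<BookkeepingDir>/<BackupDir>/myproj-<sha1 of "/home/alice/myproj">
//
// The hash suffix keeps two projects that share a base name from colliding.
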
func (pw *projectWorkspace) HistoryDirectory(stack tokens.QName) string {
	path := filepath.Join(pw.Repository().Root, HistoryDir, pw.name.String())
	if stack != "" {
		return filepath.Join(path, qnamePath(stack))
	}
	return path
}

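// Illustrative sketch: unlike StackPath, which resolves to a single JSON
// file, each stack's history is a directory, so multiple update records can
// accumulate under it. The stack name is hypothetical:
//
//	pw.HistoryDirectory("production") // <repo root>/<HistoryDir>/<project>/production
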
func (pw *projectWorkspace) readSettings() error {
	settingsPath := pw.settingsPath()

	b, err := ioutil.ReadFile(settingsPath)
	if err != nil && os.IsNotExist(err) {
		// not an error to not have an existing settings file.
		pw.settings = &Settings{}
		return nil
	} else if err != nil {
		return err
	}

	var settings Settings
	err = json.Unmarshal(b, &settings)
	if err != nil {
		return err
	}

	pw.settings = &settings
	return nil
}

func (pw *projectWorkspace) settingsPath() string {
	return filepath.Join(pw.Repository().Root, WorkspaceDir, pw.name.String(), WorkspaceFile)
}

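// Illustrative sketch of the resulting layout (the concrete names depend on
// the WorkspaceDir and WorkspaceFile constants defined elsewhere in this
// package):
//
//	<repo root>/<WorkspaceDir>/<project name>/<WorkspaceFile>
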
// sha1HexString returns a hex string of the sha1 hash of value.
func sha1HexString(value string) string {
	h := sha1.New()
	_, err := h.Write([]byte(value))
	contract.AssertNoError(err)
	return hex.EncodeToString(h.Sum(nil))
}

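// Illustrative sketch: the digest is deterministic and always 40 hex
// characters, which makes it safe to embed in a directory name. For example:
//
//	sha1HexString("") // "da39a3ee5e6b4b0d3255bfef95601890afd80709"
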
// qnameFileName takes a qname and cleans it for use as a filename (by replacing tokens.QNameDelimiter with a dash).
func qnameFileName(nm tokens.QName) string {
	return strings.Replace(string(nm), tokens.QNameDelimiter, "-", -1)
}

// qnamePath just cleans a name and makes sure it's appropriate to use as a path.
func qnamePath(nm tokens.QName) string {
	return stringNamePath(string(nm))
}

// stringNamePath cleans a string component of a name and makes sure it's appropriate to use as a path.
func stringNamePath(nm string) string {
	return strings.Replace(nm, tokens.QNameDelimiter, string(os.PathSeparator), -1)
}

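// Illustrative sketch, assuming tokens.QNameDelimiter is "/" (its value is
// defined in the tokens package, not here):
//
//	qnameFileName("proj/sub/stack") // "proj-sub-stack" (flat file name)
//	qnamePath("proj/sub/stack")     // nested directories, i.e. the result of
//	                                // filepath.Join("proj", "sub", "stack")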