
pulumi-gcp's Introduction


Pulumi's Infrastructure as Code SDK is the easiest way to build and deploy infrastructure, of any architecture and on any cloud, using programming languages that you already know and love. Code and ship infrastructure faster with your favorite languages and tools, and embed IaC anywhere with Automation API.

Simply write code in your favorite language and Pulumi automatically provisions and manages your resources on AWS, Azure, Google Cloud Platform, Kubernetes, and 120+ providers using an infrastructure-as-code approach. Skip the YAML, and use standard language features like loops, functions, classes, and package management that you already know and love.

For example, create three web servers:

const aws = require("@pulumi/aws");
const sg = new aws.ec2.SecurityGroup("web-sg", {
    ingress: [{ protocol: "tcp", fromPort: 80, toPort: 80, cidrBlocks: ["0.0.0.0/0"] }],
});
for (let i = 0; i < 3; i++) {
    new aws.ec2.Instance(`web-${i}`, {
        ami: "ami-7172b611",
        instanceType: "t2.micro",
        vpcSecurityGroupIds: [sg.id],
        userData: `#!/bin/bash
            echo "Hello, World!" > index.html
            nohup python -m SimpleHTTPServer 80 &`,
    });
}

Or a simple serverless timer that archives Hacker News every day at 8:30AM:

const aws = require("@pulumi/aws");

const snapshots = new aws.dynamodb.Table("snapshots", {
    attributes: [{ name: "id", type: "S", }],
    hashKey: "id", billingMode: "PAY_PER_REQUEST",
});

aws.cloudwatch.onSchedule("daily-yc-snapshot", "cron(30 8 * * ? *)", () => {
    require("https").get("https://news.ycombinator.com", res => {
        let content = "";
        res.setEncoding("utf8");
        res.on("data", chunk => content += chunk);
        res.on("end", () => new aws.sdk.DynamoDB.DocumentClient().put({
            TableName: snapshots.name.get(),
            Item: { date: Date.now(), content },
        }).promise());
    }).end();
});

Many examples are available spanning containers, serverless, and infrastructure in pulumi/examples.

Pulumi is open source under the Apache 2.0 license, supports many languages and clouds, and is easy to extend. This repo contains the pulumi CLI, language SDKs, and core Pulumi engine, and individual libraries are in their own repos.

Welcome

  • Get Started with Pulumi: Deploy a simple application in AWS, Azure, Google Cloud, or Kubernetes using Pulumi.

  • Learn: Follow Pulumi learning pathways to learn best practices and architectural patterns through authentic examples.

  • Examples: Browse several examples across many languages, clouds, and scenarios including containers, serverless, and infrastructure.

  • Docs: Learn about Pulumi concepts, follow user-guides, and consult the reference documentation.

  • Registry: Find the Pulumi Package with the resources you need. Install the package directly into your project, browse the API documentation, and start building.

  • Pulumi Roadmap: Review the planned work for the upcoming quarter and a selected backlog of issues that are on our mind but not yet scheduled.

  • Community Slack: Join us in Pulumi Community Slack. All conversations and questions are welcome.

  • GitHub Discussions: Ask questions or share what you're building with Pulumi.

Getting Started


See the Get Started guide to quickly get started with Pulumi on your platform and cloud of choice.

Otherwise, the following steps demonstrate how to deploy your first Pulumi program, using AWS Serverless Lambdas, in minutes:

  1. Install:

    To install the latest Pulumi release, run the following (see full installation instructions for additional installation options):

    $ curl -fsSL https://get.pulumi.com/ | sh
  2. Create a Project:

    After installing, you can get started with the pulumi new command:

    $ mkdir pulumi-demo && cd pulumi-demo
    $ pulumi new hello-aws-javascript

    The new command offers templates for all languages and clouds. Run it without an argument and it'll prompt you with the available templates. The hello-aws-javascript template used above creates an AWS Serverless Lambda project written in JavaScript.

  3. Deploy to the Cloud:

    Run pulumi up to get your code to the cloud:

    $ pulumi up

    This provisions all the cloud resources needed to run your code. Simply make edits to your project, and subsequent pulumi ups will compute the minimal diff to deploy your changes.

  4. Use Your Program:

    Now that your code is deployed, you can interact with it. In the above example, we can curl the endpoint:

    $ curl $(pulumi stack output url)
  5. Access the Logs:

    If you're using containers or functions, Pulumi's unified logging command will show all of your logs:

    $ pulumi logs -f
  6. Destroy your Resources:

    After you're done, you can remove all resources created by your program:

    $ pulumi destroy -y

To learn more, head over to pulumi.com for much more information, including tutorials, examples, and details of the core Pulumi CLI and programming model concepts.

Platform

Languages

Language             | Status         | Runtime | Versions
JavaScript           | Stable         | Node.js | Current, Active and Maintenance LTS versions
TypeScript           | Stable         | Node.js | Current, Active and Maintenance LTS versions
Python               | Stable         | Python  | Supported versions
Go                   | Stable         | Go      | Supported versions
.NET (C#/F#/VB.NET)  | Stable         | .NET    | Supported versions
Java                 | Public Preview | JDK     | 11+
YAML                 | Stable         | n/a     | n/a

EOL Releases

The Pulumi CLI v1 and v2 are no longer supported. If you are not yet running v3, please consider migrating to v3 to continue getting the latest and greatest Pulumi has to offer! 💪

Clouds

Visit the Registry for the full list of supported cloud and infrastructure providers.

Contributing

Visit CONTRIBUTING.md for information on building Pulumi from source or contributing improvements.


pulumi-gcp's Issues

crash while importing gke NodePool: panic: runtime error: index out of range goroutine 91 [running]:

Pulumi crashes when attempting to import a NodePool via gcp.container.NodePool.get.

Code to reproduce:

const defaultPool = gcp.container.NodePool.get('default-pool', 'default-pool', {
  cluster: 'my-cluster',
  name: 'default-pool',
  zone: 'us-west1'
});

crash logs

  panic: runtime error: index out of range
    goroutine 91 [running]:
    github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google.getNodePoolName(0xc420956030, 0xc, 0xc420728810, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google/resource_container_node_pool.go:749 +0x7b
    github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google.resourceContainerNodePoolExists(0xc42013c4d0, 0x27b8360, 0xc42008c160, 0xc42013c4d0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google/resource_container_node_pool.go:393 +0xc0
    github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema.(*Resource).Refresh(0xc4203b0770, 0xc420912050, 0x27b8360, 0xc42008c160, 0xc4203f04a8, 0x1, 0x2a137a0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema/resource.go:329 +0x36f
    github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema.(*Provider).Refresh(0xc4203e4460, 0xc42090d5c0, 0xc420912050, 0xc4203e1ef0, 0x0, 0xc420764000)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema/provider.go:308 +0x9a
    github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi-terraform/pkg/tfbridge.(*Provider).Read(0xc420351680, 0x38f4620, 0xc4207281e0, 0xc420912000, 0xc420351680, 0x1, 0x1)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi-terraform/pkg/tfbridge/provider.go:517 +0x5f5
    github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler.func1(0x38f4620, 0xc4207281e0, 0x2a70960, 0xc420912000, 0x38f4620, 0xc4207281e0, 0x38faac0, 0x39a9b10)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1247 +0x86
    github.com/pulumi/pulumi-gcp/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x38f4620, 0xc4207281e0, 0x2a70960, 0xc420912000, 0xc42094a0a0, 0xc42094a0c0, 0x0, 0x0, 0x38dfde0, 0xc420427580)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:57 +0x2d7
    github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler(0x2b4b4c0, 0xc420351680, 0x38f4620, 0xc420728060, 0xc42013c070, 0xc42043a700, 0x0, 0x0, 0xc420558a50, 0x1261747)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1249 +0x16d
    github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4201f01c0, 0x38f9960, 0xc420162000, 0xc42034e690, 0xc4205650b0, 0x397b1f8, 0x0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:1011 +0x50b
    github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4201f01c0, 0x38f9960, 0xc420162000, 0xc42034e690, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:1249 +0x1528
    github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc420482050, 0xc4201f01c0, 0x38f9960, 0xc420162000, 0xc42034e690)
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:680 +0x9f
    created by github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
        /home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:678 +0xa1

    error: Running program '/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b' failed with an unhandled exception:
    error: Error: invocation of gcp:container/getCluster:getCluster returned an error: transport is closing
        at monitor.invoke (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/@solvvy/pulumi-util/node_modules/@pulumi/pulumi/runtime/invoke.js:72:33)
        at Object.onReceiveStatus (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:1189:9)
        at InterceptingListener._callNext (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:564:42)
        at InterceptingListener.onReceiveStatus (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:614:8)
        at callback (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:841:24)

  gcp:container:NodePool (default-pool):
    error: Plan apply failed: transport is closing

  gcp:container:Cluster (api-cluster):
    error: Plan apply failed: all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :64549: connect: connection refused"

versions:

   "@pulumi/gcp": "0.16.1",
    "@pulumi/kubernetes": "0.17.3",
    "@pulumi/pulumi": "0.16.1"

pulumi CLI: 0.16.1

After creating GKE cluster, calling Pulumi Refresh causes a recreate on the cluster/node pools without other changes.

After I run pulumi up and create:
1 network
1 subnetwork (with multiple CIDRs)
1 GKE cluster
2 Node Pools

calling pulumi refresh followed by pulumi up recreates the cluster and node pools and makes additional changes to the subnetwork.

config.ts:

import { Config } from "@pulumi/pulumi";
import * as pulumi from "@pulumi/pulumi";
const config = new Config();

export const name = pulumi.getStack();
export const maintenanceWindow = config.get("startTime") || "09:00";
export const gkeVersion = config.get("gkeVersion") || "1.12.6-gke.7";
export const mainCIDR = config.get("mainCIDR") || "10.0.0.0/16";
export const clusterCIDR = config.get("clusterCIDR") || "10.1.0.0/16";
export const serviceCIDR = config.get("serviceCIDR") || "10.2.0.0/16";
export const coreMin = config.getNumber("coreMin") || 2;
export const coreInitial = config.getNumber("coreInitial") || 2;
export const coreType = config.get("coreType") || "n1-standard-1";
export const coreMax = config.getNumber("coreMax") || 10;
export const coreDisk = config.getNumber("coreDisk") || 100;
export const siteMin = config.getNumber("siteMin") || 1;
export const siteInitial = config.getNumber("siteInitial") || 1;
export const siteType = config.get("siteType") || "n1-standard-1";
export const siteMax = config.getNumber("siteMax") || 10;
export const siteDisk = config.getNumber("siteDisk") || 100;

index.ts:

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

import {
    name,
    maintenanceWindow,
    gkeVersion,
    coreInitial,
    coreMax,
    coreMin,
    coreType,
    coreDisk,
    siteInitial,
    siteMax,
    siteMin,
    siteType,
    siteDisk,
    mainCIDR,
    serviceCIDR,
    clusterCIDR
} from "./config";

export const vpc = new gcp.compute.Network(name, {
    name: name,
    autoCreateSubnetworks: false
});

const subnet = new gcp.compute.Subnetwork(name, {
    name: name,
    network: vpc.selfLink,
    ipCidrRange: mainCIDR,
    secondaryIpRanges: [
        {
            ipCidrRange: serviceCIDR,
            rangeName: "services",
        },
        {
            ipCidrRange: clusterCIDR,
            rangeName: "pods",
        }]
}, {
    dependsOn: vpc,
    parent: vpc
});

const k8sCluster = new gcp.container.Cluster("gke-cluster",
    {
        name: name,
        network: vpc.selfLink,
        subnetwork: subnet.selfLink,
        minMasterVersion: gkeVersion,
        nodeVersion: gkeVersion,
        loggingService: "logging.googleapis.com/kubernetes",
        monitoringService: "monitoring.googleapis.com/kubernetes",
        addonsConfig: {
            httpLoadBalancing: {
                disabled: false
            }
        },
        maintenancePolicy: {
            dailyMaintenanceWindow: {
                startTime: maintenanceWindow
            }
        },
        ipAllocationPolicy: {
            servicesSecondaryRangeName: "services",
            clusterSecondaryRangeName: "pods"
        },
        networkPolicy: {
            provider: "CALICO",
            enabled: true
        },
        nodePools: [{name: "default-pool",}],
        removeDefaultNodePool: true,
    },
    {
        deleteBeforeReplace: true,
        parent: subnet,
        dependsOn: subnet
    }
);
//Setup NodePools
export const corePool = new gcp.container.NodePool("corePool", {
    name: "core",
    cluster: k8sCluster.name,
    initialNodeCount: coreInitial,
    nodeConfig: {
        machineType: coreType,
        diskSizeGb: coreDisk,
        diskType: "pd-standard",
        oauthScopes: [
            "https://www.googleapis.com/auth/cloud-platform"
        ],
        imageType: "COS",
    },
    autoscaling: {
        maxNodeCount: coreMax,
        minNodeCount: coreMin
    },
    management: {
        autoUpgrade: true,
        autoRepair: true
    },
    version: gkeVersion
}, {
    deleteBeforeReplace: true,
    parent: k8sCluster,
    dependsOn: k8sCluster
});

const sitePool = new gcp.container.NodePool("sitePool", {
    name: "sites",
    cluster: k8sCluster.name,
    initialNodeCount: siteInitial,
    nodeConfig: {
        machineType: siteType,
        diskSizeGb: siteDisk,
        diskType: "pd-standard",
        oauthScopes: [
            "https://www.googleapis.com/auth/cloud-platform"
        ],
        imageType: "COS",
    },
    autoscaling: {
        minNodeCount: siteMin,
        maxNodeCount: siteMax
    },
    management: {
        autoRepair: true,
        autoUpgrade: true
    },
    version: gkeVersion
},{
    parent: k8sCluster,
    deleteBeforeReplace: true,
});

Refresh on existing resources:

 ~         └─ gcp:container:Cluster           gke-cluster    update     [diff: +nodeConfig~instanceGroupUrls,nodePools]
              ├─ gcp:container:NodePool       corePool
              ├─ pulumi:providers:kubernetes  gkeK8s
              └─ gcp:container:NodePool       sitePool                  

Update Run

  +-        └─ gcp:container:Cluster           gke-cluster    replace     [diff: -additionalZones,clusterAutoscaling,clusterIpv4Cidr,defaultMaxPodsPerNode,masterAuth,nodeConfig,project,zone~addonsConfig,ipAllocationPolicy,maintenancePolicy,network,nodePools,subnetw
 +-           ├─ gcp:container:NodePool       sitePool       replace     [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig]
 +-           ├─ gcp:container:NodePool       corePool       replace     [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig]
 ~            └─ pulumi:providers:kubernetes  gkeK8s         update      [diff: ~kubeconfig]

Refresh after "Console Upgrade of cluster"

~         └─ gcp:container:Cluster           gke-cluster    update     [diff: +nodeConfig~instanceGroupUrls,masterVersion,nodePools,nodeVersion]
              ├─ pulumi:providers:kubernetes  gkeK8s
 ~            ├─ gcp:container:NodePool       sitePool       update     [diff: ~nodeCount,version]
 ~            └─ gcp:container:NodePool       corePool       update     [diff: ~version]

 Update Plan:
      pulumi:pulumi:Stack                      cluster-repro              
     └─ gcp:compute:Network                   repro                      [diff: -description,ipv4Range,project,routingMode]
        └─ gcp:compute:Subnetwork             repro                      [diff: -description,enableFlowLogs,privateIpGoogleAccess,project,region]
 +-        └─ gcp:container:Cluster           gke-cluster    replace     [diff: -additionalZones,clusterAutoscaling,clusterIpv4Cidr,defaultMaxPodsPerNode,masterAuth,nodeConfig,project,zone~addonsConfig,ipAllocationPolicy,maintenancePolicy,network,nodePools,nodeVer
 +-           ├─ gcp:container:NodePool       corePool       replace     [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig,version]
 +-           ├─ gcp:container:NodePool       sitePool       replace     [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig,version]
 ~            └─ pulumi:providers:kubernetes  gkeK8s         update      [diff: ~kubeconfig]

When getting an existing cluster, there isn't a way to pull the kubeconfig

In the current cluster object, we can use the cluster props: certificateAuthority, name, and endpoint to fill in the blanks in the kubeconfig file https://gist.github.com/b01eeecacccc3e284771463ed626af5e

We only do this right now in the object constructor, when we should actually have a kubeconfig() method on the object.
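
For reference, a sketch of what such a helper would return, assembled in user code today (assuming a cluster variable of type gcp.container.Cluster; the property names follow the Cluster outputs, and the auth-provider stanza is the standard gcloud-based GKE kubeconfig shape):

import * as pulumi from "@pulumi/pulumi";

// `cluster` is assumed to be a gcp.container.Cluster, created or fetched via .get().
export const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, auth]) => `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${auth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${name}
contexts:
- context: {cluster: "${name}", user: "${name}"}
  name: ${name}
current-context: ${name}
kind: Config
users:
- name: ${name}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`);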

see community slack: https://pulumi-community.slack.com/archives/C84L4E3N1/p1550871454193500?thread_ts=1550862777.160400&cid=C84L4E3N1

related: pulumi/pulumi-aws#478

gcp.storage.BucketObject does not refresh an object if using the source parameter instead of content

Discussion on this matter with @clstokes on slack here : https://pulumi-community.slack.com/archives/C84L4E3N1/p1548693619818500

Problem

When creating an object with the source parameter on a gcp.storage.BucketObject pulumi (or the provider) does not detect a change on the local file and thus does not update the related bucket object.

How to reproduce

  1. Create an object in a gcp bucket referencing a local file:

new gcp.storage.BucketObject('my-object', {
  bucket: 'bucket-name',
  name: 'my-object',
  source: 'absolute/path/to/file.ext',
});

  2. pulumi up succeeds in creating the object in the bucket.
  3. Update the local file content (name and path stay unchanged; only the md5 differs).
  4. Run pulumi up again -> no change detected.
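
A workaround until this is fixed (a sketch, assuming the file is small and textual) is to pass the file body via content instead, since content is diffed by value:

const fs = require("fs");
const gcp = require("@pulumi/gcp");

new gcp.storage.BucketObject('my-object', {
  bucket: 'bucket-name',
  name: 'my-object',
  // Reading the file makes its bytes part of the resource inputs,
  // so an edit to the file shows up as a diff on the next pulumi up.
  content: fs.readFileSync('absolute/path/to/file.ext', 'utf8'),
});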

Unable to create regional cluster

@pulumi/[email protected]

export const k8sCluster = new gcp.container.Cluster('gke-cluster', {
  initialNodeCount: 1,
  nodeVersion: 'latest',
  minMasterVersion: 'latest',
  nodeConfig: {
    machineType: 'n1-standard-1',
    oauthScopes: [
      'https://www.googleapis.com/auth/compute',
      'https://www.googleapis.com/auth/devstorage.read_only',
      'https://www.googleapis.com/auth/logging.write',
      'https://www.googleapis.com/auth/monitoring',
    ],
  },

  // regional cluster
  location: 'us-central1',
  nodeLocations: ['us-central1-f', 'us-central1-b'],
})

This fails with:

  gcp:container:Cluster (gke-cluster):
    error: gcp:container/cluster:Cluster resource 'gke-cluster' has a problem: : invalid or unknown key: location
    error: gcp:container/cluster:Cluster resource 'gke-cluster' has a problem: : invalid or unknown key: node_locations

This should be possible based on #119

Support PR builds from forks

Right now (as in #134) we see forks fail to build, as the secret material is not available to decrypt and openssl fails early on.

Missing configuration leads to poor errors

If I neglect to set my project, region, or zone configuration, I get an unhelpful error message:

error: Plan apply failed: project: required field is not set

Given that this is almost guaranteed to happen for every new user of Pulumi on GCP, we should try to make this experience better. IIRC, we have a trick that we already use in the AWS package to hook errors like this and prettify them before displaying them to the end user.
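
For reference, the configuration the error is indirectly asking for is set per stack with the standard config commands:

$ pulumi config set gcp:project my-project
$ pulumi config set gcp:region us-central1
$ pulumi config set gcp:zone us-central1-a

A friendlier error would point the user at exactly these commands.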

`Invalid address to set: []string{"cluster_autoscaling"}` with `0.16.4`

Users of the latest release of this provider have noted that they see errors when creating gcp.container.Cluster resources:

Invalid address to set: []string{"cluster_autoscaling"}

This happens even after downgrading to the 0.16.3 version of the NPM package, because it affects the provider binary itself, which always loads the latest version. pulumi plugin rm resource gcp 0.16.4 can likely fix this in the short term.

Enabling compute service in project before using getZones.

I need to get the available zones in a region, provided that the compute service has already been enabled in the project.

const services = new gcp.projects.Services('services', {
    services: ['compute.googleapis.com'],
});

const available = pulumi.output(gcp.compute.getZones({}));

How can I make sure that getZones runs only once the service has been enabled? I don't see a way of handling the dependency.

Thx
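
One way to express the dependency (a sketch, reusing the services resource from the snippet above): chain the invoke off one of the resource's outputs, so it only runs after the services have been enabled.

const available = services.id.apply(() => gcp.compute.getZones({}));

Since the callback only executes once services.id resolves, the getZones call is sequenced after the Services resource has been created.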

GCP node.js provider defaults to automaticRestart=false for VMs

Test driving Pulumi on GCP, I used the following config:

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const vm = new gcp.compute.Instance("pulumi-test", {
    bootDisk: {
        initializeParams: {
            image: "cos-cloud/cos-stable",
        },
    },
    deletionProtection: true,
    machineType: "g1-small",
    networkInterfaces: [{
        accessConfigs: [{}],
        network: "default",
    }],
    zone: "us-central1-a",
    name: "pulumi-test",
});

Upon applying this config, I end up with a GCE VM that has automatic restarts turned off. Specifically, the property scheduling.automaticRestart is false.

The default for GCE is true, meaning that if the platform has to halt the VM for whatever reason it'll restart it. This feels like a landmine in Pulumi, where it's deviating from the platform default settings.

I believe the default should be to set automaticRestart=true, to match the platform's recommended default. WDYT?
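
Until the default changes, setting the scheduling block explicitly restores the platform behavior (a sketch; all other arguments as in the snippet above):

const vm = new gcp.compute.Instance("pulumi-test", {
    // ...bootDisk, machineType, networkInterfaces, zone, name as above...
    scheduling: { automaticRestart: true },
});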

Provider is removing roles from Pulumi and non-Pulumi-managed accounts

From the community slack:

In prototyping I am tearing down app and infrastructure (cluster, db) stacks but leaving up the identity stack (GCP). It seems roles are disappearing from BOTH Pulumi-related GCP service accounts AND non-Pulumi-related service accounts. We have not seen this issue prior to Pulumi, so I am correlating it with my activity in the same GCP project.

https://gist.github.com/rosskevin/00f05766829a9b45888c508949399f0a

One [role assignment] was pulumi, one was:

gcloud iam service-accounts create ${BUILD_SA} \
    --project=${GOOGLE_CLOUD_PROJECT} \
    --display-name ${BUILD_SA}

gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
    --role roles/storage.admin \
    --member serviceAccount:${BUILD_SA_EMAIL}

Unhelpful message when role ID has a `-`

When I create this role:

export const testCiRole = new gcp.projects.IAMCustomRole(config.appName, {
    roleId: "test-ci",
    title: "Test CI role",
    project: config.project,
    permissions: [...]
})

I get this error:

    error: Plan apply failed: Unable to verify whether custom project role projects/pulumi-development/roles/test-ci already exists and must be undeleted: Error reading Custom Project Role "projects/pulumi-development/roles/test-ci": googleapi: Error 400: The role name must be in the form "roles/{role}", "organizations/{organization_id}/roles/{role}", or "projects/{project_id}/roles/{role}"., badRequest

The problem is actually that there is a - in the roleId. If you use testci instead, it works.

"unexpected archive source" while trying to create BucketObject

The following code:

let bucket = new gcp.storage.Bucket("bucket", {});
let data = new gcp.storage.BucketObject("object", {
    bucket: bucket.name,
    source: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./javascript"),
    }),
});

fails when run by pulumi update:

Previewing changes:

     Type                   Name                         Plan       Info
 +   pulumi:pulumi:Stack    gcp-blablabla-gcp-blablabla  create     1 error
 +   └─ gcp:storage:Bucket  bucket                       create

Diagnostics:
  pulumi:pulumi:Stack: gcp-blablabla-gcp-blablabla
    error: unexpected archive source

error: an error occurred while advancing the preview

Provisioning error shows incorrect project number

When trying to bring up a stack that included a gcp.sql.DatabaseInstance, I got this error:

Plan apply failed: Error, failed to create instance unleash: googleapi: Error 403: Access Not Configured. Cloud SQL Admin API has not been used in project 563584335869 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview?project=563584335869 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry., accessNotConfigured

I do have sqladmin.googleapis.com enabled, but 563584335869 is not my project.

`get` seems to require name in too many places

This does not work:

gcp.container.Cluster.get("<cluster-name>", "<cluster-name>", {
    project: "<project-name>"
});

but this does:

gcp.container.Cluster.get("<cluster-name>", "<cluster-name>", {
    name: "<cluster-name>",
    project: "<project-name>"
});

This is confusing -- why is name required? Is this the desired behavior?

Add randomness to service account and IAM IDs

There are several properties in the GCP provider that must be globally unique but aren't name, and so don't get our auto-naming suffix. For instance, gcp.projects.IAMCustomRole has roleId and gcp.serviceAccount.Account has accountId. Although it's possible to use something like our @pulumi/random package to add some safe randomness to these names, it'd be nicer if it happened automatically.
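
Until that happens, a sketch of the manual approach with @pulumi/random (the six-character length is an arbitrary choice; lowercase alphanumerics keep the ID within GCP's allowed charset):

import * as gcp from "@pulumi/gcp";
import * as random from "@pulumi/random";

const suffix = new random.RandomString("sa-suffix", {
    length: 6,
    special: false,
    upper: false,
});

const account = new gcp.serviceAccount.Account("builder", {
    // accountId must be globally unique, so append the random suffix.
    accountId: suffix.result.apply(r => `builder-${r}`),
    displayName: "builder",
});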

Terraform-provider-google

Pulumi's fork of terraform-provider-google obliterates the vendor tree and re-vendors the latest iterations of the dependencies without any constraints. This diverges quite a bit from the released version and could potentially be a problem when merging back and/or integrating further changes from master. We need to think through the best way to address this.

googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden

Use Case:

Create a bucket in GCP to host a static website.

Details:

GCP requires that the bucket be named after the domain and that we create a CNAME record in DNS pointing it to Google storage. I am able to create the bucket manually from the GCP Console with the domain name, say, www.myawesomesite.com. However, when I try to create the same bucket programmatically using Pulumi, I get the following error.

NOTE: I am the verified owner of the domain, the domain itself is also verified and I have also already added Pulumi's service account email as the domain owner in Webmaster tools - https://www.google.com/webmasters/tools/dashboard.

Code:

const pulumi = require("@pulumi/pulumi");
const gcp = require('@pulumi/gcp');
const mime = require('mime');
const fs = require('fs');
const path = require('path');

const siteDir = path.join(__dirname, 'www');
const siteBucket = new gcp.storage.Bucket('pulumi-demo-bucket', {
    name: 'www.myawesomesite.com',
    websites: [
        {
            mainPageSuffix: 'index.html',
            notFoundPage: '404.html'
        }
    ]
});
const defaultAcl = new gcp.storage.BucketACL('pulumi-demo-acl', {
    bucket: siteBucket,
    defaultAcl: 'publicRead'
});


// For each file in the directory, create an object stored in `siteBucket`
fs.readdirSync(siteDir)
    .forEach(item => {
        let filePath = path.join(siteDir, item);
        let object = new gcp.storage.BucketObject(item, {
            bucket: siteBucket,
            source: filePath,
            contentType: mime.getType(filePath) || undefined,
        });
    });

// Stack exports
exports.bucketName = siteBucket.name;

ERROR:

$ pulumi update
Previewing update of stack 'plume-demo'
Previewing changes:

 *  pulumi:pulumi:Stack pulumi-demo-plume-demo running
 +  gcp:storage:Bucket pulumi-demo-bucket create
 +  gcp:storage:BucketACL pulumi-demo-acl create
 +  gcp:storage:BucketObject favicon.png create
 +  gcp:storage:BucketObject index.html create

info: 4 changes previewed:
    + 4 resources to create
      1 resource unchanged

Updating stack 'plume-demo'
Performing changes:

 *  pulumi:pulumi:Stack pulumi-demo-plume-demo running
 +  gcp:storage:Bucket pulumi-demo-bucket creating
 +  gcp:storage:Bucket pulumi-demo-bucket creating 1 error. error: Plan apply failed: creating urn:pulumi:plume-demo::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo-bucket: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden
 +  gcp:storage:Bucket pulumi-demo-bucket **creating failed** 1 error. error: Plan apply failed: creating urn:pulumi:plume-demo::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo-bucket: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden
 +  gcp:storage:Bucket pulumi-demo-bucket **creating failed** 2 errors. error: update failed
 *  pulumi:pulumi:Stack pulumi-demo-plume-demo done

Diagnostics:
  gcp:storage:Bucket: pulumi-demo-bucket
    error: Plan apply failed: creating urn:pulumi:plume-demo::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo-bucket: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden

    error: update failed

info: no changes required:
      1 resource unchanged

Bucket's created on behalf of users should try to adhere to name restrictions

If I do:

bucket.onObjectFinalized("newObject", async (data) => {
    console.log(data);
});

I get:

error: Plan apply failed: googleapi: Error 400: Invalid bucket name: 'newObject-f691b50', invalid

It turns out these names can only include lowercase letters (among other restrictions; see https://cloud.google.com/storage/docs/naming), though the error message unfortunately makes this unclear.

We may need to lowercase the name we pass along on the users' behalf here, and may also want to enforce the 63 character limit on the underlying Bucket name so that our addition of some random hex doesn't accidentally trigger issues.

consider adding support for terraform provider google beta

It recently came to my attention that Google is apparently planning to split the Google Terraform provider into two flavors very soon (the 2.0.0 release; https://www.terraform.io/docs/providers/google/provider_versions.html): one regular (stable) provider and one beta provider. The difference is that the regular one only supports GCP APIs marked as stable, whereas the beta one also supports beta APIs and features.
Since Google has a philosophy of keeping things in beta for a long time, the 1.x.x provider contains a bunch of beta features that are already used by customers in production (personally we use GKE node pool taints, which Google considers beta and which will be removed in the future provider version).

While this is currently not an issue (since Pulumi adopts the 1.x.x version of the Terraform provider), I think it may become an issue in the near future, and I wanted to bring it to Pulumi's attention early enough. I'm not sure what the right solution is, but it would certainly be good if Pulumi customers had the choice to use only stable or beta Google APIs, just like they would when using Terraform. This may require creating two Pulumi GCP flavors as well.

On another note: Google recently moved to auto-generating most of the Google Terraform provider via a meta module called 'magic-modules': https://github.com/GoogleCloudPlatform/magic-modules/. The idea of magic-modules, as I understand it, is to auto-generate SDKs for different languages and platforms (currently Terraform, Ansible, Puppet, Chef). Mid-term, this might be an opportunity for Pulumi to auto-generate the pulumi-gcp module directly from magic-modules instead of from the Terraform provider.

gcp.compute.Network.get is successful even when it does not exist

export const fooNetwork = gcp.compute.Network.get('foo', 'foo', { project })

returns:

Outputs:
  + fooNetwork: {
      + id     : "foo"
      + project: "acme"
      + urn    : "urn:pulumi:development::acme-infrastructure::gcp:compute/network:Network::foo"
    }

The project exists but the network does not exist. I would expect this to fail to read.

DatabaseInstance replace logic seems wrong

Modifying the ipConfiguration setting for an existing DatabaseInstance triggers a replace which obviously results in data loss:

const instance = new gcp.sql.DatabaseInstance("master", {
    databaseVersion: "POSTGRES_9_6",
    settings: {
        tier: "db-f1-micro",
        ipConfiguration: {
            authorizedNetworks: [
                {
                    name: "VPN",
                    value: "1.2.3.4",
                },
            ],
        },
    },
});

I would expect this to simply update the existing resource's authorised networks.

Documentation missing on how to use some resources in separate projects/zones

Edit: There is a way to do this using a custom provider, as described in the comments below.

When creating a project-heterogeneous stack (for simplified IAM or other reasons), it would be nice to be able to always specify a project on resources. Not every resource supports this, sometimes in very weird ways, for example:

  const serviceAccount = new gcp.serviceAccount.Account('sa', {
    project: gcpProject.projectId,
    accountId: 'some-account-id',
    displayName: 'some-display-name',
  });

  const token = new gcp.serviceAccount.Key('sa-key', {
    // The next line is a type error, and we cannot specify a project:
    project: gcpProject.projectId,
    serviceAccountId: serviceAccount.accountId,
  });

This means we can create multiple projects and multiple service accounts in a single stack, but all service account keys must be created in the project solely defined by the gcp:project config setting.

Consequently, we must use one stack (and one config file) per GCP project when using some resources, but not others.
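
The custom-provider approach mentioned in the edit above looks roughly like this (a sketch; an explicit gcp.Provider instance scopes a resource to a project even when the resource's own args don't accept one, and the project ID here is hypothetical):

import * as gcp from "@pulumi/gcp";

const otherProject = new gcp.Provider("other-project", {
    project: "my-other-project-id",   // hypothetical project ID
});

const token = new gcp.serviceAccount.Key("sa-key", {
    serviceAccountId: serviceAccount.accountId,
}, { provider: otherProject });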

increasing `minMasterVersion` of google container cluster previews `replacement` instead of `update`

We have a bunch of GKE clusters.
I was just planning to upgrade one of their masters by bumping minMasterVersion.
Unfortunately, when doing so, pulumi preview says it will do a replacement of the entire cluster instead of an update. I'm not sure if this is just an issue with preview, but I'm pretty sure bumping the master version shouldn't replace the entire cluster, and a replacement of a cluster is a pretty scary operation. As an intermediate workaround I upgraded the cluster via GCP's UI and left the Pulumi minMasterVersion param untouched.

Example:

existing cluster with:

export const cluster = new gcp.container.Cluster('api-cluster', {
  name: 'foo',
  initialNodeCount: 1,
  minMasterVersion: '1.10.6-gke.1',
});

when changing minMasterVersion to 1.10.12-gke.1 pulumi preview shows the following:

Previewing update (acme/api-cluster-prod-b):

     Type                                                           Name                                   Plan        Info
     pulumi:pulumi:Stack                                            api-cluster-prod-b-api-cluster-prod-b
 +-  β”œβ”€ gcp:container:Cluster                                       api-cluster                            replace     [diff: ~minMasterVersion]

race condition when changing resource ids leads to inconsistent state

This issue pertains to certain resource types, for example gcp.projects.IAMMember and the order of creation / deletion of those resources. It leads to inconsistencies between what was defined in code vs what's ending up as actual configuration / resources in the cloud provider.

Example:

  1. Create this code:

import * as gcp from '@pulumi/gcp';

new gcp.projects.IAMMember(`my-id`, {
  member: `user:[email protected]`,
  role: 'roles/bigquery.user',
  project: 'my-project'
});

  2. Run pulumi up. This will grant [email protected] the bigquery role.
  3. Make a change to the resource id:

import * as gcp from '@pulumi/gcp';

new gcp.projects.IAMMember(`my-id-changed`, {
  member: `user:[email protected]`,
  role: 'roles/bigquery.user',
  project: 'my-project'
});

  4. Run pulumi up.

Expected Result

The user [email protected] gets granted the role roles/bigquery.user.

Actual Result

In step 4 pulumi first creates a new IAMMember with the resource id my-id-changed and afterwards deletes the resource my-id.
The end result is that the user does NOT get the role roles/bigquery.user even though pulumi's state thinks that is the case.

This is probably a wider issue, either with the gcp provider or pulumi itself and IAMMember is just an example.
FYI, terraform replaces these resources in the correct order (i.e. it deletes the old one first and creates the new one after).
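
Separately, when the goal is just to rename the Pulumi resource (as in this repro), newer engine versions can sidestep the create/delete dance entirely with the aliases resource option, which tells the engine the new URN refers to the same underlying resource (a sketch):

import * as gcp from '@pulumi/gcp';

new gcp.projects.IAMMember(`my-id-changed`, {
  member: `user:[email protected]`,   // same member as in the repro above
  role: 'roles/bigquery.user',
  project: 'my-project'
}, { aliases: [{ name: 'my-id' }] });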

Duplicate URN error when trying to create 2 GKE clusters in different projects/regions

I'm trying to create several clusters with the same name in different regions within the same stack, but it's impossible (e.g. identical clusters named prod in regions europe-west3 and us-east1).
I understand that the cluster name is used by terraform to generate the actual cluster name (with the random string at the end) if you don't specify name in the args. The problem is that the cluster name is used as-is to build the URN, so you can't have two clusters with the same name, even though they're in different regions/projects and that's perfectly valid (even in terraform).
Do you have plans to make names unique by default so it's not up to the user to come up with one? Here, you could use the combination of project id, region/zone, and cluster name to ensure uniqueness.
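
Until then, the workaround is to give the Pulumi resources distinct logical names (which feed the URN) while pinning the real cluster name via the name argument. A sketch, assuming zonal clusters (the same URN fix applies to regional ones):

import * as gcp from '@pulumi/gcp';

const prodEu = new gcp.container.Cluster('prod-europe-west3', {
    name: 'prod',              // actual GKE cluster name, identical in both
    zone: 'europe-west3-a',
});

const prodUs = new gcp.container.Cluster('prod-us-east1', {
    name: 'prod',
    zone: 'us-east1-b',
});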

Cannot update Project folder_id

Somehow pulumi fails while attempting to move a project into a folder.
Versions:
pulumi CLI: v0.17.1
pulumi-gcp: 0.17.1
Runtime: nodejs

Error message:

   * updating urn:pulumi:gcp-project-redacted::gcp-project-redacted::gcp:organizations/project:Project::redacted: 'org_id' and 'folder_id' cannot be both set.

Reproduction:

  1. Pulumi up the following program, which creates a project directly underneath an org:

import * as gcp from '@pulumi/gcp';

const GCP_ORG_ID = 'foo';
const GCP_BILLING_ACCOUNT_ID = 'bar';

new gcp.organizations.Project(
  'some-project-123',
  {
    name: 'some-project-123',
    projectId: 'some-project-123',
    billingAccount: GCP_BILLING_ACCOUNT_ID,
    orgId: GCP_ORG_ID
  }
);
  2. Create a GCP Folder resource and attempt to move the project underneath that folder:

import * as gcp from '@pulumi/gcp';

const GCP_ORG_ID = 'foo';
const GCP_BILLING_ACCOUNT_ID = 'bar';

// this new folder is supposed to contain all prod relevant subprojects underneath.
export const prodFolder = new gcp.organizations.Folder('prod-folder', {
  displayName: 'prod',
  parent: `organizations/${GCP_ORG_ID}`
});

new gcp.organizations.Project(
  'some-project-123',
  {
    name: 'some-project-123',
    projectId: 'some-project-123',
    billingAccount: GCP_BILLING_ACCOUNT_ID,
    folderId: prodFolder.id
  }
);

The error mentioned above will occur. I couldn't really find a good workaround. Even when I move the project underneath the folder manually via the UI, pulumi continues to show that error.

Roll GCP credentials

A private key shows up in plain text in Travis logs when doing a build of this repo. We should deactivate that key and roll our credentials before launch.

"create_timeout": [DEPRECATED] Use timeouts block instead.

This warning is always presented when using gcp.compute.Instance, even when the createTimeout property is not provided.

Diagnostics:
  global: global
    warning: urn:pulumi:luke-serverless-gcp::serverless-gcp::gcp:compute/instance:Instance::www-compute verification warning: "create_timeout": [DEPRECATED] Use timeouts block instead.

Issues with beta features: "cluster_autoscaling": this field cannot be set

I just upgraded to [email protected], which is supposedly based on the TF google-beta provider.
This comes with support for enabling the cluster autoscaling / node auto-provisioning beta in GKE.
However, when I try to use that feature, I get the following error:

  gcp:container:Cluster (master):
    error: gcp:container/cluster:Cluster resource 'master' has a problem: "cluster_autoscaling": this field cannot be set

This is a snippet of my cluster config:

export const cluster = new gcp.container.Cluster('master', {
  initialNodeCount: 1,
  minMasterVersion: gkeMasterVersion,
  clusterAutoscaling: {
    enabled: true,
    resourceLimits: [
      {
        resourceType: 'cpu',
        minimum: 1,
        maximum: 17
      },
      {
        resourceType: 'memory',
        minimum: 1,
        maximum: 70
      }
    ]
  },
  name: clusterName,
  removeDefaultNodePool: true,
  zone: primaryZone
});

This issue seems somewhat related to hashicorp/terraform-provider-google#2890; the remedy there was apparently to simply upgrade to the beta provider.

Attached disk prevents Instance from being replaced

The following program launches an instance with an attached disk:

let i = 0;
let d = new gcp.compute.Disk(runName + "-esdata" + i, {size: dataDiskSize, type: "pd-ssd", zone});
elasticsearchInstances.push(
    new gcp.compute.Instance(runName + "-elasticsearch-data" + i, {
        machineType: "n1-standard-1",
        zone,
        metadata: {"ssh-keys": sshKey},
        metadataStartupScript: esDataNodeStartupScript,
        bootDisk: {initializeParams: {image: machineImage}},
        attachedDisks: [{source: d}],
        networkInterfaces: [{
            network: computeNetwork.id,
            accessConfigs: [{}],
        }],
        scheduling: {automaticRestart: false, preemptible: isPreemptible},
        serviceAccount: {
            scopes: ["https://www.googleapis.com/auth/cloud-platform", "compute-rw"],
        },
        tags: [clusterName, runName],
    })
);

Running pulumi up completes fine, but updating some instance parameters (e.g. the startup script) requires replacing the machine on the next pulumi update. However, the following error is received:

error: Plan apply failed: Error creating instance: googleapi: Error 400: The disk resource 'esdata0-86abfa9' is already being used by 'elasticsearch-data0-5df5543', resourceInUseByAnotherResource

We should be able to replace an instance by detaching the disk from the existing instance and attaching it to the newly launched one.

`<Resource>.get(...)` requires resource id and name to be specified redundantly

In GCP, for most resources there is no separate notion of an ID and a name (unlike AWS, where resources often have a random ID and a separate deterministic name).

When using the .get method on GCP resources, pulumi requires the ID / name to be specified redundantly, and if one of them is omitted or does not match the other, the whole .get fails.

So far I can confirm this issue exists with GCS buckets and GKE clusters. Probably this issue exists with many more resource types.

Here's an example to reproduce the issue when using GCS buckets:

In the examples below I will assume that gsutil AND pulumi have already been configured to use the GCP project my-project (i.e. via pulumi config set gcp:project my-project). I will add some example code and paste the output (error or success) of a pulumi preview after each example.

Prerequisite: create GCS bucket via gsutil mb gs://pulumi-experiment-bucket

1. FAILURE with lookup by ID only:

import * as gcp from '@pulumi/gcp';

const myBucket = gcp.storage.Bucket.get(
  'my-bucket',
  'pulumi-experiment-bucket'
);

yields:

Previewing changes:

     Type                   Name                                            Plan       Info
 +   pulumi:pulumi:Stack    pulumi-query-experiments-christian-experiments  create
 >-  └─ gcp:storage:Bucket  my-bucket                                       read       1 error

Diagnostics:
  gcp:storage:Bucket: my-bucket
    error: Preview failed: refreshing urn:pulumi:christian-experiments::pulumi-query-experiments::gcp:storage/bucket:Bucket::my-bucket: Error reading Storage Bucket "": googleapi: Error 400: Required parameter: project, required

error: an error occurred while advancing the preview

Note: The error message says project is required, even though it is already set in the pulumi provider config, so this error message is at the very least misleading.

2. FAILURE when specifying non-matching ID and name:

import * as gcp from '@pulumi/gcp';

const myBucket = gcp.storage.Bucket.get(
  'my-bucket',
  'pulumi-experiment-bucket-foo',
  {name: 'pulumi-experiment-bucket'}
);

yields:

Previewing changes:

     Type                   Name                                            Plan       Info
 +   pulumi:pulumi:Stack    pulumi-query-experiments-christian-experiments  create
 >-  └─ gcp:storage:Bucket  my-bucket                                       read       1 error

Diagnostics:
  gcp:storage:Bucket: my-bucket
    error: Preview failed: reading resource urn:pulumi:christian-experiments::pulumi-query-experiments::gcp:storage/bucket:Bucket::my-bucket yielded an unexpected ID;expected pulumi-experiment-bucket-foo, got pulumi-experiment-bucket

error: an error occurred while advancing the preview

3. SUCCESS when specifying id and name redundantly

import * as gcp from '@pulumi/gcp';

const myBucket = gcp.storage.Bucket.get(
  'my-bucket',
  'pulumi-experiment-bucket',
  {name: 'pulumi-experiment-bucket'}
);

yields:

Previewing changes:

     Type                   Name                                            Plan       Info
 +   pulumi:pulumi:Stack    pulumi-query-experiments-christian-experiments  create
 >-  └─ gcp:storage:Bucket  my-bucket                                       read

info: 2 changes previewed:
    + 1 resource to create
    >-1 resource to read

Summary

So, in summary, importing / querying a resource this way only works when the same name and ID are specified redundantly; otherwise one gets rather confusing error messages.

This behaviour is confusing for the following 2 reasons:

  1. As an initial user one assumes that specifying an id like pulumi-experiment-bucket should be enough to uniquely identify the resource (i.e. the GCS bucket).
  2. The type signature of .get marks the third function parameter (i.e. BucketState) AND the name property as optional, even though they are in practice always required as demonstrated above.

For comparison: The equivalent terraform data source only requires the name property and no additional ID: https://www.terraform.io/docs/providers/google/r/storage_bucket.html#argument-reference

Desired behaviour

Only require ID or name param to lookup a resource.

Support managing GSuite Groups / port over terraform gsuite provider

GCP relies to a large extent on GSuite for authentication, and there are some things, like "Groups" and group memberships, that can only be managed via GSuite APIs. It is official best practice (i.e. recommended by Google and in the GCP docs) to grant GCP IAM roles to Google Groups instead of granting roles to users directly. Currently I'm managing groups and group memberships manually via the UI, but it would be awesome if I could do so via pulumi instead.
There is a terraform provider which supports that: https://github.com/DeviaVir/terraform-provider-gsuite.

Please port this provider over to pulumi (or optionally just include it in the GCP provider).

prior slack discussion: https://pulumi-community.slack.com/archives/C84L4E3N1/p1540975683234600

Assigning static internal ip to instances

Hi,

I am setting up a Cassandra cluster in GCP using pulumi. I would like to assign a static internal IP to each of the Cassandra nodes, so this is my script:

const services = new gcp.projects.Services("services", {   
//    project: 'apigee-csa-meetup-kong',
    services: ["compute.googleapis.com"],
});

const network = new gcp.compute.Network("kong-vpc", {
//    project: 'apigee-csa-meetup-kong',
    autoCreateSubnetworks: false
}, {
    dependsOn: [ services ]
});

const dataSubnetwork = new gcp.compute.Subnetwork("kong-data-subnet", {
    ipCidrRange: "10.2.1.0/24",
    network: network.selfLink
});

const cassandraNodeTemplate = new gcp.compute.InstanceTemplate("cassandra-node-template", {
    canIpForward: false,
    description: "This template is used for Cassandra nodes",
    disks: [
        {
            autoDelete: true,
            boot: true,
            sourceImage: "centos-cloud/centos-7",
            diskSizeGb: 100
        }
    ],
    instanceDescription: "Cassandra node",
    machineType: "n1-standard-4",
    networkInterfaces: [{
        subnetwork: dataSubnetwork.selfLink
    }],
    schedulings: [{
        automaticRestart: true,
        onHostMaintenance: "MIGRATE",
    }],
    tags: [
        "cassandra"
    ],
});

const cassandraNodeIps = [];
const cassandraNodes = [];

for (let i = 0; i < 3; i++) {
    cassandraNodeIps.push(new gcp.compute.Address(`cassandra-node-${i}-ip`, {
        address: "10.2.1." + (2 + i),
        addressType: "INTERNAL",
        subnetwork: dataSubnetwork.selfLink,
    }));
}

for (let i = 0; i < 3; i++) {
    cassandraNodes.push(new gcp.compute.InstanceFromTemplate(`cassandra-node-${i}`, {
        sourceInstanceTemplate: cassandraNodeTemplate.selfLink,
        zone: "europe-west3-a",
        networkInterfaces: [
            {
                subnetwork: dataSubnetwork.selfLink,
                // camelCase property; takes the reserved IP address itself
                networkIp: cassandraNodeIps[i].address
            }
        ],
        tags: [ "cassandra" ]
    }));
}

If I run pulumi up, then make a change to my script (for instance, adding metadataStartupScript) and rerun pulumi up, I always run into an error when it replaces the instances: error: Plan apply failed: IP '10.2.1.2' is already being used by another resource

Am I doing something wrong? How can I get around this?
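
One likely explanation: Pulumi's default replacement order is create-before-delete, so the replacement instance is brought up while the old instance still holds the reserved address. Forcing delete-before-replace on the instances (a sketch of the relevant change to the loop above) releases the IP before the new instance claims it:

for (let i = 0; i < 3; i++) {
    cassandraNodes.push(new gcp.compute.InstanceFromTemplate(`cassandra-node-${i}`, {
        // ...arguments as above...
    }, { deleteBeforeReplace: true }));
}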

GKE cluster being recreated occasionally with no changes

From a user on the Pulumi Community Slack:

Yesterday, I recreated a K8s cluster to use some new features from the latest version of pulumi/gcp. After recreating the cluster, I started to see the following error message regularly:

kubernetes:core:ConfigMap (api-config-map):
  warning: The provider for this resource has inputs that are not known during preview.
  This preview may not correctly represent the changes that will be applied during an update.

The error is inconsistent, and when it happens a new cluster is created and the previous one is marked to be deleted, which shouldn't be possible because it has the protect flag.

The k8s resources from the previous cluster are moved to the new cluster instantly, but they're not actually created in the new cluster. Also, to make any updates to the stack, the previous cluster needs to be deleted.

After downgrading this project back to version 17.1 of the pulumi/gcp package, I literally had the same issue.

Another cluster that is running this same pulumi/gcp version is working properly. Both clusters are using the same Pulumi version (17.2).

What it looks like is that Pulumi is having trouble getting information about the cluster, so it assumes the cluster has a specific configuration, ignoring the fact that it's protected and triggering unnecessary changes. Don't know if that's what's happening here.

I tried to create this same cluster about 10 times. I had issues in all of them.

Ah… they have something in common. All the new clusters that are having this issue are using one of the latest Kubernetes versions available in GCP (1.12.5-gke.5).

GKE Cluster replace before delete doesn't work with some config options

I changed some of the cluster settings, which caused a failure during pulumi up. The issue was that pulumi attempted a replace-before-delete for a cluster that explicitly specifies services and pods subnetworks. The update fails with:

Diagnostics:
  gcp:container:Cluster (default):
    error: Plan apply failed: Error waiting for creating GKE cluster: Retry budget exhausted (5 attempts): Services range "services" in network "default", subnetwork "europe-west1-b" is already used by another cluster.

As you can see, the replacement cluster will always fail, as it tries to use the secondary ranges already used by the existing cluster.

It seems that some cluster setups always need delete-before-replace.
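
A sketch of opting such a cluster into delete-before-replace, so the secondary ranges are released before the replacement cluster claims them:

const cluster = new gcp.container.Cluster("default", {
    // ...existing cluster arguments, including the services/pods ranges...
}, { deleteBeforeReplace: true });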

deleteBeforeReplace not being honored in Go - resources with persistent elements cannot be upgraded

Hello,

Trying to create an instance with a static IP, I'm afraid I've found that updating some parameters (like the startup script) becomes impossible, as the deleteBeforeReplace that is stated in the docs is not honored.

Given this code:

package main

import (
	"time"

	"github.com/pulumi/pulumi-gcp/sdk/go/gcp/compute"
	"github.com/pulumi/pulumi/sdk/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		const instancePrefix = "remove-me-when-upgrading"

		staticIP, err := compute.NewAddress(ctx, instancePrefix+"-ip", &compute.AddressArgs{})
		if err != nil {
			return err
		}

		instanceConfig := &compute.InstanceArgs{
			MachineType: "f1-micro",
			BootDisk: map[string]interface{}{
				"initializeParams": map[string]string{
					"image": "ubuntu-os-cloud/ubuntu-minimal-1804-bionic-v20190403",
				},
			},
			MetadataStartupScript:  "echo " + time.Now().String(),
			AllowStoppingForUpdate: true,
			NetworkInterfaces: []interface{}{
				map[string]interface{}{
					"network": "default",
					"accessConfigs": []map[string]interface{}{
						map[string]interface{}{
							"natIp": staticIP.Address(),
						},
					},
				},
			},
		}

		_, err = compute.NewInstance(ctx, instancePrefix, instanceConfig, pulumi.ResourceOpt{
			DeleteBeforeReplace: true,
		})
		return err
	})
}

First execution is successful:

Updating (dev):

     Type                     Name                         Status      
 +   pulumi:pulumi:Stack      pulumiExample-dev            created     
 +   β”œβ”€ gcp:compute:Address   remove-me-when-upgrading-ip  created     
 +   └─ gcp:compute:Instance  remove-me-when-upgrading     created     
 
Resources:
    + 3 created

Duration: 57s

But the second (that brings a change in the startup script) cannot be applied:

 $ pulumi up
Previewing update (dev):

     Type                     Name                      Plan        Info
     pulumi:pulumi:Stack      pulumiExample-dev                     
 +-  └─ gcp:compute:Instance  remove-me-when-upgrading  replace     [diff: ~metadataStartupScript,name]
 
Resources:
    +-1 to replace
    2 unchanged

Do you want to perform this update? yes
Updating (dev):

     Type                     Name                      Status                   Info
     pulumi:pulumi:Stack      pulumiExample-dev         **failed**               1 error
 +-  └─ gcp:compute:Instance  remove-me-when-upgrading  **replacing failed**     [diff: ~metadataStartupScript,name]; 1 error
 
Diagnostics:
  pulumi:pulumi:Stack (pulumiExample-dev):
    error: update failed
 
  gcp:compute:Instance (remove-me-when-upgrading):
    error: Plan apply failed: Error creating instance: googleapi: Error 400: Invalid resource usage: 'External IP address: 35.205.71.221 is already in-use.'., invalidResourceUsage
 
Resources:
    2 unchanged

Duration: 3s

Changing the value of DeleteBeforeReplace does not affect the execution plan, while the expectation is that the instance is removed first, so that the new one can be attached to the original static IP.

Execution environment:

  • Sample source code: gist
  • Client version:
$ pulumi version
v0.17.4
  • Plugin version:
$ pulumi plugin ls
NAME  KIND      VERSION  SIZE   INSTALLED  LAST USED
gcp   resource  0.18.2   64 MB  n/a        15 hours ago
gcp   resource  0.16.8   61 MB  n/a        15 hours ago
  • Go version:
$ go version
go version go1.12.2 linux/amd64

project: required field is not set

@lukehoban I am new to Pulumi and I have been trying to play around with it, but nothing seems to work. What am I doing wrong here? I just wanted to create a bucket, but I keep getting the error project: required field is not set. I even tried setting gcp.config.project=PROJECT_NAME, but even that didn't work.

How can I debug issues like these?

const gcp = require('@pulumi/gcp');

const bucket = new gcp.storage.Bucket('pulumi-demo');

// Stack exports
exports.bucketName = bucket.bucket;

Diagnostics:
  gcp:storage:Bucket: pulumi-demo
    error: Plan apply failed: creating urn:pulumi:pulumi-demo-dev::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo: project: required field is not set

    error: update failed

GCP Bringup Tracking Issue

This meta-issue tracks the paper cuts that I run into. I'll split them out into bugs once we stabilize this repo a little bit. I'll be editing this issue continuously as I go.

  • Get the provider building with dep so we can ingest it in this repo pulumi/terraform-provider-google#2
  • Getting authentication right is not straightforward and is going to require a lot of docs.
  • Pulumi CLI is occasionally slow to shutdown whenever an error occurs (blocks for 30 seconds)
  • Examples can't enable Services due to hashicorp/terraform-provider-google#1579
  • pulumi/pulumi-terraform#40 - exposing a compute instance to the Internet requires you to give an empty map to a certain property, which triggers this bug
