pulumi / pulumi-gcp
A Google Cloud Platform (GCP) Pulumi resource package, providing multi-language access to GCP
License: Apache License 2.0
While setting up Network Peering we ran into some issues with the peering connection configuration. We are not able to enable the "Exchange custom routes" configuration:
I believe it is the `PeeringNetwork` object in the `addPeering` request: https://cloud.google.com/compute/docs/reference/rest/v1/networks/addPeering

Currently, `serverless.Function` apparently only supports HTTP as a trigger. I'd like to use a Pub/Sub topic as the trigger. This should be easily possible by passing `triggerTopic` here: https://github.com/pulumi/pulumi-gcp/blob/master/overlays/nodejs/serverless/function.ts#L114
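Until `triggerTopic` is plumbed through, the change mostly amounts to conditionally building the trigger portion of the CloudFunction args. A rough sketch of that selection logic (the option names here are illustrative, not the actual overlay API):

```typescript
// Hypothetical sketch: choose the CloudFunction trigger block based on
// which option the caller supplied. Names are illustrative only.
interface TriggerOptions {
  triggerHttp?: boolean;
  triggerTopic?: string; // Pub/Sub topic name
}

function buildTriggerArgs(opts: TriggerOptions): Record<string, unknown> {
  if (opts.triggerTopic !== undefined) {
    // A topic trigger and an HTTP trigger are mutually exclusive.
    if (opts.triggerHttp) {
      throw new Error("cannot set both triggerHttp and triggerTopic");
    }
    return { triggerTopic: opts.triggerTopic };
  }
  // Default to HTTP, matching the current overlay behavior.
  return { triggerHttp: true };
}
```

The overlay could then spread the result of `buildTriggerArgs` into the underlying `cloudfunctions.Function` args.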
The following program launches an instance with an attached disk:
let i = 0;
let d = new gcp.compute.Disk(runName + "-esdata" + i, {size: dataDiskSize, type: "pd-ssd", zone});
elasticsearchInstances.push(
new gcp.compute.Instance(runName + "-elasticsearch-data" + i, {
machineType: "n1-standard-1",
zone,
metadata: {"ssh-keys": sshKey},
metadataStartupScript: esDataNodeStartupScript,
bootDisk: {initializeParams: {image: machineImage}},
attachedDisks: [{source: d}],
networkInterfaces: [{
network: computeNetwork.id,
accessConfigs: [{}],
}],
scheduling: {automaticRestart: false, preemptible: isPreemptible},
serviceAccount: {
scopes: ["https://www.googleapis.com/auth/cloud-platform", "compute-rw"],
},
tags: [clusterName, runName],
}
)
);
Running `pulumi up` completes fine, but updating some instance parameters (e.g. the startup script) requires replacing the machine on the next `pulumi update`. However, the following error is received:
error: Plan apply failed: Error creating instance: googleapi: Error 400: The disk resource 'esdata0-86abfa9' is already being used by 'elasticsearch-data0-5df5543', resourceInUseByAnotherResource
We should be able to replace an instance by detaching the disk from the existing instance and attaching it to the newly launched instance.
Similar to what we did in AWS here: pulumi/pulumi-aws#365
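The failure mode can be modeled as an ordering problem: attaching a disk that is still attached elsewhere fails, so the engine must detach before the replacement instance is created. A minimal simulation of the two orderings (illustrative only, not engine code; resource names are taken from the error above):

```typescript
// Illustrative model: a disk may be attached to at most one instance.
// Creating the replacement instance before detaching the disk from the
// old one reproduces the resourceInUseByAnotherResource error.
class DiskTracker {
  private attachedTo = new Map<string, string>(); // disk -> instance

  attach(disk: string, instance: string): void {
    const owner = this.attachedTo.get(disk);
    if (owner !== undefined && owner !== instance) {
      throw new Error(`disk '${disk}' is already being used by '${owner}'`);
    }
    this.attachedTo.set(disk, instance);
  }

  detach(disk: string): void {
    this.attachedTo.delete(disk);
  }
}

// Create-before-detach (current behavior): the new attach fails.
function createBeforeDetach(t: DiskTracker): boolean {
  t.attach("esdata0", "elasticsearch-data0");
  try {
    t.attach("esdata0", "elasticsearch-data0-new"); // old attachment still live
    return true;
  } catch {
    return false;
  }
}

// Detach-then-attach (desired behavior): succeeds.
function detachThenAttach(t: DiskTracker): boolean {
  t.attach("esdata0", "elasticsearch-data0");
  t.detach("esdata0");
  t.attach("esdata0", "elasticsearch-data0-new");
  return true;
}
```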
This would expose Google's Cloud SDK for Node.js: https://github.com/googleapis/google-cloud-node
If I neglect to set my project, region, or zone configuration, I get an unhelpful error message:
error: Plan apply failed: project: required field is not set
Given that this is almost guaranteed to happen for every new user of Pulumi on GCP, we should try to make this experience better. IIRC, we already have a trick in the AWS package to hook errors like this and prettify them before displaying them to the end user.
We have a bunch of GKE clusters. I was planning to upgrade one of their masters by bumping `minMasterVersion`. Unfortunately, when doing so, `pulumi preview` says it will replace the entire cluster instead of performing an update. I'm not sure if this is just an issue with `preview`, but I'm pretty sure bumping the master version shouldn't replace the entire cluster, and replacing a cluster is a pretty scary operation. As an intermediate workaround, I upgraded the cluster via GCP's UI and left the pulumi `minMasterVersion` param untouched.
Example:
existing cluster with:
export const cluster = new gcp.container.Cluster('api-cluster', {
name: 'foo',
initialNodeCount: 1,
minMasterVersion: '1.10.6-gke.1',
});
When changing `minMasterVersion` to `1.10.12-gke.1`, `pulumi preview` shows the following:
Previewing update (acme/api-cluster-prod-b):
Type Name Plan Info
pulumi:pulumi:Stack api-cluster-prod-b-api-cluster-prod-b
+- ├─ gcp:container:Cluster api-cluster replace [diff: ~minMasterVersion]
Hello,

Trying to create an instance with a static IP, I'm afraid I've found that updating some parameters (like the startup script) becomes impossible, as the `DeleteBeforeReplace` option stated in the docs is not honored.

Given this code:
package main

import (
	"time"

	"github.com/pulumi/pulumi-gcp/sdk/go/gcp/compute"
	"github.com/pulumi/pulumi/sdk/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		const instancePrefix = "remove-me-when-upgrading"
		staticIP, err := compute.NewAddress(ctx, instancePrefix+"-ip", &compute.AddressArgs{})
		if err != nil {
			return err
		}
		instanceConfig := &compute.InstanceArgs{
			MachineType: "f1-micro",
			BootDisk: map[string]interface{}{
				"initializeParams": map[string]string{
					"image": "ubuntu-os-cloud/ubuntu-minimal-1804-bionic-v20190403",
				},
			},
			MetadataStartupScript:  "echo " + time.Now().String(),
			AllowStoppingForUpdate: true,
			NetworkInterfaces: []interface{}{
				map[string]interface{}{
					"network": "default",
					"accessConfigs": []map[string]interface{}{
						map[string]interface{}{
							"natIp": staticIP.Address(),
						},
					},
				},
			},
		}
		_, err = compute.NewInstance(ctx, instancePrefix, instanceConfig, pulumi.ResourceOpt{
			DeleteBeforeReplace: true,
		})
		return err
	})
}
First execution is successful:
Updating (dev):
Type Name Status
+ pulumi:pulumi:Stack pulumiExample-dev created
+ ├─ gcp:compute:Address remove-me-when-upgrading-ip created
+ └─ gcp:compute:Instance remove-me-when-upgrading created
Resources:
+ 3 created
Duration: 57s
But the second run (which brings a change to the startup script) cannot be applied:
$ pulumi up
Previewing update (dev):
Type Name Plan Info
pulumi:pulumi:Stack pulumiExample-dev
+- └─ gcp:compute:Instance remove-me-when-upgrading replace [diff: ~metadataStartupScript,name]
Resources:
+-1 to replace
2 unchanged
Do you want to perform this update? yes
Updating (dev):
Type Name Status Info
pulumi:pulumi:Stack pulumiExample-dev **failed** 1 error
+- └─ gcp:compute:Instance remove-me-when-upgrading **replacing failed** [diff: ~metadataStartupScript,name]; 1 error
Diagnostics:
pulumi:pulumi:Stack (pulumiExample-dev):
error: update failed
gcp:compute:Instance (remove-me-when-upgrading):
error: Plan apply failed: Error creating instance: googleapi: Error 400: Invalid resource usage: 'External IP address: 35.205.71.221 is already in-use.'., invalidResourceUsage
Resources:
2 unchanged
Duration: 3s
Changing the value of `DeleteBeforeReplace` does not affect the execution plan, while the expectation is that the instance is removed first, so the new one can be attached to the original static IP.
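What `DeleteBeforeReplace` should change is only the order of the two replacement steps; the error above indicates the engine follows the create-first ordering even when the option is set. A toy planner illustrating the two orderings (illustrative, not actual engine code):

```typescript
// Toy model of a replacement plan. With deleteBeforeReplace the delete
// step must precede the create step; otherwise the new instance tries to
// claim the static IP while the old instance still holds it.
function replacementSteps(deleteBeforeReplace: boolean): string[] {
  return deleteBeforeReplace
    ? ["delete old instance", "create new instance"]
    : ["create new instance", "delete old instance"];
}
```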
Execution environment:
$ pulumi version
v0.17.4
$ pulumi plugin ls
NAME KIND VERSION SIZE INSTALLED LAST USED
gcp resource 0.18.2 64 MB n/a 15 hours ago
gcp resource 0.16.8 61 MB n/a 15 hours ago
$ go version
go version go1.12.2 linux/amd64
This does not work:
gcp.container.Cluster.get("<cluster-name>", "<cluster-name>", {
project: "<project-name>"
});
but this does:
gcp.container.Cluster.get("<cluster-name>", "<cluster-name>", {
name: "<cluster-name>",
project: "<project-name>"
});
This is confusing -- why is `name` required? Is this the desired behavior?
The Pulumi fork of terraform-provider-google obliterates the vendor tree and re-vendors based on the latest iterations of the dependencies, without any constraints. This diverges quite a bit from the released version and could potentially be a problem when merging back and/or integrating further changes from master. We need to think through the best way to address this.
You can set `GOOGLE_PROJECT`, `GOOGLE_REGION`, and `GOOGLE_ZONE` to do this, but it would be great if we could do this via config. Many GCP resources require these three values, and it is cumbersome to pass them around everywhere in a GCP program.
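The desired lookup order can be sketched as: prefer a stack config value (e.g. set via `pulumi config set gcp:project ...`), falling back to the corresponding environment variable. A pure sketch of that precedence (the real provider would read `pulumi.Config`; the plain records here are stand-ins):

```typescript
// Illustrative precedence: stack config first, then environment variable.
// `config` stands in for values set via `pulumi config set gcp:<key> ...`.
function resolveSetting(
  key: "project" | "region" | "zone",
  config: Record<string, string | undefined>,
  env: Record<string, string | undefined>,
): string | undefined {
  const envNames = {
    project: "GOOGLE_PROJECT",
    region: "GOOGLE_REGION",
    zone: "GOOGLE_ZONE",
  } as const;
  return config[`gcp:${key}`] ?? env[envNames[key]];
}
```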
This meta-issue tracks the paper cuts that I run into. I'll split them out into bugs once we stabilize this repo a little bit. I'll be editing this issue continuously as I go.
`dep`, so we can ingest it in this repo: pulumi/terraform-provider-google#2

The following code:
let bucket = new gcp.storage.Bucket("bucket", {});
let data = new gcp.storage.BucketObject("object", {
bucket: bucket.name,
source: new pulumi.asset.AssetArchive({
".": new pulumi.asset.FileArchive("./javascript"),
}),
});
fails to run under `pulumi update`:
Previewing changes:
Type Name Plan Info
+ pulumi:pulumi:Stack gcp-blablabla-gcp-blablabla create 1 error
+ └─ gcp:storage:Bucket bucket create
Diagnostics:
pulumi:pulumi:Stack: gcp-blablabla-gcp-blablabla
error: unexpected archive source
error: an error occurred while advancing the preview
When trying to bring up a stack that included a `gcp.sql.DatabaseInstance`, I got this error:
Plan apply failed: Error, failed to create instance unleash: googleapi: Error 403: Access Not Configured. Cloud SQL Admin API has not been used in project 563584335869 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview?project=563584335869 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry., accessNotConfigured
I do have sqladmin.googleapis.com enabled, but 563584335869 is not my project.
This warning is always presented when using `gcp.compute.Instance`, even when the `createTimeout` property is not provided.
Diagnostics:
global: global
warning: urn:pulumi:luke-serverless-gcp::serverless-gcp::gcp:compute/instance:Instance::www-compute verification warning: "create_timeout": [DEPRECATED] Use timeouts block instead.
I just upgraded to [email protected], which is supposedly based on the tf google beta provider. This comes with support for enabling the cluster autoscaling / node auto-provisioning beta in GKE. However, when I try to use that feature, I'm getting the following error:
gcp:container:Cluster (master):
error: gcp:container/cluster:Cluster resource 'master' has a problem: "cluster_autoscaling": this field cannot be set
This is a snippet of my cluster config:
export const cluster = new gcp.container.Cluster('master', {
initialNodeCount: 1,
minMasterVersion: gkeMasterVersion,
clusterAutoscaling: {
enabled: true,
resourceLimits: [
{
resourceType: 'cpu',
minimum: 1,
maximum: 17
},
{
resourceType: 'memory',
minimum: 1,
maximum: 70
}
]
},
name: clusterName,
removeDefaultNodePool: true,
zone: primaryZone
});
This issue seems somewhat related to hashicorp/terraform-provider-google#2890; the remedy there was apparently to simply upgrade to the beta provider.
It recently came to my attention that Google is apparently planning to branch the google terraform provider into two flavors very soon (the 2.0.0 release; see https://www.terraform.io/docs/providers/google/provider_versions.html): one regular (stable) provider and one beta provider. The difference between the two is that the regular one only supports GCP APIs marked as stable, whereas the beta one also supports beta APIs and features.

Since Google has a philosophy of keeping things in beta for quite a long time, the 1.x.x provider contains a bunch of beta features that are already used by customers in production (personally, we use GKE nodepool taints, which are considered beta by Google and will be removed in the future provider version).

While this is currently not an issue (since pulumi adopts the 1.x.x version of the terraform provider), I think it may become an issue in the near future, and I wanted to bring it to pulumi's attention early enough. I'm not sure what the right solution is, but it certainly would be good if pulumi customers had the choice to use only stable or beta google APIs, just like they would when using terraform. This may require creating two pulumi gcp flavors as well.

On another note: Google recently moved to auto-generating most of the google terraform provider via a meta module called 'magic-modules': https://github.com/GoogleCloudPlatform/magic-modules/. As I understand it, the idea of magic-modules is to auto-generate SDKs for different languages and platforms (currently terraform, ansible, puppet, chef). Mid-term, this might be an opportunity for pulumi to auto-generate the pulumi gcp module directly from magic-modules instead of from the terraform provider.
Tracking bug for me to make this happen.
GCP relies to a large extent on GSuite for authentication and there are some things like “Groups” and Group Memberships that can only be managed via GSuite APIs. It is official best practice (i.e. recommended by google and in the GCP docs) to grant GCP IAM roles to Google Groups instead of granting roles to users directly. Currently I’m managing groups and group memberships manually via the UI, but it would be awesome if I could do so via pulumi instead.
There is a terraform provider which supports that: https://github.com/DeviaVir/terraform-provider-gsuite.
Please port this provider over to pulumi (or optionally just include it in the GCP provider).
prior slack discussion: https://pulumi-community.slack.com/archives/C84L4E3N1/p1540975683234600
I'm trying to create several clusters with the same name in different regions within the same stack, but it's impossible (e.g. identical clusters named `prod` in regions `europe-west3` and `us-east1`).

I understand that the cluster name is used by terraform to generate the actual cluster name (with the random string at the end) if you don't specify `name` in the args. The problem is that the cluster name is used as-is to build the URN, so you can't have two clusters with the same name, even though they're in different regions/projects and that's perfectly valid (even in terraform).

Do you have plans to make names unique by default, so it's not up to the user to come up with one? Here, you could use the combination of project id, region/zone and cluster name to ensure uniqueness.
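Until URNs incorporate more context, one workaround is for user code to compose the Pulumi resource name from the distinguishing fields itself. A small helper sketch (assumption: project and location are enough to disambiguate; the GCP-side name can still be fixed via the `name` arg):

```typescript
// Compose a unique Pulumi resource name from project, location and the
// desired cluster name, so two "prod" clusters in different regions get
// distinct URNs even within the same stack.
function clusterResourceName(project: string, location: string, name: string): string {
  return [project, location, name].join("-");
}
```

Usage would look like `new gcp.container.Cluster(clusterResourceName(project, 'europe-west3', 'prod'), { name: 'prod', ... })`, keeping the cloud-side name identical across regions.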
In GCP, for most resources there is no separate notion of an ID and a name (unlike AWS, where resources often have a random ID and a separate deterministic name). When using the `.get` method on GCP resources, pulumi requires the ID / name to be specified redundantly, and if one of them is omitted or does not match the other, the whole `.get` fails.

So far I can confirm this issue exists with GCS buckets and GKE clusters. It probably exists for many more resource types.

Here's an example to reproduce the issue with GCS buckets. In the examples below I assume that gsutil AND pulumi have already been configured to use the gcp project `my-project` (i.e. via `pulumi config set gcp:project my-project`). I will add some example code and paste the output (error or success) after a `pulumi preview` of the example code.

Prerequisite: create a GCS bucket via `gsutil mb gs://pulumi-experiment-bucket`
import * as gcp from '@pulumi/gcp';
const myBucket = gcp.storage.Bucket.get(
'my-bucket',
'pulumi-experiment-bucket'
);
yields:
Previewing changes:
Type Name Plan Info
+ pulumi:pulumi:Stack pulumi-query-experiments-christian-experiments create
>- └─ gcp:storage:Bucket my-bucket read 1 error
Diagnostics:
gcp:storage:Bucket: my-bucket
error: Preview failed: refreshing urn:pulumi:christian-experiments::pulumi-query-experiments::gcp:storage/bucket:Bucket::my-bucket: Error reading Storage Bucket "": googleapi: Error 400: Required parameter: project, required
error: an error occurred while advancing the preview
Note: The error message says `project` is required, even though it is already set in the pulumi provider config, so this error message is at the very least misleading.
import * as gcp from '@pulumi/gcp';
const myBucket = gcp.storage.Bucket.get(
'my-bucket',
'pulumi-experiment-bucket-foo',
{name: 'pulumi-experiment-bucket'}
);
yields:
Previewing changes:
Type Name Plan Info
+ pulumi:pulumi:Stack pulumi-query-experiments-christian-experiments create
>- └─ gcp:storage:Bucket my-bucket read 1 error
Diagnostics:
gcp:storage:Bucket: my-bucket
error: Preview failed: reading resource urn:pulumi:christian-experiments::pulumi-query-experiments::gcp:storage/bucket:Bucket::my-bucket yielded an unexpected ID;expected pulumi-experiment-bucket-foo, got pulumi-experiment-bucket
error: an error occurred while advancing the preview
import * as gcp from '@pulumi/gcp';
const myBucket = gcp.storage.Bucket.get(
'my-bucket',
'pulumi-experiment-bucket',
{name: 'pulumi-experiment-bucket'}
);
yields:
Previewing changes:
Type Name Plan Info
+ pulumi:pulumi:Stack pulumi-query-experiments-christian-experiments create
>- └─ gcp:storage:Bucket my-bucket read
info: 2 changes previewed:
+ 1 resource to create
>-1 resource to read
So in summary, importing / querying a resource this way only works when the same name and id are specified redundantly; otherwise one gets rather confusing error messages.

- An `id` like `pulumi-experiment-bucket` should be enough to uniquely identify the resource (i.e. the GCS bucket).
- `.get` marks the third function parameter (i.e. BucketState) AND the `name` property as optional, even though in practice they are always required, as demonstrated above.

For comparison: the equivalent terraform data source only requires the `name` property and no additional ID: https://www.terraform.io/docs/providers/google/r/storage_bucket.html#argument-reference
Only require ID or name param to lookup a resource.
Edit: There is a way to do this using a custom provider, as described in the comments below.
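In the meantime, the redundancy can be hidden behind a tiny wrapper that derives the `name` state from the id. This is a sketch, and it assumes the resource's name equals its id, as it does for GCS buckets; the helper is hypothetical, not part of @pulumi/gcp:

```typescript
// For GCS buckets the ID and the name are the same string, so a single
// identifier is enough to build both arguments that .get currently needs.
function bucketGetArgs(id: string): { id: string; state: { name: string } } {
  return { id, state: { name: id } };
}
```

The pieces would then be passed along as `gcp.storage.Bucket.get('my-bucket', args.id, args.state)`.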
When creating a project-heterogeneous stack (for simplified IAM or other reasons), it would be nice to always be able to specify a `project` on resources. Not every resource supports this, sometimes in very weird ways, for example:
const serviceAccount = new gcp.serviceAccount.Account('sa', {
project: gcpProject.projectId,
accountId: 'some-account-id',
displayName: 'some-display-name',
});
const token = new gcp.serviceAccount.Key('sa-key', {
// The next line is a type error, and we cannot specify a project:
project: gcpProject.projectId,
serviceAccountId: serviceAccount.accountId,
});
This means we can create multiple projects and multiple service accounts in a single stack, but all service account keys must be created in the project solely defined by the `gcp:project` configuration value. Consequently, we must use one stack (and one config file) per GCP project when using some resources but not others.
Discussion on this matter with @clstokes on slack here : https://pulumi-community.slack.com/archives/C84L4E3N1/p1548693619818500
When creating an object with the `source` parameter on a `gcp.storage.BucketObject`, pulumi (or the provider) does not detect a change to the local file and thus does not update the related bucket object.
new gcp.storage.BucketObject('my-object', {
bucket: 'bucket-name',
name: 'my-object',
source: 'absolute/path/to/file.ext',
});
- `pulumi up` succeeds in creating the object in the bucket
- change the local file
- `pulumi up` again -> no change detected

Right now here (as in #134) we see forks fail to build, as the secret material is not available to decrypt and `openssl` fails early on.
When I create this role:
export const testCiRole = new gcp.projects.IAMCustomRole(config.appName, {
roleId: "test-ci",
title: "Test CI role",
project: config.project,
permissions: [...]
})
I get this error:
error: Plan apply failed: Unable to verify whether custom project role projects/pulumi-development/roles/test-ci already exists and must be undeleted: Error reading Custom Project Role "projects/pulumi-development/roles/test-ci": googleapi: Error 400: The role name must be in the form "roles/{role}", "organizations/{organization_id}/roles/{role}", or "projects/{project_id}/roles/{role}"., badRequest
The problem is actually that there is a `-` in the `roleId`. If you use `testci` instead, it works.
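Per the IAM docs, custom role IDs may contain only letters, digits, underscores and periods, and must be 3-64 characters; the hyphen is what trips the API. The provider (or user code) could sanitize the value up front. A sanitizer sketch (hypothetical helper, with those documented constraints as assumptions):

```typescript
// Replace characters that are invalid in an IAM custom role ID and
// enforce the documented 3-64 character length. Custom role IDs may
// contain only [a-zA-Z0-9_.], so "test-ci" becomes "test_ci".
function sanitizeRoleId(id: string): string {
  const cleaned = id.replace(/[^a-zA-Z0-9_.]/g, "_");
  if (cleaned.length < 3 || cleaned.length > 64) {
    throw new Error(`role id '${cleaned}' must be 3-64 characters`);
  }
  return cleaned;
}
```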
At the moment, the bucket location of a newly created CloudFunction is not configurable; it therefore falls back to `us-central1`. It should be possible to pass all options to the bucket creation process.
If I do:
bucket.onObjectFinalized("newObject", async (data) => {
console.log(data);
});
I get:
error: Plan apply failed: googleapi: Error 400: Invalid bucket name: 'newObject-f691b50', invalid
It turns out these names can only include lowercase letters (among other restrictions; see https://cloud.google.com/storage/docs/naming), though the error message unfortunately makes this unclear.
We may need to lowercase the name we pass along on the users' behalf here, and may also want to enforce the 63 character limit on the underlying Bucket name so that our addition of some random hex doesn't accidentally trigger issues.
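The mapping could lowercase the logical name and budget for the random suffix inside the 63-character limit. A sketch of such a sanitizer (the 7-character hex suffix mirrors the `-f691b50` style suffix in the error above; bucket names have further restrictions this sketch ignores):

```typescript
// Derive a legal GCS bucket name from a logical resource name: bucket
// names must be lowercase and (for non-dotted names) at most 63 chars.
// `suffix` stands in for the random hex Pulumi appends (e.g. "f691b50").
function toBucketName(logicalName: string, suffix: string): string {
  const maxLen = 63;
  const base = logicalName
    .toLowerCase()
    .replace(/[^a-z0-9-]/g, "-") // sketch only; more rules exist
    .slice(0, maxLen - suffix.length - 1); // leave room for "-" + suffix
  return `${base}-${suffix}`;
}
```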
Hi,

I am setting up a Cassandra cluster in GCP using pulumi. I would like to assign a static internal IP to each of the Cassandra nodes, so this is my script:
const services = new gcp.projects.Services("services", {
// project: 'apigee-csa-meetup-kong',
services: ["compute.googleapis.com"],
});
const network = new gcp.compute.Network("kong-vpc", {
// project: 'apigee-csa-meetup-kong',
autoCreateSubnetworks: false
}, {
dependsOn: [ services ]
});
const dataSubnetwork = new gcp.compute.Subnetwork("kong-data-subnet", {
ipCidrRange: "10.2.1.0/24",
network: network.selfLink
});
const cassandraNodeTemplate = new gcp.compute.InstanceTemplate("cassandra-node-template", {
canIpForward: false,
description: "This template is used for Cassandra nodes",
disks: [
{
autoDelete: true,
boot: true,
sourceImage: "centos-cloud/centos-7",
diskSizeGb: 100
}
],
instanceDescription: "Cassandra node",
machineType: "n1-standard-4",
networkInterfaces: [{
subnetwork: dataSubnetwork.selfLink
}],
schedulings: [{
automaticRestart: true,
onHostMaintenance: "MIGRATE",
}],
tags: [
"cassandra"
],
});
const cassandraNodeIps = [];
const cassandraNodes = [];
for (let i = 0; i < 3; i++) {
cassandraNodeIps.push(new gcp.compute.Address(`cassandra-node-${i}-ip`, {
address: "10.2.1." + (2 + i),
addressType: "INTERNAL",
subnetwork: dataSubnetwork.selfLink,
}));
}
for (let i = 0; i < 3; i++) {
cassandraNodes.push(new gcp.compute.InstanceFromTemplate(`cassandra-node-${i}`, {
sourceInstanceTemplate: cassandraNodeTemplate.selfLink,
zone: "europe-west3-a",
networkInterfaces: [
{
subnetwork: dataSubnetwork.selfLink,
networkIp: cassandraNodeIps[i].selfLink
}
],
tags: [ "cassandra" ]
});
}
If I run `pulumi up`, then make a change to my script (for instance adding `metadataStartupScript`) and rerun `pulumi up`, I always run into an error while it is replacing the instances: `error: Plan apply failed: IP '10.2.1.2' is already being used by another resource`.

Am I doing something wrong? How can I get around this?
@clstokes pointed me to a page on the Pulumi website which neatly documents how to configure the provider - we should consider adding a link to the README in this repository.
A private key shows up in plain text in Travis logs when doing a build of this repo. We should deactivate that key and roll our credentials before launch.
I changed some cluster settings, which caused a failure during `pulumi apply`. The issue was that pulumi attempted to replace before deleting for a cluster that explicitly specified services and pods subnetworks. The update fails with:
Diagnostics:
gcp:container:Cluster (default):
error: Plan apply failed: Error waiting for creating GKE cluster: Retry budget exhausted (5 attempts): Services range "services" in network "default", subnetwork "europe-west1-b" is already used by another cluster.
As you can see, the replacement cluster will always fail, as it's trying to use the subnetwork already used by the existing cluster. It seems that some cluster setups always need delete-before-replace.
@pulumi/[email protected]
export const k8sCluster = new gcp.container.Cluster('gke-cluster', {
initialNodeCount: 1,
nodeVersion: 'latest',
minMasterVersion: 'latest',
nodeConfig: {
machineType: 'n1-standard-1',
oauthScopes: [
'https://www.googleapis.com/auth/compute',
'https://www.googleapis.com/auth/devstorage.read_only',
'https://www.googleapis.com/auth/logging.write',
'https://www.googleapis.com/auth/monitoring',
],
},
// regional cluster
location: 'us-central1',
nodeLocations: ['us-central1-f', 'us-central1-b'],
})
gcp:container:Cluster (gke-cluster):
error: gcp:container/cluster:Cluster resource 'gke-cluster' has a problem: : invalid or unknown key: location
error: gcp:container/cluster:Cluster resource 'gke-cluster' has a problem: : invalid or unknown key: node_locations
This should be possible based on #119
Pulumi crashes when attempting to import a NodePool via `gcp.container.NodePool.get`.
Code to reproduce:
const defaultPool = gcp.container.NodePool.get('default-pool', 'default-pool', {
    cluster: 'my-cluster',
    name: 'default-pool',
    zone: 'us-west1'
});
crash logs
panic: runtime error: index out of range
goroutine 91 [running]:
github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google.getNodePoolName(0xc420956030, 0xc, 0xc420728810, 0x0)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google/resource_container_node_pool.go:749 +0x7b
github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google.resourceContainerNodePoolExists(0xc42013c4d0, 0x27b8360, 0xc42008c160, 0xc42013c4d0, 0x0, 0x0)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/terraform-providers/terraform-provider-google/google/resource_container_node_pool.go:393 +0xc0
github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema.(*Resource).Refresh(0xc4203b0770, 0xc420912050, 0x27b8360, 0xc42008c160, 0xc4203f04a8, 0x1, 0x2a137a0)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema/resource.go:329 +0x36f
github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema.(*Provider).Refresh(0xc4203e4460, 0xc42090d5c0, 0xc420912050, 0xc4203e1ef0, 0x0, 0xc420764000)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/hashicorp/terraform/helper/schema/provider.go:308 +0x9a
github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi-terraform/pkg/tfbridge.(*Provider).Read(0xc420351680, 0x38f4620, 0xc4207281e0, 0xc420912000, 0xc420351680, 0x1, 0x1)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi-terraform/pkg/tfbridge/provider.go:517 +0x5f5
github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler.func1(0x38f4620, 0xc4207281e0, 0x2a70960, 0xc420912000, 0x38f4620, 0xc4207281e0, 0x38faac0, 0x39a9b10)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1247 +0x86
github.com/pulumi/pulumi-gcp/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x38f4620, 0xc4207281e0, 0x2a70960, 0xc420912000, 0xc42094a0a0, 0xc42094a0c0, 0x0, 0x0, 0x38dfde0, 0xc420427580)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:57 +0x2d7
github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler(0x2b4b4c0, 0xc420351680, 0x38f4620, 0xc420728060, 0xc42013c070, 0xc42043a700, 0x0, 0x0, 0xc420558a50, 0x1261747)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1249 +0x16d
github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4201f01c0, 0x38f9960, 0xc420162000, 0xc42034e690, 0xc4205650b0, 0x397b1f8, 0x0, 0x0, 0x0)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:1011 +0x50b
github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4201f01c0, 0x38f9960, 0xc420162000, 0xc42034e690, 0x0)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:1249 +0x1528
github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc420482050, 0xc4201f01c0, 0x38f9960, 0xc420162000, 0xc42034e690)
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:680 +0x9f
created by github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/home/travis/gopath/src/github.com/pulumi/pulumi-gcp/vendor/google.golang.org/grpc/server.go:678 +0xa1
error: Running program '/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b' failed with an unhandled exception:
error: Error: invocation of gcp:container/getCluster:getCluster returned an error: transport is closing
at monitor.invoke (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/@solvvy/pulumi-util/node_modules/@pulumi/pulumi/runtime/invoke.js:72:33)
at Object.onReceiveStatus (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:1189:9)
at InterceptingListener._callNext (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/Users/christian/dev/infrastructure-live/gcp-solvvy-dev/api-cluster-dev-b/node_modules/grpc/src/client_interceptors.js:841:24)
gcp:container:NodePool (default-pool):
error: Plan apply failed: transport is closing
gcp:container:Cluster (api-cluster):
error: Plan apply failed: all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :64549: connect: connection refused"
versions:
"@pulumi/gcp": "0.16.1",
"@pulumi/kubernetes": "0.17.3",
"@pulumi/pulumi": "0.16.1"
pulumi CLI: 0.16.1
Somehow pulumi fails while attempting to move a project into a folder.
Versions:
pulumi CLI: v0.17.1
pulumi-gcp: 0.17.1
Runtime: nodejs
Error message:
* updating urn:pulumi:gcp-project-redacted::gcp-project-redacted::gcp:organizations/project:Project::redacted: 'org_id' and 'folder_id' cannot be both set.
Reproduction:
import * as gcp from '@pulumi/gcp';
const GCP_ORG_ID = 'foo';
const GCP_BILLING_ACCOUNT_ID = 'bar';
new gcp.organizations.Project(
'some-project-123',
{
name: 'some-project-123',
projectId: 'some-project-123',
billingAccount: GCP_BILLING_ACCOUNT_ID,
orgId: GCP_ORG_ID
}
);
import * as gcp from '@pulumi/gcp';
const GCP_ORG_ID = 'foo';
const GCP_BILLING_ACCOUNT_ID = 'bar';
// this new folder is supposed to contain all prod relevant subprojects underneath.
export const prodFolder = new gcp.organizations.Folder('prod-folder', {
displayName: 'prod',
parent: `organizations/${GCP_ORG_ID}`
});
new gcp.organizations.Project(
'some-project-123',
{
name: 'some-project-123',
projectId: 'some-project-123',
billingAccount: GCP_BILLING_ACCOUNT_ID,
folderId: prodFolder.id
}
);
The error mentioned above will occur. I couldn't really find a good workaround. Even when I move the project underneath the folder manually via the UI, pulumi continues to show that error.
There are several properties in the GCP provider that must be globally unique but aren't `name`, and so don't enjoy our auto-naming suffixing. For instance, `gcp.projects.IAMCustomRole` has `roleId` and `gcp.serviceAccount.Account` has `accountId`. Although it's possible to use something like our `@pulumi/random` package to add some safe randomness to the names, it'd be nicer if it happened automatically.
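Auto-naming these fields has to respect their own character rules. For example, a service account `accountId` must (per the documented constraint) be 6-30 characters matching `[a-z]([-a-z0-9]*[a-z0-9])`. A sketch of suffixing under that constraint (the suffix is injected here so the example is deterministic; real auto-naming would use random hex):

```typescript
// Append a suffix to an accountId while keeping the result within the
// documented 6-30 char [a-z]([-a-z0-9]*[a-z0-9]) shape, truncating the
// base if necessary. `suffix` stands in for random hex.
function suffixAccountId(base: string, suffix: string): string {
  const maxLen = 30;
  const trimmed = base.toLowerCase().slice(0, maxLen - suffix.length - 1);
  const candidate = `${trimmed}-${suffix}`;
  if (!/^[a-z][-a-z0-9]{4,28}[a-z0-9]$/.test(candidate)) {
    throw new Error(`'${candidate}' is not a valid accountId`);
  }
  return candidate;
}
```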
export const fooNetwork = gcp.compute.Network.get('foo', 'foo', { project })
returns:
Outputs:
+ fooNetwork: {
+ id : "foo"
+ project: "acme"
+ urn : "urn:pulumi:development::acme-infrastructure::gcp:compute/network:Network::foo"
}
The project exists but the network does not. I would expect this `read` to fail.
The current pulumi GCP provider is based on terraform google provider 1.16.2, which is months old and misses lots of features (the terraform gcp provider is moving fast these days!); please pull in the latest terraform google provider (1.18.0+).
Related slack conversation https://pulumi-community.slack.com/archives/C84L4E3N1/p1538015271000100
This issue pertains to certain resource types, for example `gcp.projects.IAMMember`, and the order of creation / deletion of those resources. It leads to inconsistencies between what was defined in code and what ends up as actual configuration / resources in the cloud provider.
Example:
import * as gcp from '@pulumi/gcp';
new gcp.projects.IAMMember(`my-id`, {
member: `user:[email protected]`,
role: 'roles/bigquery.user',
project: 'my-project'
});
Run pulumi up. This will grant [email protected] the bigquery role. Then rename the resource:
import * as gcp from '@pulumi/gcp';
new gcp.projects.IAMMember(`my-id-changed`, {
member: `user:[email protected]`,
role: 'roles/bigquery.user',
project: 'my-project'
});
Run pulumi up again. Expected: the user [email protected] is granted the role roles/bigquery.user.
During this second update, pulumi first creates a new IAMMember with the resource id my-id-changed and afterwards deletes the resource my-id.
The end result is that the user does NOT have the role roles/bigquery.user, even though pulumi's state thinks that is the case.
This is probably a wider issue, either with the gcp provider or pulumi itself and IAMMember is just an example.
FYI, terraform replaces these resources in the correct order (i.e. it deletes the old one first and creates the new one after).
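When the replacement is triggered by a property change on the same resource (rather than a rename), one possible mitigation is to opt into delete-before-create ordering explicitly; whether this fully matches Terraform's behavior here is an assumption.

```typescript
import * as gcp from "@pulumi/gcp";

// deleteBeforeReplace asks the engine to delete the old binding before
// creating its replacement, so the final grant is not removed afterwards.
new gcp.projects.IAMMember("my-id", {
    member: "user:[email protected]",
    role: "roles/bigquery.user",
    project: "my-project",
}, { deleteBeforeReplace: true });
```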
We should add an equivalent of aws.serverless.Function for GCP, as a higher-level way to provision a serverless Function for GCP. This can build on top of examples like #12, and can support building higher-level libraries like gcp-serverless and GCP support for https://github.com/pulumi/pulumi-cloud.
GCP managed zones have a default description of "Managed by Terraform". We should override this to "Managed by Pulumi" as we do in AWS.
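Until the provider changes the default, the description can be set explicitly per zone; this sketch uses a placeholder zone name and domain.

```typescript
import * as gcp from "@pulumi/gcp";

// Explicitly override the inherited "Managed by Terraform" default.
const zone = new gcp.dns.ManagedZone("example-zone", {
    dnsName: "example.com.",           // placeholder domain
    description: "Managed by Pulumi",
});
```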
Occurrences:
Use Case:
Create a bucket in GCP to host a static website.
Details:
GCP requires that the bucket be named after the domain and that we create a CNAME record in the DNS pointing it to Google storage. I am able to manually create the bucket from the GCP Console with the domain name, say, www.myawesomesite.com. However, when I try to create the same bucket programmatically using Pulumi, I get the following error.
NOTE: I am the verified owner of the domain, the domain itself is also verified and I have also already added Pulumi's service account email as the domain owner in Webmaster tools - https://www.google.com/webmasters/tools/dashboard.
Code:
const pulumi = require("@pulumi/pulumi");
const gcp = require('@pulumi/gcp');
const mime = require('mime');
const fs = require('fs');
const path = require('path');
const siteDir = path.join(__dirname, 'www');
const siteBucket = new gcp.storage.Bucket('pulumi-demo-bucket', {
name: 'www.myawesomesite.com',
websites: [
{
mainPageSuffix: 'index.html',
notFoundPage: '404.html'
}
]
});
const defaultAcl = new gcp.storage.BucketACL('pulumi-demo-acl', {
    bucket: siteBucket.name, // BucketACL expects the bucket name, not the resource
    defaultAcl: 'publicRead'
});
// For each file in the directory, create an object stored in `siteBucket`
fs.readdirSync(siteDir)
.forEach(item => {
let filePath = path.join(siteDir, item);
let object = new gcp.storage.BucketObject(item, {
            bucket: siteBucket.name, // pass the bucket name, not the resource
source: filePath,
contentType: mime.getType(filePath) || undefined,
});
});
// Stack exports
exports.bucketName = siteBucket.name;
ERROR:
$ pulumi update
Previewing update of stack 'plume-demo'
Previewing changes:
* pulumi:pulumi:Stack pulumi-demo-plume-demo running
+ gcp:storage:Bucket pulumi-demo-bucket create
+ gcp:storage:BucketACL pulumi-demo-acl create
+ gcp:storage:BucketObject favicon.png create
+ gcp:storage:BucketObject index.html create
info: 4 changes previewed:
+ 4 resources to create
1 resource unchanged
Updating stack 'plume-demo'
Performing changes:
* pulumi:pulumi:Stack pulumi-demo-plume-demo running
+ gcp:storage:Bucket pulumi-demo-bucket creating
+ gcp:storage:Bucket pulumi-demo-bucket creating 1 error. error: Plan apply failed: creating urn:pulumi:plume-demo::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo-bucket: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden
+ gcp:storage:Bucket pulumi-demo-bucket **creating failed** 1 error. error: Plan apply failed: creating urn:pulumi:plume-demo::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo-bucket: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden
+ gcp:storage:Bucket pulumi-demo-bucket **creating failed** 2 errors. error: update failed
* pulumi:pulumi:Stack pulumi-demo-plume-demo done
Diagnostics:
gcp:storage:Bucket: pulumi-demo-bucket
error: Plan apply failed: creating urn:pulumi:plume-demo::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo-bucket: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden
error: update failed
info: no changes required:
1 resource unchanged
Test driving Pulumi on GCP, I used the following config:
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
const vm = new gcp.compute.Instance("pulumi-test", {
bootDisk: {
initializeParams: {
image: "cos-cloud/cos-stable",
},
},
deletionProtection: true,
machineType: "g1-small",
networkInterfaces: [{
accessConfigs: [{}],
network: "default",
}],
zone: "us-central1-a",
name: "pulumi-test",
});
Upon applying this config, I end up with a GCE VM that has automatic restarts turned off; specifically, the property scheduling.automaticRestart is false. The default for GCE is true, meaning that if the platform has to halt the VM for whatever reason, it will restart it. This feels like a landmine in Pulumi, where it deviates from the platform's default settings. I believe the default should be automaticRestart=true, to match the platform's recommended default. WDYT?
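In the meantime, the default can be overridden by passing the scheduling block explicitly; this is the config from above, trimmed, with one addition.

```typescript
import * as gcp from "@pulumi/gcp";

const vm = new gcp.compute.Instance("pulumi-test", {
    bootDisk: { initializeParams: { image: "cos-cloud/cos-stable" } },
    machineType: "g1-small",
    networkInterfaces: [{ accessConfigs: [{}], network: "default" }],
    zone: "us-central1-a",
    // Restore the GCE platform default explicitly.
    scheduling: { automaticRestart: true },
});
```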
In the current cluster object, we can use the cluster props certificateAuthority, name, and endpoint to fill in the blanks in the kubeconfig file: https://gist.github.com/b01eeecacccc3e284771463ed626af5e. We only do this right now in the object constructor, when we should actually have a kubeconfig() method on the object.
see community slack: https://pulumi-community.slack.com/archives/C84L4E3N1/p1550871454193500?thread_ts=1550862777.160400&cid=C84L4E3N1
related: pulumi/pulumi-aws#478
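A sketch of what the proposed kubeconfig() helper could render. The pure function below is an assumption about the YAML shape (modeled on the linked gist, using the gcloud auth-provider); in a Pulumi program it would be wrapped in pulumi.all over the cluster's outputs.

```typescript
// Pure renderer: given a cluster name, endpoint, and base64 CA cert,
// produce a kubeconfig document for that cluster.
function renderKubeconfig(name: string, endpoint: string, caCert: string): string {
    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${caCert}
    server: https://${endpoint}
  name: ${name}
contexts:
- context:
    cluster: ${name}
    user: ${name}
  name: ${name}
current-context: ${name}
kind: Config
users:
- name: ${name}
  user:
    auth-provider:
      name: gcp
`;
}

// In the proposed API this would hang off the Cluster object, roughly:
//   cluster.kubeconfig() ==
//     pulumi.all([cluster.name, cluster.endpoint,
//                 cluster.masterAuth.clusterCaCertificate])
//           .apply(([n, e, ca]) => renderKubeconfig(n, e, ca));
```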
After I run pulumi up and create:
1 network
1 subnetwork (with multiple cidrs)
1 GKE cluster
2 Node Pools
calling pulumi refresh followed by pulumi up recreates the cluster and node pools, and makes additional changes to the subnetwork.
config.ts:
import { Config } from "@pulumi/pulumi";
import * as pulumi from "@pulumi/pulumi";
const config = new Config();
export const name = pulumi.getStack();
export const maintenanceWindow = config.get("startTime") || "09:00";
export const gkeVersion = config.get("gkeVersion") || "1.12.6-gke.7";
export const mainCIDR = config.get("mainCIDR") || "10.0.0.0/16";
export const clusterCIDR = config.get("clusterCIDR") || "10.1.0.0/16";
export const serviceCIDR = config.get("serviceCIDR") || "10.2.0.0/16";
export const coreMin = config.getNumber("coreMin") || 2;
export const coreInitial = config.getNumber("coreInitial") || 2;
export const coreType = config.get("coreType") || "n1-standard-1";
export const coreMax = config.getNumber("coreMax") || 10;
export const coreDisk = config.getNumber("coreDisk") || 100;
export const siteMin = config.getNumber("siteMin") || 1;
export const siteInitial = config.getNumber("siteInitial") || 1;
export const siteType = config.get("siteType") || "n1-standard-1";
export const siteMax = config.getNumber("siteMax") || 10;
export const siteDisk = config.getNumber("siteDisk") || 100;
index.ts:
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import {
name,
maintenanceWindow,
gkeVersion,
coreInitial,
coreMax,
coreMin,
coreType,
coreDisk,
siteInitial,
siteMax,
siteMin,
siteType,
siteDisk,
mainCIDR,
serviceCIDR,
clusterCIDR
} from "./config";
export const vpc = new gcp.compute.Network(name, {
name: name,
autoCreateSubnetworks: false
});
const subnet = new gcp.compute.Subnetwork(name, {
name: name,
network: vpc.selfLink,
ipCidrRange: mainCIDR,
secondaryIpRanges: [
{
ipCidrRange: serviceCIDR,
rangeName: "services",
},
{
ipCidrRange: clusterCIDR,
rangeName: "pods",
}]
}, {
dependsOn: vpc,
parent: vpc
});
const k8sCluster = new gcp.container.Cluster("gke-cluster",
{
name: name,
network: vpc.selfLink,
subnetwork: subnet.selfLink,
minMasterVersion: gkeVersion,
nodeVersion: gkeVersion,
loggingService: "logging.googleapis.com/kubernetes",
monitoringService: "monitoring.googleapis.com/kubernetes",
addonsConfig: {
httpLoadBalancing: {
disabled: false
}
},
maintenancePolicy: {
dailyMaintenanceWindow: {
startTime: maintenanceWindow
}
},
ipAllocationPolicy: {
servicesSecondaryRangeName: "services",
clusterSecondaryRangeName: "pods"
},
networkPolicy: {
provider: "CALICO",
enabled: true
},
nodePools: [{name: "default-pool",}],
removeDefaultNodePool: true,
},
{
deleteBeforeReplace: true,
parent: subnet,
dependsOn: subnet
}
);
//Setup NodePools
export const corePool = new gcp.container.NodePool("corePool", {
name: "core",
cluster: k8sCluster.name,
initialNodeCount: coreInitial,
nodeConfig: {
machineType: coreType,
diskSizeGb: coreDisk,
diskType: "pd-standard",
oauthScopes: [
"https://www.googleapis.com/auth/cloud-platform"
],
imageType: "COS",
},
autoscaling: {
maxNodeCount: coreMax,
minNodeCount: coreMin
},
management: {
autoUpgrade: true,
autoRepair: true
},
version: gkeVersion
}, {
deleteBeforeReplace: true,
parent: k8sCluster,
dependsOn: k8sCluster
});
const sitePool = new gcp.container.NodePool("sitePool", {
name: "sites",
cluster: k8sCluster.name,
initialNodeCount: siteInitial,
nodeConfig: {
machineType: siteType,
diskSizeGb: siteDisk,
diskType: "pd-standard",
oauthScopes: [
"https://www.googleapis.com/auth/cloud-platform"
],
imageType: "COS",
},
autoscaling: {
minNodeCount: siteMin,
maxNodeCount: siteMax
},
management: {
autoRepair: true,
autoUpgrade: true
},
version: gkeVersion
},{
parent: k8sCluster,
deleteBeforeReplace: true,
});
Refresh on existing resources:
~ └─ gcp:container:Cluster gke-cluster update [diff: +nodeConfig~instanceGroupUrls,nodePools]
├─ gcp:container:NodePool corePool
├─ pulumi:providers:kubernetes gkeK8s
└─ gcp:container:NodePool sitePool
Update Run
+- └─ gcp:container:Cluster gke-cluster replace [diff: -additionalZones,clusterAutoscaling,clusterIpv4Cidr,defaultMaxPodsPerNode,masterAuth,nodeConfig,project,zone~addonsConfig,ipAllocationPolicy,maintenancePolicy,network,nodePools,subnetw
+- ├─ gcp:container:NodePool sitePool replace [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig]
+- ├─ gcp:container:NodePool corePool replace [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig]
~ └─ pulumi:providers:kubernetes gkeK8s update [
diff: ~kubeconfig]
Refresh after "Console Upgrade of cluster"
~ └─ gcp:container:Cluster gke-cluster update [diff: +nodeConfig~instanceGroupUrls,masterVersion,nodePools,nodeVersion]
├─ pulumi:providers:kubernetes gkeK8s
~ ├─ gcp:container:NodePool sitePool update [diff: ~nodeCount,version]
~ └─ gcp:container:NodePool corePool update [diff: ~version]
Update Plan:
pulumi:pulumi:Stack cluster-repro
└─ gcp:compute:Network repro [diff: -description,ipv4Range,project,routingMode]
└─ gcp:compute:Subnetwork repro [diff: -description,enableFlowLogs,privateIpGoogleAccess,project,region]
+- └─ gcp:container:Cluster gke-cluster replace [diff: -additionalZones,clusterAutoscaling,clusterIpv4Cidr,defaultMaxPodsPerNode,masterAuth,nodeConfig,project,zone~addonsConfig,ipAllocationPolicy,maintenancePolicy,network,nodePools,nodeVer
+- ├─ gcp:container:NodePool corePool replace [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig,version]
+- ├─ gcp:container:NodePool sitePool replace [diff: -maxPodsPerNode,namePrefix,nodeCount,project,zone~nodeConfig,version]
~ └─ pulumi:providers:kubernetes gkeK8s update [diff: ~kubeconfig]
From a user on the Pulumi Community Slack:
Yesterday, I recreated a K8s cluster to use some new features from the latest version of pulumi/gcp. After recreating the cluster, I started to see the following error message regularly:
kubernetes:core:ConfigMap (api-config-map):
warning: The provider for this resource has inputs that are not known during preview.
This preview may not correctly represent the changes that will be applied during an update.
The error is inconsistent, and when it happens, a new cluster is created and the previous one is marked to be deleted, which shouldn't be possible because it has the protect flag. The k8s resources from the previous cluster are moved to the new cluster instantly, but they're not actually created in the new cluster. Also, to make any updates to the stack, the previous cluster needs to be deleted.
After downgrading this project back to version 17.1 of the pulumi/gcp package, I had literally the same issue. Another cluster that is running this same pulumi/gcp version is working properly. Both clusters are using the same Pulumi version (17.2). What it looks like is that Pulumi is having issues getting information about the cluster, so it assumes the cluster has a specific configuration, ignoring the fact that it's protected and triggering unnecessary changes. I don't know if that's what's happening there.
I tried to create this same cluster about 10 times and had issues every time. Ah… they do have something in common: all the new clusters that are having this issue are using one of the latest Kubernetes versions available in GCP (1.12.5-gke.5).
Users of the latest release of this provider have noted that they see errors when creating gcp.container.Cluster resources:
Invalid address to set: []string{"cluster_autoscaling"}
This will happen even after downgrading back to the 0.16.3 version of the NPM package, because the bug affects the provider binary itself, which always loads the latest version. Running pulumi plugin rm resource gcp 0.16.4 can likely fix this in the short term.
Modifying the ipConfiguration setting for an existing DatabaseInstance triggers a replace which obviously results in data loss:
const instance = new gcp.sql.DatabaseInstance("master", {
databaseVersion: "POSTGRES_9_6",
settings: {
tier: "db-f1-micro",
ipConfiguration: {
authorizedNetworks: [
{
name: "VPN",
value: "1.2.3.4",
},
],
},
},
});
I would expect this to simply update the existing resource's authorised networks.
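A defensive sketch until in-place updates are supported: mark the instance as protected, so a plan that would replace (and therefore destroy) it fails instead of applying.

```typescript
import * as gcp from "@pulumi/gcp";

const instance = new gcp.sql.DatabaseInstance("master", {
    databaseVersion: "POSTGRES_9_6",
    settings: {
        tier: "db-f1-micro",
        ipConfiguration: {
            authorizedNetworks: [{ name: "VPN", value: "1.2.3.4" }],
        },
    },
}, { protect: true }); // a replacement now errors out instead of deleting data
```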
I need to get the available zones in a region, provided that the compute service has already been enabled in the project.
const services = new gcp.projects.Services('services', {
services: ['compute.googleapis.com'],
});
const available = pulumi.output(gcp.compute.getZones({}));
How can I make sure that getZones runs once the service has been enabled? I don't see a way of handling the dependency.
Thx
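One workaround sketch: derive the zone lookup from an output of the Services resource, using services.id purely for sequencing, so the call only runs after the API has been enabled.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const services = new gcp.projects.Services("services", {
    services: ["compute.googleapis.com"],
});

// getZones only executes inside apply(), i.e. after services.id resolves,
// which happens once the Services resource has been created.
export const zoneNames = services.id.apply(() =>
    gcp.compute.getZones({}).then(z => z.names));
```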
From the community slack:
In prototyping I am tearing down app and infrastructure (cluster, db) stacks, but leaving up the identity stack (gcp). It seems roles are disappearing from BOTH pulumi related gcp service accounts, and NON-pulumi related service accounts. We have not seen this issue prior to pulumi and so I am correlating it with my activity in the same gcp project.
https://gist.github.com/rosskevin/00f05766829a9b45888c508949399f0a
One [role assignment] was pulumi, one was:
gcloud iam service-accounts create ${BUILD_SA} \
--project=${GOOGLE_CLOUD_PROJECT} \
--display-name ${BUILD_SA}
gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
--role roles/storage.admin \
--member serviceAccount:${BUILD_SA_EMAIL}
@lukehoban I am new to Pulumi and I have been trying to play around with it, but nothing seems to work. What am I doing wrong here? I just wanted to create a bucket, but I keep getting the error project: required field is not set. I even tried setting gcp.config.project=PROJECT_NAME, but even that didn't work.
How can I debug issues like these?
const gcp = require('@pulumi/gcp');
const bucket = new gcp.storage.Bucket('pulumi-demo');
// Stack exports
exports.bucketName = bucket.bucket;
Diagnostics:
gcp:storage:Bucket: pulumi-demo
error: Plan apply failed: creating urn:pulumi:pulumi-demo-dev::pulumi-demo::gcp:storage/bucket:Bucket::pulumi-demo: project: required field is not set
error: update failed
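The provider needs a project before it can create the bucket. The usual fix is to set it on the stack config, but it can also be passed per resource; "my-project" below is a placeholder.

```typescript
import * as gcp from "@pulumi/gcp";

// Preferred: `pulumi config set gcp:project my-project` on the stack.
// Alternatively, set the project explicitly on the resource:
const bucket = new gcp.storage.Bucket("pulumi-demo", {
    project: "my-project",
});

// Stack exports
export const bucketName = bucket.bucket;
```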