Comments (10)
I do have the same problem. I think 8be6a89 introduced a regression.
This is the relevant part of my Terraform plan:
-/+ module.gke.google_container_node_pool.pools (new resource required)
      id:                                      "europe-west4/sproto/default-node-pool" => <computed> (forces new resource)
      autoscaling.#:                           "1" => "1"
      autoscaling.0.max_node_count:            "100" => "100"
      autoscaling.0.min_node_count:            "1" => "1"
      cluster:                                 "sproto" => "sproto"
      initial_node_count:                      "1" => "1"
      instance_group_urls.#:                   "3" => <computed>
      location:                                "europe-west4" => <computed>
      management.#:                            "1" => "1"
      management.0.auto_repair:                "true" => "true"
      management.0.auto_upgrade:               "true" => "true"
      max_pods_per_node:                       "110" => <computed>
      name:                                    "default-node-pool" => "default-node-pool"
      name_prefix:                             "" => <computed>
      node_config.#:                           "1" => "1"
      node_config.0.disk_size_gb:              "100" => "100"
      node_config.0.disk_type:                 "pd-standard" => "pd-standard"
      node_config.0.guest_accelerator.#:       "0" => "1" (forces new resource)
      node_config.0.guest_accelerator.0.count: "" => "0" (forces new resource)
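For context, the two `(forces new resource)` lines at the bottom mean Terraform sees the empty `guest_accelerator` list being replaced by a one-element block with `count = 0`, and a change to that block forces pool recreation. A minimal sketch of the affected resource, with names and values taken from the plan above (illustrative only, not the module's actual source):

```hcl
resource "google_container_node_pool" "pools" {
  name     = "default-node-pool"
  location = "europe-west4"
  cluster  = "sproto"

  node_config {
    disk_size_gb = 100
    disk_type    = "pd-standard"

    # No accelerator is configured here, yet the plan shows a
    # guest_accelerator block appearing with count = 0. A zero-count
    # block still differs from an absent one in the diff, so Terraform
    # treats it as a change that forces the node pool to be recreated.
  }
}
```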
from terraform-google-kubernetes-engine.
This looks like it's caused by #114. Could you try the latest master
version and confirm if you're still seeing the issue?
Tried the master version, but I hit another error (see #27 (comment)). Waiting.
I can confirm that with tag v2.1.0, where that commit is not present, I can't reproduce the issue.
@alexkonkin please check the above comments. It looks like #157 introduced a regression.
Thank you.
I'm guessing this is an upstream provider issue, I have opened a provider bug: hashicorp/terraform-provider-google#3786
Resolved for me with these providers:

provider "google" {
  credentials = "${file("/path/to/credentials.json")}"
  project     = "${var.project_id}"
  region      = "${var.region}"
  zone        = "${var.zone}"
  version     = "~> 2.7"
}

provider "google-beta" {
  credentials = "${file("/path/to/credentials.json")}"
  project     = "${var.project_id}"
  region      = "${var.region}"
  zone        = "${var.zone}"
  version     = "~> 2.7"
}
with version 2.1.0 of this module.
@g0blin79 Can you confirm that master is currently working for you as well?
Yes, it does. I created a zonal cluster two weeks ago with those provider versions and version 2.1.0 of this module, and it is working.
Excellent, thank you.