Comments (20)

liafizan commented on August 16, 2024

I have the same issue now: trying to deploy a GKE cluster, I keep getting this error.

Error: module.gke.google_container_node_pool.pools: "node_config.0.taint": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

The taint option does not seem to be available now.

I removed the taint option from the local module and it worked fine. I do not see this option in the documentation for node-pool. However, the upstream API docs still reference taints.

Also, the error suggests using the beta provider, and I tried that, but from my understanding this option has been deprecated.

tommyknows commented on August 16, 2024

Same for me with a minimal config:

provider "google-beta" {
  project = "${var.project_id}"
  region  = "${var.region}"
}


module "gke" {
  source               = "terraform-google-modules/kubernetes-engine/google"
  project_id           = "${var.project_id}"
  name                 = "${var.cluster_name}"
  region               = "${var.region}"
  zones                = ["${var.cluster_zone}"]
  network              = "${var.network_name}"
  subnetwork           = "${var.subnetwork_name}"
  ip_range_pods        = "${var.ip_range_pods}"
  ip_range_services    = "${var.ip_range_services}"
  service_account      = "[SERVICE ACCOUNT NAME]"
  kubernetes_dashboard = true
}

terraform validate gives me:

Error: module.gke.google_container_node_pool.pools: "node_config.0.taint": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

ogreface commented on August 16, 2024

Ok, so the documentation states that you need version 1.8 of the provider, and that is the supported configuration.

Software Dependencies

Kubectl
  - kubectl 1.9.x

Terraform and Plugins
  - Terraform 0.11.x
  - terraform-provider-google v1.8.0

I can confirm that the examples work with the provider pinned to 1.8.
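
For reference, pinning looks like this in Terraform 0.11 (a sketch; the project and region arguments are assumptions carried over from the configs in this thread):

provider "google" {
  version = "~> 1.8.0"
  project = "${var.project_id}"
  region  = "${var.region}"
}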

Using the 2.0 version of the provider is unsupported. If you are using that, you do need to be on the beta version of the Google provider. This configuration seems to work for me.

provider "google-beta" {
  version = "~> 2.0.0"
  project = "${var.project_id}"
  region  = "${var.region}"
}

module "gke" {
  providers = {
    google = "google-beta"
  }
  source                     = "terraform-google-modules/kubernetes-engine/google"
  project_id                 = "${var.project_id}"
  name                       = "issue27-test-cluster"
  region                     = "us-east4"
  zones                      = ["us-east4-a"]
  network                    = "${var.network}"
  subnetwork                 = "${var.subnetwork}"
  ip_range_pods              = "${var.ip_range_pods}"
  ip_range_services          = "${var.ip_range_services}"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
}

Note that the only difference is you have to explicitly pass the beta provider into the module, so that it inherits correctly.

ogreface commented on August 16, 2024

@wadadli That TF pretty much works for me in terms of validation. Are you specifying the provider that's being passed in?

provider "google-beta" {
  version = "2.0.0"
  project = "${var.project_id}"
  region  = "${var.region}"
}

module "gke" {
  providers = {
    google = "google-beta"
  }

  source                     = "../../"
  name                       = "vault-foobar"
  project_id                 = "${var.project_id}"
  network                    = "management"
  subnetwork                 = "mgmt-private-01"
  region                     = "${var.region}"
  zones                      = "${var.zones}"
  ip_range_pods              = "kubernetes-pods"
  ip_range_services          = "kubernetes-services"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
  kubernetes_version         = "1.11.2-gke.18"

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 10
      disk_size_gb = 100
      disk_type    = "pd-standard"
      image_type   = "COS"
      auto_repair  = true
      auto_upgrade = true
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}
data "google_client_config" "default" {}

ogreface commented on August 16, 2024

Hi @AdrienWalkowiak,

Is this still an issue for you? I've been unable to duplicate the issue. I'm able to correctly plan and apply using just about the same terraform you pasted. I've put what I'm using below.

If this is still an issue you're running into, could you send me your whole project?

Best,

Rishi

My main.tf

module "gke" {
  source = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"

  project_id                 = "${var.project_id}"
  name                       = "deploy-service-cluster"
  region                     = "${var.region}"
  network                    = "${var.network}"
  subnetwork                 = "${var.subnetwork}"
  ip_range_pods              = "${var.ip_range_pods}"
  ip_range_services          = "${var.ip_range_services}"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
  kubernetes_version         = "1.11.2-gke.18"

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 10
      disk_size_gb = 100
      disk_type    = "pd-standard"
      image_type   = "COS"
      auto_repair  = true
      auto_upgrade = true
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}

My vars

/**
 * Copyright 2018 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "project_id" {
  description = "The project ID to host the cluster in (required)"
}

variable "name" {
  description = "The name of the cluster (required)"
}

variable "description" {
  description = "The description of the cluster"
  default     = ""
}

variable "regional" {
  description = "Whether is a regional cluster (zonal cluster if set false. WARNING: changing this after cluster creation is destructive!)"
  default     = true
}

variable "region" {
  description = "The region to host the cluster in (required)"
}

variable "zones" {
  type        = "list"
  description = "The zones to host the cluster in (optional if regional cluster / required if zonal)"
  default     = []
}

variable "network" {
  description = "The VPC network to host the cluster in (required)"
}

variable "network_project_id" {
  description = "The project ID of the shared VPC's host (for shared vpc support)"
  default     = ""
}

variable "subnetwork" {
  description = "The subnetwork to host the cluster in (required)"
}

variable "kubernetes_version" {
  description = "The Kubernetes version of the masters. If set to 'latest' it will pull latest available version in the selected region."
  default     = "1.10.6"
}

variable "node_version" {
  description = "The Kubernetes version of the node pools. Defaults kubernetes_version (master) variable and can be overridden for individual node pools by setting the version key on them. Must be empyty or set the same as master at cluster creation."
  default     = ""
}

variable "master_authorized_networks_config" {
  type = "list"

  description = <<EOF
  The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists)

  ### example format ###
  master_authorized_networks_config = [{
    cidr_blocks = [{
      cidr_block   = "10.0.0.0/8"
      display_name = "example_network"
    }],
  }]

  EOF

  default = []
}

variable "horizontal_pod_autoscaling" {
  description = "Enable horizontal pod autoscaling addon"
  default     = false
}

variable "http_load_balancing" {
  description = "Enable httpload balancer addon"
  default     = true
}

variable "kubernetes_dashboard" {
  description = "Enable kubernetes dashboard addon"
  default     = false
}

variable "network_policy" {
  description = "Enable network policy addon"
  default     = false
}

variable "maintenance_start_time" {
  description = "Time window specified for daily maintenance operations in RFC3339 format"
  default     = "05:00"
}

variable "ip_range_pods" {
  description = "The secondary ip range to use for pods"
}

variable "ip_range_services" {
  description = "The secondary ip range to use for pods"
}

variable "node_pools" {
  type        = "list"
  description = "List of maps containing node pools"

  default = [
    {
      name = "default-node-pool"
    },
  ]
}

variable "node_pools_labels" {
  type        = "map"
  description = "Map of maps containing node labels by node-pool name"

  default = {
    all               = {}
    default-node-pool = {}
  }
}

variable "node_pools_taints" {
  type        = "map"
  description = "Map of lists containing node taints by node-pool name"

  default = {
    all               = []
    default-node-pool = []
  }
}

variable "node_pools_tags" {
  type        = "map"
  description = "Map of lists containing node network tags by node-pool name"

  default = {
    all               = []
    default-node-pool = []
  }
}

variable "stub_domains" {
  type        = "map"
  description = "Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server"
  default     = {}
}

variable "non_masquerade_cidrs" {
  type        = "list"
  description = "List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading."
  default     = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}

variable "ip_masq_resync_interval" {
  description = "The interval at which the agent attempts to sync its ConfigMap file from the disk."
  default     = "60s"
}

variable "ip_masq_link_local" {
  description = "Whether to masquerade traffic to the link-local prefix (169.254.0.0/16)."
  default     = "false"
}

variable "logging_service" {
  description = "The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none"
  default     = "logging.googleapis.com"
}

variable "monitoring_service" {
  description = "The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta) and none"
  default     = "monitoring.googleapis.com"
}

And my outputs

/**
 * Copyright 2018 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
output "name_example" {
  description = "Cluster name"
  value       = "${module.gke.name}"
}

output "endpoint_example" {
  sensitive   = true
  description = "Cluster endpoint"
  value       = "${module.gke.endpoint}"
}

output "location_example" {
  description = "Cluster location"
  value       = "${module.gke.location}"
}

output "zones_example" {
  description = "List of zones in which the cluster resides"
  value       = "${module.gke.zones}"
}

output "node_pools_names_example" {
  value = "${module.gke.node_pools_names}"
}

output "node_pools_versions_example" {
  value = "${module.gke.node_pools_versions}"
}

AdrienWalkowiak commented on August 16, 2024

Thank you for checking. I tried using your code and it seems to go past the error, so I will close this issue and figure out what's wrong on my end; probably a syntax issue.

Thanks

zbutt-muvaki commented on August 16, 2024

This is definitely an issue.

I get the following error:

Warning: module.kubernetes-engine.google_container_node_pool.pools: "node_config.0.taint": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.kubernetes-engine.google_container_node_pool.zonal_pools: "node_config.0.taint": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Error: module.kubernetes-engine.google_container_node_pool.pools: node_config.0.taint: should be a list

Error: module.kubernetes-engine.google_container_node_pool.zonal_pools: node_config.0.taint: should be a list

My config is pretty straightforward:

module "kubernetes-engine" {
    source   = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"

    project_id                 = "${var.project-id}"
    name                       = "${var.environment}-gke"

    region            = "us-central1"
    network           = "${module.vpc.name}"
    subnetwork        = "us-central1"
    service_account   = "${module.gke-sa.email}"

    kubernetes_version = "1.11.6-gke.2"

    http_load_balancing        = true
    horizontal_pod_autoscaling = true

    # enables vpc-native
    ip_range_pods       = "${var.environment}-gke-pod-range"
    ip_range_services   = "${var.environment}-gke-services-range"

    remove_default_node_pool = "true"

    node_pools = [
        {
            name            = "standard"
            machine_type    = "n1-standard-4"
            min_count       = 1
            max_count       = 10
            disk_size_gb    = 100
            disk_type       = "pd-standard"
            image_type      = "COS"
            auto_repair     = true
            auto_upgrade    = true
            service_account = "${module.gke-sa.email}"
            preemptible     = false
        },
    ]

    node_pools_labels = {
        all = {
            type = "gke"
            provisioner = "terraform"
        }

        standard = {
            node_cluster = "standard"
        }
    }

    node_pools_taints = {
        all = []
        standard = []
    }

    # control firewall for nodes via this tag
    node_pools_tags = {
        all = [
            "gke-cluster"
        ]
    }
}

I am thinking this should fix it; it needs to be applied in both regional.tf and zonal.tf:

taint        = ["${concat(var.node_pools_taints["all"], var.node_pools_taints[lookup(var.node_pools[count.index], "name")])}"]
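
For orientation, that assignment would sit inside the node_config block of the google_container_node_pool resources, roughly as sketched below; the surrounding block is an assumption reconstructed from the node_config.0.taint error paths, and wrapping the interpolation in ["${...}"] flattens the concat result into a plain list in Terraform 0.11, which is what the "should be a list" error demands:

  node_config {
    # merge the "all" taints with the pool-specific taints into one list
    taint = ["${concat(var.node_pools_taints["all"], var.node_pools_taints[lookup(var.node_pools[count.index], "name")])}"]
  }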

deenski commented on August 16, 2024

Same here:

module "kubernetes-engine" {
  source                     = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"
  project_id                 = "${var.project_id}"
  name                       = "${var.cluster_name}"
  region                     = "${var.region}"
  zones                      = "${var.zones}"
  network                    = "${var.network}"
  subnetwork                 = "${var.subnetwork}"
  ip_range_pods              = "${var.ip_range_pods}"
  ip_range_services          = "${var.ip_range_services}"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true

  node_pools = [
    {
      name            = "default-node-pool"
      machine_type    = "${var.default_node_pool_instance_type}"
      min_count       = "${var.default_node_pool_min_count}"
      max_count       = "${var.default_node_pool_max_count}"
      disk_size_gb    = 20
      disk_type       = "pd-ssd"
      image_type      = "COS"
      auto_repair     = true
      auto_upgrade    = true
      service_account = "${var.default_node_pool_service_account}"
      preemptible     = false
    },
  ]
  node_pools_tags = "${var.node_pool_tags}"
}

plan output:

Error: module.kubernetes-engine.google_container_node_pool.pools: "node_config.0.taint": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

ogreface commented on August 16, 2024

@deenski @tommyknows @faizan82 Can any of you let me know what version of the provider you're using? I can confirm the issue on 2.0.

deenski commented on August 16, 2024

Can confirm, I was on the 2.0 version. The configuration @ogreface provided also works for me.

Edit: sorry for the delay.

wadadli commented on August 16, 2024

@ogreface -- we're giving this a shot right now. We even tried to explicitly pin to 2.0.0 of the beta provider and no dice. Seems this is now completely borked?

ogreface commented on August 16, 2024

@wadadli Could you paste your code? Happy to take a look, but the example above still seems to work for me.

wadadli commented on August 16, 2024

Here's the tf that results in the following error:

Error: module.vault_kubernetes.google_container_node_pool.pools: node_config.0.tags: should be a list

module "vault_kubernetes" {
  providers = {
    google = "google-beta"
  }

  source                     = "terraform-google-modules/kubernetes-engine/google"
  name                       = "vault-${random_id.id.hex}"
  project_id                 = "${module.management_project.project_id}"
  network                    = "management"
  subnetwork                 = "mgmt-private-01"
  region                     = "us-east4"
  zones                      = ["us-west4-a", "us-west4-b", "us-west4-c"]
  ip_range_pods              = "kubernetes-pods"
  ip_range_services          = "kubernetes-services"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
  kubernetes_version         = "1.11.2-gke.18"

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 10
      disk_size_gb = 100
      disk_type    = "pd-standard"
      image_type   = "COS"
      auto_repair  = true
      auto_upgrade = true
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}

We have tried adding the

provider "google-beta" {}

to both _init.tf and within the same tf file as the resource above.

g0blin79 commented on August 16, 2024

I still have this issue.

I'm using the master code of this repo, downloaded a few minutes ago.
This is my terraform cluster definition using this module:

module "kubernetes-cluster" {
  providers = {
    google = "google-beta"
  }

  source                            = "github.com/terraform-google-modules/terraform-google-kubernetes-engine?ref=master"
  project_id                        = "${var.project_id}"
  name                              = "${var.cluster_name}"
  regional                          = false
  region                            = "${var.region}"
  zones                             = ["${var.zone}"]
  network                           = "${var.network_name}"
  subnetwork                        = "${var.network_name}-subnet-01"
  ip_range_pods                     = "${var.network_name}-pod-secondary-range"
  ip_range_services                 = "${var.network_name}-services-secondary-range"
  kubernetes_version                = "${var.kubernetes_version}"
  node_version                      = "${var.kubernetes_version}"
  remove_default_node_pool          = true
  disable_legacy_metadata_endpoints = "false"
  service_account                   = "create"

  node_pools = [
    {
      name            = "forge-pool"
      machine_type    = "n1-standard-2"
      min_count       = 1
      max_count       = 3
      disk_size_gb    = 100
      disk_type       = "pd-standard"
      image_type      = "COS"
      auto_repair     = true
      auto_upgrade    = false
      service_account = "${module.kubernetes-cluster.service_account}"
    },
  ]

  node_pools_metadata = {
    all = {}

    forge-pool = {}
  }

  node_pools_labels = {
    all = {}

    forge-pool = {
      forge-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    forge-pool = [
      {
        key    = "forge-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    forge-pool = [
      "forge-pool",
    ]
  }
}

This is my provider

provider "google" {
  credentials = "${file("/root/.config/gcloud/application_default_credentials.json")}"
  project = "${var.project_id}"
  region = "${var.region}"
  zone = "${var.zone}"
  version = "~> 2.2"
}

provider "google-beta" {
  credentials = "${file("/root/.config/gcloud/application_default_credentials.json")}"
  project = "${var.project_id}"
  region = "${var.region}"
  zone = "${var.zone}"
  version = "~> 2.2"
}

I am still getting these errors:

Error: module.kubernetes-cluster.google_container_node_pool.pools: node_config.0.taint: should be a list

Error: module.kubernetes-cluster.google_container_node_pool.zonal_pools: node_config.0.taint: should be a list

Is there some mistake on my end?

thiagonache commented on August 16, 2024

Folks, I think I've found the issue. You must specifically use a variable: if you use a local or a module output, it fails.

node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      min_count          = 0
      max_count          = 1
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "${local.default_service_account}"
      preemptible        = false
      initial_node_count = 0
    },
  ]

Does not work

node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      min_count          = 0
      max_count          = 1
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "${var.default_service_account}"
      preemptible        = false
      initial_node_count = 0
    },
  ]

works
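
For reference, the working case assumes a plain input variable; a minimal declaration might look like this (the name comes from the snippet above, the description is an assumption):

variable "default_service_account" {
  description = "Service account email to attach to the node pools"
}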

aaron-lane commented on August 16, 2024

Hi all. I apologize for the persistence of this issue. A workaround in addition to the one shared by @thiagonache is to allow the module to create a dedicated service account for the cluster:

module "kubernetes_engine" {
  # ...
  service_account = "create"
}

kopachevsky commented on August 16, 2024

Not reproducible.
Tested on TF 0.12 with the google/google-beta provider v2.9.0.
For the test I created two node pools, one with the service account name from another module's output, the second from locals:

locals {
  cluster_type = "node-pool"
  mysa         = "[email protected]"
}

provider "google" {
  version = "~> 2.9.0"
  region  = var.region
}

provider "google-beta" {
  version = "~> 2.9.0"
  region  = var.region
}

module "sa" {
    source = "./sa"
}

module "gke" {
  source                            = "../terraform-google-kubernetes-engine"
  project_id                        = var.project_id
  name                              = "${local.cluster_type}-cluster${var.cluster_name_suffix}"
  regional                          = false
  region                            = var.region
  zones                             = var.zones
  network                           = var.network
  subnetwork                        = var.subnetwork
  ip_range_pods                     = var.ip_range_pods
  ip_range_services                 = var.ip_range_services
  remove_default_node_pool          = true
  disable_legacy_metadata_endpoints = false

  node_pools = [
    {
      name            = "pool-01"
      min_count       = 1
      max_count       = 2
      service_account = module.sa.name
      auto_upgrade    = false
    },
    {
      name            = "pool-02"
      min_count       = 1
      max_count       = 2
      service_account = local.mysa
      auto_upgrade    = false
    },
  ]
}

Module file sa.tf:

locals {
  prefix = "xxxxxx"
  suffix = "xxxxxx.iam.gserviceaccount.com"
}

output "name" {
  value = "${local.prefix}@${local.suffix}"
}

morgante commented on August 16, 2024

@kopachevsky Please attempt to reproduce when you include the SA in the same config as your module invocation.

i.e.

resource "google_service_account" "gke" {
  account_id   = "object-viewer"
  display_name = "Object viewer"
}

module "gke" {
  source                            = "../terraform-google-kubernetes-engine"
  project_id                        = var.project_id
  name                              = "${local.cluster_type}-cluster${var.cluster_name_suffix}"
  regional                          = false
  region                            = var.region
  zones                             = var.zones
  network                           = var.network
  subnetwork                        = var.subnetwork
  ip_range_pods                     = var.ip_range_pods
  ip_range_services                 = var.ip_range_services
  remove_default_node_pool          = true
  disable_legacy_metadata_endpoints = false

  node_pools = [
    {
      name            = "pool-01"
      min_count       = 1
      max_count       = 2
      service_account = module.sa.name
      auto_upgrade    = false
    },
    {
      name            = "pool-02"
      min_count       = 1
      max_count       = 2
      service_account = google_service_account.gke.email
      auto_upgrade    = false
    },
  ]
}

kopachevsky commented on August 16, 2024

@morgante this scenario works fine; I tested it several times. See the gist that works for me: https://gist.github.com/kopachevsky/6152449ac8e2a177e0759564915ed84f

So a dynamic service account definition in the node_pools parameter works:

  node_pools = [
    {
      name            = "pool-01"
      min_count       = 1
      max_count       = 1 
      service_account = google_service_account.sa.email
      auto_upgrade    = false
      auto_repair     = false
    },
  ]

But if I set the service account for the default pool via the top-level service_account parameter:

module "gke" {
  source                       = "../terraform-google-kubernetes-engine"
  project_id                   = "gl-akopachevskyy-gke"
  initial_node_count           = 1
  service_account              =  google_service_account.gke.email

I'm getting the following error:

Error: Invalid count argument

  on .terraform/modules/gke/sa.tf line 37, in resource "google_service_account" "cluster_service_account":
  37:   count        = var.service_account == "create" ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
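
Following the error's own suggestion, one interim workaround is to create the service account with a targeted apply first, then run a normal apply (standard Terraform CLI usage, not specific to this module):

terraform apply -target=google_service_account.gke
terraform apply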

A possible solution is to add a new boolean parameter, create_service_account (true by default), and use it like this:

module "gke" {
  source                   =  "../terraform-google-kubernetes-engine"
  project_id               =  "gl-akopachevskyy-gke"
  initial_node_count       =  1
  create_service_account   =  false
  service_account           =  google_service_account.gke.email
  //..other props
}
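
Under that proposal, the count guard in sa.tf would key off the new boolean flag instead of the "create" sentinel, so it is known at plan time. A sketch of the idea, not the merged implementation:

resource "google_service_account" "cluster_service_account" {
  # a plain bool input is known at plan time, so Terraform can compute count before apply
  count = var.create_service_account ? 1 : 0

  # ...existing service account configuration...
}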

What do you think?

morgante commented on August 16, 2024

@kopachevsky That sounds good to me.
