
terraform-google-lb-internal's Introduction

Internal Load Balancer Terraform Module

Modular Internal Load Balancer for GCE using forwarding rules.

Load Balancer Types

Compatibility

This module is meant for use with, and is tested against, Terraform 1.3+. If you find incompatibilities using Terraform >= 1.3, please open an issue.

Upgrading

The following guides are available to assist with upgrades:

Usage

module "gce-ilb" {
  source            = "GoogleCloudPlatform/lb-internal/google"
  version           = "~> 6.0"
  region            = var.region
  name              = "group2-ilb"
  ports             = ["80"]
  source_tags       = ["allow-group1"]
  target_tags       = ["allow-group2", "allow-group3"]

  health_check = {
    type                = "http"
    check_interval_sec  = 1
    healthy_threshold   = 4
    timeout_sec         = 1
    unhealthy_threshold = 5
    response            = ""
    proxy_header        = "NONE"
    port                = 80
    port_name           = "health-check-port"
    request             = ""
    request_path        = "/"
    host                = "1.2.3.4"
    enable_log          = false
  }

  backends = [
    {
      group       = module.mig2.instance_group
      description = ""
      failover    = false
    },
    {
      group       = module.mig3.instance_group
      description = ""
      failover    = false
    },
  ]
}

Inputs

Name Description Type Default Required
all_ports Boolean for the all_ports setting on the forwarding rule. ports and all_ports are mutually exclusive. bool null no
backends List of backends; each backend is a map of key-value pairs and must have the 'group' key. list(any) n/a yes
connection_draining_timeout_sec Time (in seconds) for which the instance will be drained. number null no
create_backend_firewall Controls whether firewall rules for the backends will be created. Health-check firewall rules are controlled separately. bool true no
create_health_check_firewall Controls whether firewall rules for the health check will be created. If this rule is not present, the backend health check will fail. bool true no
firewall_enable_logging Controls whether the firewall rules that are created have logging configured. This is ignored for firewall rules that are not created. bool false no
global_access Allow access from all regions on the same VPC network. bool false no
health_check Health check to determine whether instances are responsive and able to do work
  object({
    type                = string
    check_interval_sec  = optional(number)
    healthy_threshold   = optional(number)
    timeout_sec         = optional(number)
    unhealthy_threshold = optional(number)
    response            = optional(string)
    proxy_header        = optional(string)
    port                = optional(number)
    port_name           = optional(string)
    request             = optional(string)
    request_path        = optional(string)
    host                = optional(string)
    enable_log          = optional(bool)
  })
  n/a yes
ip_address IP address of the internal load balancer, if empty one will be assigned. Default is empty. string null no
ip_protocol The IP protocol for the backend and frontend forwarding rule. TCP or UDP. string "TCP" no
is_mirroring_collector Indicates whether or not this load balancer can be used as a collector for packet mirroring. This can only be set to true for load balancers that have their loadBalancingScheme set to INTERNAL. bool false no
labels The labels to attach to resources created by this module. map(string) {} no
name Name for the forwarding rule and prefix for supporting resources. string n/a yes
network Name of the network to create resources in. string "default" no
network_project Name of the project for the network. Useful for shared VPC. Default is var.project. string "" no
ports List of ports to forward to backend services. Max is 5. ports and all_ports are mutually exclusive. list(string) null no
project The project to deploy to; if not set, the default provider project is used. string "" no
region Region for cloud resources. string "us-central1" no
service_label Service label used to create the internal DNS name. string null no
session_affinity The session affinity for the backends, e.g. NONE or CLIENT_IP. Default is NONE. string "NONE" no
source_ip_ranges List of source IP ranges for traffic to the internal load balancer. list(string) null no
source_service_accounts List of source service accounts for traffic to the internal load balancer. list(string) null no
source_tags List of source tags for traffic to the internal load balancer. list(string) n/a yes
subnetwork Name of the subnetwork to create resources in. string "default" no
target_service_accounts List of target service accounts identifying the backend instances that receive traffic. list(string) null no
target_tags List of target tags identifying the backend instances that receive traffic. list(string) n/a yes

Outputs

Name Description
forwarding_rule The forwarding rule self_link.
forwarding_rule_id The forwarding rule id.
ip_address The internal IP assigned to the regional forwarding rule.

Resources created


terraform-google-lb-internal's Issues

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

regex
Makefile
  • cft/developer-tools 1.22
build/int.cloudbuild.yaml
  • cft/developer-tools 1.22
build/lint.cloudbuild.yaml
  • cft/developer-tools 1.22
terraform
examples/minimal/main.tf
  • GoogleCloudPlatform/lb-internal/google ~> 5.0
examples/simple/main.tf
  • terraform-google-modules/lb-internal/google ~> 5.0
  • GoogleCloudPlatform/lb/google ~> 4.0
examples/simple/mig.tf
  • terraform-google-modules/vm/google ~> 11.0
  • terraform-google-modules/vm/google ~> 11.0
  • terraform-google-modules/vm/google ~> 11.0
  • terraform-google-modules/vm/google ~> 11.0
  • terraform-google-modules/vm/google ~> 11.0
  • terraform-google-modules/vm/google ~> 11.0
test/fixtures/minimal/main.tf
test/setup/main.tf
  • terraform-google-modules/project-factory/google ~> 15.0
test/setup/versions.tf
  • google < 6
  • google-beta < 6
  • hashicorp/terraform >= 0.13
versions.tf
  • google >= 4.26, < 6
  • google-beta >= 4.26, < 6
  • hashicorp/terraform >= 1.3

  • Check this box to trigger a request for Renovate to run again on this repository

Add failover_policy block

TL;DR

Support the failover_policy block, for example to enable drop_traffic_if_unhealthy.
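A minimal sketch of the requested block on the underlying resource (resource names are hypothetical; this is not module functionality today):

resource "google_compute_health_check" "hc" {
  name = "example-hc"

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_region_backend_service" "default" {
  name                  = "example-ilb-backend"
  region                = "us-central1"
  load_balancing_scheme = "INTERNAL"
  health_checks         = [google_compute_health_check.hc.id]

  failover_policy {
    # Drop traffic when all primary and failover backends are unhealthy,
    # instead of distributing across all backends regardless of health.
    drop_traffic_if_unhealthy = true
    failover_ratio            = 0.5
  }
}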

Terraform Resources

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_region_backend_service

Detailed design

No response

Additional information

Thanks !

Add region in google_compute_network / subnetwork

On the subnetwork data source, we should add the region parameter: if it is unset in the provider, the module will fail to run.

However, setting it in the provider is not always the right thing to do (if you work with multiple regions, for instance), and it would be nice to be able to pass it explicitly to the data source.

Can we please add a region variable?
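A minimal sketch of the requested behavior (the subnetwork name is illustrative):

variable "region" {
  description = "Region of the subnetwork; avoids relying on the provider-level region."
  type        = string
}

data "google_compute_subnetwork" "ilb" {
  name   = "my-subnetwork"
  region = var.region  # passed explicitly instead of inherited from the provider
}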

Add support for internal http loadbalancer

TL;DR

I am trying to create an HTTP internal load balancer, but the module creates a TCP internal load balancer even after I pass all the parameters. Could you please help make this module usable for both HTTP and TCP internal load balancers?

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

resource "google_compute_forwarding_rule" "default" {
  project               = var.project
  name                  = var.name
  region                = var.region
  network               = data.google_compute_network.network.self_link
  subnetwork            = data.google_compute_subnetwork.network.self_link
  allow_global_access   = var.global_access
  load_balancing_scheme = "INTERNAL"   /// can you make this as a varriable ?  so that we can use "INTERNAL_MANAGED"

  backend_service       = google_compute_region_backend_service.default.self_link
  ip_address            = var.ip_address
  ip_protocol           = var.ip_protocol
  ports                 = var.ports
  all_ports             = var.all_ports
  service_label         = var.service_label
  labels                = var.labels
}
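A minimal sketch of the requested variable (hypothetical; the module currently hardcodes the scheme):

variable "load_balancing_scheme" {
  description = "Load balancing scheme for the forwarding rule."
  type        = string
  default     = "INTERNAL"

  validation {
    condition     = contains(["INTERNAL", "INTERNAL_MANAGED"], var.load_balancing_scheme)
    error_message = "load_balancing_scheme must be INTERNAL or INTERNAL_MANAGED."
  }
}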

Terraform Version

latest

Additional information

No response

forwarding_rule support argument network_tier

TL;DR

I'm trying to create a TCP internal LB and hit this error: "Error creating ForwardingRule: googleapi: Error 400: STANDARD network tier (the project's default network tier) is not supported: Network tier other than PREMIUM is not supported for loadBalancingScheme=INTERNAL., badRequest"

Expected behavior

Support the network_tier argument on google_compute_forwarding_rule.

Observed behavior

My project's Network Service Tier is configured as Standard, and ip_address is not set. The error pops up when the resource is created. To bypass this issue, I have to set network_tier to "PREMIUM" explicitly in google_compute_forwarding_rule.
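An alternative workaround, assuming the project-wide default tier can be changed, is to set it to PREMIUM once at the project level (a sketch):

resource "google_compute_project_default_network_tier" "default" {
  network_tier = "PREMIUM"  # internal load balancers require the PREMIUM tier
}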

Terraform Configuration

resource "google_compute_forwarding_rule" "default" {
  project               = var.project
  name                  = var.name
  region                = var.region
  network               = data.google_compute_network.network.self_link
  subnetwork            = data.google_compute_subnetwork.network.self_link
  network_tier          = "PREMIUM"
  allow_global_access   = var.global_access
  load_balancing_scheme = "INTERNAL"
  backend_service       = google_compute_region_backend_service.default.self_link
  ip_address            = var.ip_address
  ip_protocol           = var.ip_protocol
  ports                 = var.ports
  all_ports             = var.all_ports
  service_label         = var.service_label
  labels                = var.labels
}


module "test_ilb" {
  source        = "GoogleCloudPlatform/lb-internal/google"
  version       = "~> 5.0"
  project       = var.project_id
  global_access = false
  network       = data.google_compute_network.my-network.name
  subnetwork    = data.google_compute_subnetwork.my-subnetwork.name
  region        = var.region
  name          = local.resource_name
  ports         = ["8080"]
  source_tags   = []
  target_tags   = []
  backends      = []
  health_check  = local.health_check
}

Terraform Version

$ terraform version
Terraform v1.3.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v4.84.0
+ provider registry.terraform.io/hashicorp/google-beta v4.84.0
+ provider registry.terraform.io/hashicorp/random v3.5.1

Additional information

No response

Offer reasonable default values for attributes in var.health_check

TL;DR

var.health_check is defined as an object but does not use optional attributes. Therefore, all attribute values must be provided, even when an attribute is not relevant or the provider default values are suitable.

Terraform Resources

google_compute_health_check

Detailed design

// variables.tf for child module
variable "health_check" {
  type = object({
    name                = string
    type                = optional(string, "TCP")
    port                = optional(number, 80)
    check_interval_sec  = optional(number, 5)
    healthy_threshold   = optional(number, 2)
    proxy_header        = optional(string, "NONE")
    enable_log          = optional(bool, false)
  })
}

// Example call from parent module
module "gce-ilb" {
  source  = "GoogleCloudPlatform/lb-internal/google"
  version = "~> 2.0"
  name    = "smtp-relays"
  health_check = {
    name                = "smtp-hc"
    port                = 25
    check_interval_sec  = 15
  }
}

Additional information

This would require Terraform 1.3 or higher.

Backend service network not configurable

When I use examples/simple and set the network to something other than "default", I get the following error:

Error: Error creating ForwardingRule: googleapi: Error 400: Invalid value for field 'resource.backendService': 'https://www.googleapis.com/compute/v1/projects/project/regions/us-west1/backendServices/group-ilb-with-http-hc'. Forwarding rule network projects/project/global/networks/default must be the same as backend service network projects/project/global/networks/lb-network., invalid

When I tried modifying the backend service in main.tf to include the network attribute and load_balancing_scheme = "INTERNAL", I got this error:

Error: Error updating RegionBackendService "projects/project/regions/us-west1/backendServices/group-ilb-with-http-hc": googleapi: Error 400: Invalid value for field 'resource.network': 'projects/project/global/networks/default'. Network field cannot be modified., invalid
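For reference, a sketch of setting the backend service network at creation time (names are hypothetical); the second error above indicates the network field cannot be changed on an existing backend service, so it has to be set when the resource is first created:

data "google_compute_network" "lb" {
  name = "lb-network"
}

resource "google_compute_health_check" "hc" {
  name = "http-hc"

  http_health_check {
    port = 80
  }
}

resource "google_compute_region_backend_service" "default" {
  name                  = "group-ilb-with-http-hc"
  region                = "us-west1"
  load_balancing_scheme = "INTERNAL"
  network               = data.google_compute_network.lb.self_link  # must match the forwarding rule's network
  health_checks         = [google_compute_health_check.hc.id]
}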

Issue with health_check and port_name var

When using this module with health_check, we have to pass the port_name variable in the health_check object, but that clashes with the code in main.tf on lines 75 and 95.

The issue is resolved if we set provider = google-beta on the resources google_compute_health_check.tcp and google_compute_health_check.http, as sketched after the error output below.

Error: Unsupported argument
on .terraform/modules/gce-ilb/terraform-google-lb-internal-2.1.0/main.tf line 75, in resource "google_compute_health_check" "tcp":
75: port_name = var.health_check["port_name"]
An argument named "port_name" is not expected here.
Error: Unsupported argument
on .terraform/modules/gce-ilb/terraform-google-lb-internal-2.1.0/main.tf line 95, in resource "google_compute_health_check" "http":
95: port_name = var.health_check["port_name"]
An argument named "port_name" is not expected here.
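A minimal sketch of the workaround (resource shape is illustrative; at this module version, port_name required the beta provider):

resource "google_compute_health_check" "tcp" {
  provider = google-beta  # assumes a google-beta provider is configured
  name     = "example-tcp-hc"

  tcp_health_check {
    port_name          = "health-check-port"
    port_specification = "USE_NAMED_PORT"
  }
}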

Release a new version with support for labels

TL;DR

Support for labels was added in #96, but the latest release is v4.6.0 from May 17 2022. Please release a new version with the latest changes included. Thanks!

Terraform Resources

No response

Detailed design

No response

Additional information

No response

Backends not coming up due to various issues with the startup script

About the simple example...

gerritd@ said:

backend service: group-ilb-with-http-hc

  • backend: mig2-mig
    ++ VM: mig2-3v54
  • backend: mig3-mig
    ++ VM: mig3-d6ds

Looking at the startup script, it looks like you're trying to install and configure Apache 2. I connected to both of the VMs and neither had Apache 2 installed:
systemctl status apache2

Unit apache2.service could not be found.

And these VMs both run CentOS, which uses yum instead of apt. So these first two commands in the startup script would fail...
apt-get update
apt-get install -y apache2 libapache2-mod-php

And the script runs with set -e (bash's -e flag), which ensures that if one command in the script fails, the script just exits.

So we know that we need to modify the startup script so that it works with CentOS and RHEL.

Next, these VMs don't have external IP addresses or internet access via Cloud NAT. Unfortunately, the software repositories to which apt or yum would need to connect are "on the internet." To fix this problem, I created a Cloud NAT gateway (nat-for-us-central1-and-default-network) applicable to VMs lacking external IPs in any subnet range in the us-central1 region of the default network.

I ran these commands (as root) to get Apache 2 installed and running:
yum update -y
yum install -y httpd
systemctl enable httpd
systemctl restart httpd

I did just enough manual work and created a /var/www/html/index.html file with the text "howdy" in it so the HTTP health check would pass (it was returning HTTP 403 with the php file).

We need to get the php file working or to do something which accomplishes the same thing.
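A sketch of how the fixes above could be wired into the example's instance template (resource names and image are hypothetical; the startup script mirrors the yum commands listed earlier):

resource "google_compute_instance_template" "mig" {
  name_prefix  = "mig-"
  machine_type = "e2-small"

  disk {
    source_image = "centos-cloud/centos-7"
  }

  network_interface {
    network = "default"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
    set -e
    yum update -y
    yum install -y httpd
    systemctl enable httpd
    systemctl restart httpd
    echo "howdy" > /var/www/html/index.html
  EOT
}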

unable to create and attach Target Proxies

TL;DR

I am trying to create and attach forwarding rules to target proxies for my internal load balancer, but I am not able to do so because the lookup happens in "global" instead of the region. All the resources have been created regionally, as I am using a regional internal LB. How can we solve this problem?

ERROR:
10:29:21 module.gce-ilb-https.google_compute_forwarding_rule.default: Creating...
10:29:22
10:29:22 Error: Error creating ForwardingRule: googleapi: Error 404: The resource 'projects//global/networks/<vpc_network>' was not found, notFound
10:29:22
10:29:22 with module.gce-ilb-https.google_compute_forwarding_rule.default,

The resources I use are shown in the Terraform Configuration section below.

NOTE: all other resources are created regionally. When I manually update the LB in the console, a regional target proxy is created, but when I try to use the same approach in Terraform, it looks in global instead of the region.

Expected behavior

No response

Observed behavior

The forwarding rule should pick up or create the target proxy in the given region instead of globally when a region option is provided.

Terraform Configuration

# Regional forwarding rule
resource "google_compute_forwarding_rule" "default" {
  name                  = "${var.env}-${var.region}-ilb-forwarding-rule"
  region                = "${var.region}"
  depends_on            = [local.proxy_subnet]
  ip_protocol           = "TCP"
  ip_address            = local.address
  load_balancing_scheme = "INTERNAL_MANAGED"
  port_range            = "80"
  target                = google_compute_region_target_http_proxy.default.self_link
  network               = local.network
  subnetwork            = local.subnetwork
  network_tier          = "PREMIUM"
}

# Regional HTTP proxy
resource "google_compute_region_target_http_proxy" "default" {
#  count   = local.create_http_forward ? 1 : 0
  name       = "${var.env}-${var.region}-ilb-target-proxy"
  region     = "${var.region}"
  project    = "${var.project_id}"
  url_map    = local.url_map
}

Terraform Version

using hashicorp/google v4.31.0

Additional information

No response

data lookup forcing new resource on apply

Using the module as follows:

module "internal-lb" {
  source      = "../gcloud-lb-internal"
  project     = "${var.project}"
  region      = "${var.region}"
  name        = "${var.vpc}-es-lb-internal"
  network     = "${var.vpc}"
  subnetwork  = "${data.google_compute_subnetwork.link.name}"
  ports       = ["9200"]
  health_port = "9200"
  source_tags = "${var.lb_source_tags}"
  target_tags = ["${module.cluster.K8S_TAG}"]

  # Related to number of instances / cluster_zones in module.cluster
  backends = [
    { group = "${module.cluster.K8S_INSTANCE_GROUP_URLS[0]}" },
  ]
}

I have found that, because of the data lookup within the module code for the subnetwork name / self_link, Terraform always plans to recreate the forwarding_rule.

Example (minus all the things that stay the same):

-/+ module.elastic.module.internal-lb.google_compute_forwarding_rule.default (new resource required)
      id:                                  "vpc-du-es-lb-internal" => <computed> (forces new resource)
      network:                             "https://www.googleapis.com/compute/v1/projects/sopost-infra-dev/global/networks/vpc-du" => "${data.google_compute_network.network.self_link}" (forces new resource)
      self_link:                           "https://www.googleapis.com/compute/v1/projects/sopost-infra-dev/regions/us-central1/forwardingRules/vpc-du-es-lb-internal" => <computed>
      service_name:                        "" => <computed>
      subnetwork:                          "https://www.googleapis.com/compute/v1/projects/sopost-infra-dev/regions/us-central1/subnetworks/elastic-sub" => "${data.google_compute_subnetwork.network.self_link}" (forces new resource)

Normally not an issue, but it's not the quickest of actions to perform on Google Cloud; however, I can see why you've done it that way (fewer inputs).

I happen to be wrapping this module and passing the subnetwork to ensure everything is created in a specific subnetwork, but I get the same result when I hard-code the value of subnetwork for the input.

Not sure how this is avoidable currently without taking the data lookups out and passing the value explicitly. Unless I am missing something?
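One possible shape for that change, as a sketch (variable names are hypothetical): accept a full self_link directly and only perform the data lookup when one is not provided.

variable "subnetwork" {
  description = "Name of the subnetwork, used for the data lookup."
  type        = string
  default     = "default"
}

variable "subnetwork_self_link" {
  description = "Fully qualified subnetwork self_link; when set, the data lookup is skipped."
  type        = string
  default     = ""
}

data "google_compute_subnetwork" "network" {
  count  = var.subnetwork_self_link == "" ? 1 : 0
  name   = var.subnetwork
  region = "us-central1"  # illustrative; the module would derive this from var.region
}

locals {
  subnetwork_link = var.subnetwork_self_link != "" ? var.subnetwork_self_link : data.google_compute_subnetwork.network[0].self_link
}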

Connection Draining Timeout

Hi, I am looking to use this module but noticed that there is no way to set connection_draining_timeout on the google_compute_region_backend_service resource. Can this please be added as a variable and input?
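For reference, the module's inputs now include connection_draining_timeout_sec (see the Inputs table above). On the underlying resource it looks like this (a sketch with hypothetical names):

resource "google_compute_health_check" "hc" {
  name = "example-hc"

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_region_backend_service" "default" {
  name                            = "example-ilb-backend"
  region                          = "us-central1"
  load_balancing_scheme           = "INTERNAL"
  health_checks                   = [google_compute_health_check.hc.id]
  connection_draining_timeout_sec = 300  # seconds to let existing connections drain on instance removal
}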

Issue for me

I have cloned the code and run it, and I am getting the errors below. Any suggestions on what the issue could be?

Error: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/praveen-sahukara/global/healthChecks/group-ilb-hc-http' already exists, alreadyExists

on .terraform/modules/gce-ilb/terraform-google-lb-internal-2.1.0/main.tf line 80, in resource "google_compute_health_check" "http":
80: resource "google_compute_health_check" "http" {

Error: Error creating HttpHealthCheck: googleapi: Error 409: The resource 'projects/praveen-sahukara/global/httpHealthChecks/group1-lb-hc' already exists, alreadyExists

on .terraform/modules/gce-lb-fr/terraform-google-lb-2.2.0/main.tf line 41, in resource "google_compute_http_health_check" "default":
41: resource "google_compute_http_health_check" "default" {

Error: cannot determine self_link for subnetwork "mysubnet-01": Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.

on .terraform/modules/instance_template2/terraform-google-vm-1.4.1/modules/instance_template/main.tf line 57, in resource "google_compute_instance_template" "tpl":
57: resource "google_compute_instance_template" "tpl" {

Error: cannot determine self_link for subnetwork "mysubnet-01": Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.

on .terraform/modules/instance_template3/terraform-google-vm-1.4.1/modules/instance_template/main.tf line 57, in resource "google_compute_instance_template" "tpl":
57: resource "google_compute_instance_template" "tpl" {

Failed Healthcheck

So using the latest version of the module in the following:

// Client internal client load balancer IP address
resource "google_compute_address" "es_client_ilb" {
  name         = "${var.cluster_name}-client-ilb"
  address_type = "INTERNAL"
  subnetwork    = data.google_compute_subnetwork.default.self_link
  project      = var.project
}

// Client internal load balancer
module "es_client_ilb" {
  source     = "GoogleCloudPlatform/lb-internal/google"
  version    = "~> 2.0"
  project    = var.project
  region     = var.region
  name       = "${var.cluster_name}-client-ilb"
  ip_address = google_compute_address.es_client_ilb.address
  ports      = ["9200", "9300"]
  health_check = {
    type                = "http"
    check_interval_sec  = 1
    healthy_threshold   = 4
    timeout_sec         = 1
    unhealthy_threshold = 5
    proxy_header        = "NONE"
    port                = 9200
    port_name           = "health-check-port"
    request_path        = "/"
  }
  source_tags = ["${var.cluster_name}-kibana", "${var.cluster_name}-external"]
  target_tags = ["${var.cluster_name}-client"]
  network     = "default"
  subnetwork  = "default"

  backends = [
    {
      group = module.es_client.instance_group
      description = "elasticsearch-clients"
    },
  ]
}

When I go into the console, it shows no healthy nodes. My MIG looks like this:

module "es_client" {
  source        = "../../../modules/terraform-elasticsearch"
  cluster_name  = var.cluster_name
  name          = "${var.cluster_name}-client"
  region        = var.region
  zones         = var.zones
  num_nodes     = var.client_num_nodes
  machine_type  = var.client_machine_type
  heap_size     = var.client_heap_size
  masters_count = format("%d", floor(var.master_num_nodes / 2 + 1))
  master_node   = false
  data_node     = false
  access_config = []
  network       = data.google_compute_network.default.self_link
  subnetwork    = data.google_compute_subnetwork.default.self_link
  subnetwork_project = var.subnetwork_project_id
  project       = var.project
  node_tags     = [var.cluster_name]
  hostname      = "es-client"

  source_image_family = "debian-9"
  source_image_project =  "debian-cloud"

  node_labels   = {
    environment = "staging"
    department  = "engineering"
    application = "elasticsearch"
    terraform_created = "true"
  }  
  named_ports        = local.named_ports
  service_account = {
    email  = "[email protected]"
    scopes = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}

This calls into the terraform-elasticsearch module.

data "template_file" "node-startup-script" {
  template = file("${path.module}/config/user_data.sh")

  vars = {
    project_id             = var.project
    zones                  = join(",", var.zones)
    elasticsearch_data_dir = var.elasticsearch_data_dir
    elasticsearch_logs_dir = var.elasticsearch_logs_dir
    heap_size              = var.heap_size
    cluster_name           = var.cluster_name
    minimum_master_nodes   = var.masters_count
    master                 = var.master_node ? "true" : "false"
    data                   = var.data_node ? "true" : "false"
    ingest                 = var.ingest_node ? "true" : "false"
    http_enabled           = var.http_enabled ? "true" : "false"
    security_enabled       = var.security_enabled ? "true" : "false"
    monitoring_enabled     = var.monitoring_enabled ? "true" : "false"
  }
}


module "instance_template" {
  source                  = "terraform-google-modules/vm/google//modules/instance_template"
  version                 = "1.3.0"

  project_id              = var.project
  machine_type            = var.machine_type
  tags                    = var.node_tags
  labels                  = var.node_labels
  startup_script          = data.template_file.node-startup-script.rendered

  /* network */
  network                 = var.network
  subnetwork              = var.subnetwork
  subnetwork_project      = var.subnetwork_project
  can_ip_forward          = var.can_ip_forward

  /* image */
  source_image            = var.source_image
  source_image_family     = var.source_image_family
  source_image_project    = var.source_image_project

  /* disks */
  disk_size_gb            = var.disk_size_gb
  disk_type               = var.disk_type
  auto_delete             = var.auto_delete
  additional_disks        = var.additional_disks

  service_account         = var.service_account
  
}


module "node" {
  source                  = "terraform-google-modules/vm/google//modules/mig"
  version                 = "1.3.0"
  project_id                = var.project
  network                   = var.network
 /* subnetwork                = var.subnetwork
  subnetwork_project        = var.subnetwork_project */
  hostname                  = var.hostname
  region                    = var.region
  instance_template         = module.instance_template.self_link
  target_size               = var.num_nodes
  target_pools              = var.target_pools
  distribution_policy_zones = var.distribution_policy_zones
  update_policy             = var.update_policy
  named_ports               = var.named_ports
  min_replicas              = var.num_nodes
}


data "google_compute_region_instance_group" "default" {
  self_link = module.node.self_link
}

Cluster firewall rule is as such

// Cluster firewall
resource "google_compute_firewall" "cluster" {
  name    = var.cluster_name
  network = data.google_compute_network.default.self_link
  project = var.project

  allow {
    protocol = "tcp"
    ports    = ["9200", "9300"]
  }

  source_tags = [var.cluster_name, "${var.cluster_name}-external",var.k8s_cluster_tag,"elasticsearch"]
  target_tags = [var.cluster_name, "elasticsearch"]
}

I'm at a loss as to why this wouldn't work. I have confirmed that the nodes are receiving traffic if I call them by IP address from other nodes in my project.
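One thing worth checking, given the module's create_health_check_firewall behavior: Google health check probes originate from Google's probe ranges rather than from instances carrying the source_tags, so they need their own allow rule. A sketch (the target tag is hypothetical):

resource "google_compute_firewall" "allow_health_checks" {
  name    = "allow-health-checks"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["9200"]
  }

  # Google Cloud health check probe source ranges.
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
  target_tags   = ["es-client"]  # hypothetical tag matching the backend instances
}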

Firewall rule fails open if no sources specified

The firewall for this module fails open (to range 0.0.0.0/0) if no sources are specified.

This is concerning as it leaves unaware users of this module one step away from opening their load balancer to traffic from anywhere, possibly without realising.
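With current module versions, the sources can be pinned explicitly through inputs such as source_ip_ranges (see the Inputs table above). A minimal sketch with illustrative values:

module "gce-ilb" {
  source  = "GoogleCloudPlatform/lb-internal/google"
  version = "~> 6.0"

  region           = "us-central1"
  name             = "pinned-ilb"
  ports            = ["80"]
  source_ip_ranges = ["10.10.0.0/16"]  # restrict sources instead of falling open to 0.0.0.0/0
  source_tags      = []
  target_tags      = ["allow-ilb"]
  backends         = []

  health_check = {
    type = "tcp"
    port = 80
  }
}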

Updating health check type fails

Trying to update the health check type (e.g. changing it from 'tcp' to 'http') results in the following issue:

Error: Error reading HealthCheck: googleapi: Error 400: The health_check resource 'projects/rnm-sharedsvcs-net-squid-9dbc/global/healthChecks/squid-ilb-hc' is already being used by 'projects/rnm-sharedsvcs-net-squid-9dbc/regions/europe-west1/backendServices/squid-ilb', resourceInUseByAnotherResource

Error: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/rnm-sharedsvcs-net-squid-9dbc/global/healthChecks/squid-ilb-hc' already exists, alreadyExists

Create Private Application Load Balancer (HTTP/S) using SSL certificate

TL;DR

I am trying to create an internal HTTPS load balancer, but there is no way to use an SSL certificate.

Terraform Resources

- https://cloud.google.com/load-balancing/docs/l7-internal#ssl_certificates

Detailed design

No response

Additional information

Give the option to use an already-created certificate or one created by Terraform.
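A sketch of the kind of resource this would involve (name and file paths are hypothetical; the module does not currently create it):

resource "google_compute_region_ssl_certificate" "ilb" {
  name        = "ilb-cert"
  region      = "us-central1"
  private_key = file("${path.module}/key.pem")
  certificate = file("${path.module}/cert.pem")
}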

Firewall rules do not allow specifying protocol "all"

default-ilb-fw cannot create a firewall rule with protocol "all". This is a requirement when the ILB backends act as ILB-as-next-hop for network appliances. Alternatively, creation of the default-ilb-fw rule should be optional, so the firewall can be created by other means.
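A sketch of the rule shape being requested (names and ranges are illustrative):

resource "google_compute_firewall" "ilb_all_protocols" {
  name    = "default-ilb-fw-all"
  network = "default"

  allow {
    protocol = "all"  # needed when backends forward arbitrary traffic as next hops
  }

  source_ranges = ["10.0.0.0/8"]
  target_tags   = ["ilb-backends"]
}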

Update Getting Started

The Usage section is not up to date anymore.

module "gce-ilb" {
  source       = "terraform-google-modules/lb-internal/google"
 ... 
}

It gives the following error:

Error: Module not found

Module "gce-ilb" (from main.tf:106) cannot be found in the module registry at
registry.terraform.io.

Should be:

module "gce-ilb" {
  source       = "GoogleCloudPlatform/lb-internal/google"
 ... 
}

Missing parameters for HTTP health checks + new release?

Hello,

The module does not provide all parameters for health checking (https://www.terraform.io/docs/providers/google/r/compute_http_health_check.html), especially the following ones (available from the GCP web UI):

  • healthy_threshold
  • unhealthy_threshold
  • check_interval_sec
  • timeout_sec
  • request_path (PR #2 adds support for this but is not merged yet)

Will those parameters be added in future releases? That would be really helpful, since the default parameters are not always appropriate (sometimes more aggressive health checking is required).

Moreover, is there a release planned (1.0.5) with HTTP health check support (16f3c18)? Even without the parameters specified above, it would be great, because HTTP health checking is used most of the time instead of TCP.

Thank you very much.

[Ask] load_balancing_scheme INTERNAL is hardcoded


Is there any module option to create an INTERNAL_MANAGED LB? We face an issue with one MIG behind two different LBs:

  • one LB is external HTTP
  • one LB is internal

It all fails due to the different balancing modes: the external HTTP LB uses UTILIZATION, while the internal LB only accepts CONNECTION. Could load_balancing_scheme be made a variable?
