
terraform-aws-ecs-alb-service-task's Issues

Terraform Cloud - module ecs_alb_service_task has a broken symlink

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug


When running in Terraform Cloud, the following error results:
failed pulling filesystem: ssh: failed parsing message: failed scanning message: expected integer

Running it locally on my laptop works fine; the only issue is with the VCS --> Terraform Cloud execution.

I was able to find that the module ecs_alb_service_task has a broken symlink:

vagrant@ubuntu:~/ticket58305/service-7$ find . -type l -exec ls -la {} \;
lrwxrwxrwx 1 vagrant vagrant 34 Oct 20 23:20 ./.terraform/modules/ecs_alb_service_task/terraform-aws-ecs-alb-service-task -> terraform-aws-ecs-alb-service-task


Steps to Reproduce

Steps to reproduce the behavior:

  1. Go to https://app.terraform.io/
  2. Run the terraform-aws-ecs-alb-service-task module with a remote build
  3. See the error: failed pulling filesystem: ssh: failed parsing message: failed scanning message: expected integer

Screenshots

See the attached log file.

Environment (please complete the following information):


  • Terraform: v1.0.9, linux_amd64
  • Module: "cloudposse/ecs-alb-service-task/aws" version "0.58.0"

Additional Context

Attached plan log: run-kreVEvVqQPJMawqm-plan-log.txt

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module
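A minimal invocation might look roughly like this (a hedged sketch: the version, cluster, security group, and target group references are placeholders, and json_map_object comes from the companion cloudposse/ecs-container-definition module as used elsewhere in these issues):

```hcl
module "ecs_alb_service_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.58.0" # placeholder version

  namespace                 = "eg"
  stage                     = "test"
  name                      = "app"
  ecs_cluster_arn           = aws_ecs_cluster.default.arn # placeholder cluster
  vpc_id                    = var.vpc_id
  subnet_ids                = var.private_subnet_ids
  security_group_ids        = [aws_security_group.app.id] # placeholder security group
  launch_type               = "FARGATE"
  network_mode              = "awsvpc"
  task_cpu                  = 256
  task_memory               = 512
  container_definition_json = module.container_definition.json_map_object

  ecs_load_balancers = [{
    container_name   = "app"
    container_port   = 80
    elb_name         = null
    target_group_arn = aws_lb_target_group.default.arn # placeholder target group
  }]
}
```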

InvalidParameterException: The new ARN and resource ID format must be enabled to propagate tags. Opt in to the new format and try again.

I am getting this error with the latest Terraform 0.12:

module.ecs_alb_service_task.aws_security_group_rule.allow_all_egress[0]: Creation complete after 8s [id=sgrule-3293011397]

Error: InvalidParameterException: The new ARN and resource ID format must be enabled to propagate tags. Opt in to the new format and try again.
        status code: 400, request id: ea7d2db5-1a84-4932-a4ad-89f5a826085b "eg-test-ecs-alb-service-task"

  on ..\..\..\modules\services\aws_ecs_alb_service_task\main.tf line 235, in resource "aws_ecs_service" "ignore_changes_task_definition":
 235: resource "aws_ecs_service" "ignore_changes_task_definition" {


Variable ecs_load_balancers definition

Hello. First of all, thank you very much for the module!

Not sure if this is a bug, but trying to attach a Target Group via the ecs_load_balancers block raises an error with both possible definitions, see below:

Terraform Version

Terraform v0.12.12

First try:

module "ecs_alb_service_task" {
//...
  ecs_load_balancers          = [
    {
      target_group_arn  = join("", aws_lb_target_group.module_tg.*.arn)
      container_name    = var.container_name
      container_port    = var.container_port
    }
  ]
//...
}

Crash Output

Error: Invalid value for module argument
...
The given value is not suitable for child module variable "ecs_load_balancers"
...
element 0: attribute "elb_name" is required.

Well...

Second try - add elb_name:

module "ecs_alb_service_task" {
//...
  ecs_load_balancers          = [
    {
      elb_name = "some_value"
      target_group_arn  = join("", aws_lb_target_group.module_tg.*.arn)
      container_name    = var.container_name
      container_port    = var.container_port
    }
  ]
//...
}

Crash Output

Error: InvalidParameterException: loadBalancerName and targetGroupArn cannot both be specified. You must specify either a loadBalancerName or a targetGroupArn.
	status code: 400, request id: ...
... in resource "aws_ecs_service" "default":

As a workaround, I cloned the module and removed elb_name from variables.tf. Since the module is called ...alb-service-task, I think that ELB isn't so important :)

authorization_config in efs_volume_configuration will never be referenced


Describe the Bug

authorization_config in efs_volume_configuration references the outer volume.value object rather than efs_volume_configuration.value

Expected Behavior

authorization_config should be referenced correctly

Steps to Reproduce

Create a Fargate task with EFS volume, access point, and IAM auth

Screenshots

N/A

Environment (please complete the following information):

N/A

Additional Context

variable "volumes" {
  type = list(object({
    host_path = string
    name      = string
    docker_volume_configuration = list(object({
      autoprovision = bool
      driver        = string
      driver_opts   = map(string)
      labels        = map(string)
      scope         = string
    }))
    efs_volume_configuration = list(object({
      file_system_id          = string
      root_directory          = string
      transit_encryption      = string
      transit_encryption_port = string
      authorization_config = list(object({
        access_point_id = string
        iam             = string
      }))
    }))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}

authorization_config is a member of efs_volume_configuration, which is a member of the list of volumes. However, the module looks for authorization_config within the parent volumes object:

      dynamic "efs_volume_configuration" {
        for_each = lookup(volume.value, "efs_volume_configuration", [])
        content {
          file_system_id          = lookup(efs_volume_configuration.value, "file_system_id", null)
          root_directory          = lookup(efs_volume_configuration.value, "root_directory", null)
          transit_encryption      = lookup(efs_volume_configuration.value, "transit_encryption", null)
          transit_encryption_port = lookup(efs_volume_configuration.value, "transit_encryption_port", null)
          dynamic "authorization_config" {
            for_each = lookup(volume.value, "authorization_config", [])
            content {
              access_point_id = lookup(authorization_config.value, "access_point_id", null)
              iam             = lookup(authorization_config.value, "iam", null)
            }
          }
        }
      }

Specifically: for_each = lookup(volume.value, "authorization_config", [])
Should be: for_each = lookup(efs_volume_configuration.value, "authorization_config", [])
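Applied in place, the corrected inner block would read (same as the module's code above, with only the lookup target changed):

```hcl
dynamic "authorization_config" {
  # Look up authorization_config on the EFS configuration object,
  # not on the outer volume object
  for_each = lookup(efs_volume_configuration.value, "authorization_config", [])
  content {
    access_point_id = lookup(authorization_config.value, "access_point_id", null)
    iam             = lookup(authorization_config.value, "iam", null)
  }
}
```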

Invalid Network Configuration for "Bridge" network type

I'm getting the following error when trying to set up a service task with network_mode = "bridge":

aws_ecs_service.default: InvalidParameterException: Network Configuration is not valid for the given networkMode of this task definition.

Per the Terraform aws_ecs_service docs, it looks like you can't specify a network_configuration block for an ECS service unless the service uses the awsvpc networking mode.

I'm going to fork and modify to fit our needs, but would be great to get an update/example of how to use this with a bridge network configuration.
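One possible fix (a sketch, not necessarily the module's eventual implementation) is to emit the network_configuration block only in awsvpc mode via a dynamic block:

```hcl
dynamic "network_configuration" {
  # Only awsvpc tasks accept a network_configuration block;
  # bridge/host mode services must omit it entirely
  for_each = var.network_mode == "awsvpc" ? ["true"] : []
  content {
    security_groups  = var.security_group_ids
    subnets          = var.subnet_ids
    assign_public_ip = var.assign_public_ip
  }
}
```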

failed creating ECS Task Definition with bind volume


Describe the Bug

Error: failed creating ECS Task Definition: ClientException: Fargate compatible task definitions do not support sourcePath
I do not have a sourcePath value in my code.

Expected Behavior

ECS service with bind volumes

Steps to Reproduce

module "container_definition_mysql" {
  source          = "registry.terraform.io/cloudposse/ecs-container-definition/aws"
  version         = "0.58.1"
  container_name  = "mysql"
....


module "ecs_core_service" {
  source             = "registry.terraform.io/cloudposse/ecs-alb-service-task/aws"
  version            = "0.66.2"
  name               = "taskname"
  container_definition_json =  module.container_definition_mysql.json_map_object
  runtime_platform = [{
    operating_system_family = "LINUX"
    cpu_architecture        = "X86_64"
  }]
  network_mode        = "awsvpc"
  launch_type         = "FARGATE"
  scheduling_strategy = "REPLICA"
  bind_mount_volumes = [{
    name      = "service-storage"
    host_path = "/var/logs"
  }]
}

Error

│ Error: failed creating ECS Task Definition (taskname): ClientException: Fargate compatible task definitions do not support sourcePath
│ 
│   with module.core-application.module.ecs_core_service.aws_ecs_task_definition.default[0],
│   on .terraform/modules/core-application.ecs_core_service/main.tf line 40, in resource "aws_ecs_task_definition" "default":
│   40: resource "aws_ecs_task_definition" "default" {
│ 

Versions

  • AWS provider 4.33
  • Terraform 1.3.1
  • ECS Fargate cluster 1.4.0

make aws_ecs_service `name` configurable

Describe the Feature

It would be a benefit if you could change the label_order for all aws_ecs_service resources.

Expected Behavior

ECS service name should be independent.

Use Case

If you pass a lot of context down to your module, it bloats your service name very quickly and makes it unfriendly to read (especially inside the AWS console). Modifying the whole context isn't a solution because it would change the other resource names and tags as well, which is not ideal, or even impossible if you have, for example, multiple environments with the same service name.

Describe Ideal Solution

I would like to have a variable that allows me to modify only the name of the ecs service name.

Alternatives Considered

Alternatives to #183 could be to just introduce a variable only for the aws_ecs_service name.

Additional Context

Scenario: You have one AWS account for your SDLC (software development life cycle) in which you have an ECS cluster for each stage or environment. Your cluster name would be something like namespace-environment or namespace-environment-stage.

Current implementation

Input:

  namespace   = "test"
  environment = "sandbox1"
  stage       = "stage1"
  name        = "myservice1"
  attributes  = ["consumer"]

Result:

name = test-sandbox1-stage1-myservice1-consumer

In that case, you will have a lot of bloated prefixes that don't provide any value.

Recommended implementation

Input:

  namespace   = "test"
  environment = "sandbox1"
  stage       = "stage1"
  name        = "myservice1"
  attributes  = ["consumer"]
  ecs_service_label_order = ["name", "attributes"]

Result:

name = myservice1-consumer

In that case, you would modify the id without losing the tags, but have a much more useful and easy-to-view name.

Important
You don't want to modify the inputs in general, because you will need them for other resources like aws_iam_role with their full name (id) on a multi-environment or multi-stage account.

Container definition rebuild on every run

I'm using this module as part of https://github.com/cloudposse/terraform-aws-ecs-web-app

While working on the web app module, I increased the version to 0.6.0:

module "ecs_alb_service_task" {
  source                    = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=tags/0.6.0"
  name                      = "${var.name}"
  namespace                 = "${var.namespace}"
  stage                     = "${var.stage}"
  alb_target_group_arn      = "${module.alb_ingress.target_group_arn}"
  container_definition_json = "${module.container_definition.json}"
  container_name            = "${module.default_label.id}"
  desired_count             = "${var.desired_count}"
  task_cpu                  = "${var.container_cpu}"
  task_memory               = "${var.container_memory}"
  ecr_repository_name       = "${module.ecr.repository_name}"
  ecs_cluster_arn           = "${var.ecs_cluster_arn}"
  launch_type               = "${var.launch_type}"
  vpc_id                    = "${var.vpc_id}"
  security_group_ids        = ["${var.ecs_security_group_ids}"]
  private_subnet_ids        = ["${var.ecs_private_subnet_ids}"]
  container_port            = "${var.container_port}"
}
module "hw_pipeline" {
    source                                          = "../../../terraform-aws-ecs-web-app"
    namespace                                       = "${var.namespace}"
    stage                                           = "${var.stage}"
    name                                            = "hw"
    listener_arns                                   = ["${module.hw_alb.https_listener_arn}", "${module.hw_alb.http_listener_arn}"]
    listener_arns_count                             = "2"
    aws_logs_region                                 = "us-west-2"

    vpc_id                                          = "${module.vpc.vpc_id}"
    codepipeline_enabled                            = "true"

    ecs_cluster_arn                                 = "${aws_ecs_cluster.ecs_cluster.arn}"
    ecs_cluster_name                                = "${aws_ecs_cluster.ecs_cluster.name}"
    ecs_private_subnet_ids                          = ["${module.app_subnets.az_subnet_ids["us-west-2a"]}", "${module.app_subnets.az_subnet_ids["us-west-2b"]}", "${module.app_subnets.az_subnet_ids["us-west-2c"]}"]

    ecs_security_group_ids                          = ["${aws_security_group.app_traffic.id}"]
    container_cpu                                   = "512"
    container_memory                                = "1024"
    container_image                                 = "xxxxxxx.dkr.ecr.us-west-2.amazonaws.com/bv-staging-hw-ecr:latest"
    container_port                                  = "4000"
    port_mappings                                   = [
      {
        containerPort                               = "4000"
        protocol                                    = "tcp"
      }
    ]

    desired_count                                   = "1"
    alb_name                                        = "${module.hw_alb.alb_name}"
    alb_arn_suffix                                  = "${module.hw_alb.alb_arn_suffix}"
    alb_ingress_healthcheck_path                    = "/"
    alb_ingress_paths                               = ["/*"]
    alb_ingress_listener_priority                   = "100"
    github_oauth_token                              = "${data.aws_ssm_parameter.github_oauth_token.value}"
    repo_owner                                      = "xxxxxxxxx"
    repo_name                                       = "hello_world"
    branch                                          = "master"
    ecs_alarms_enabled                              = "true"
    alb_target_group_alarms_enabled                 = "true"
    alb_target_group_alarms_3xx_threshold           = "25"
    alb_target_group_alarms_4xx_threshold           = "25"
    alb_target_group_alarms_5xx_threshold           = "25"
    alb_target_group_alarms_response_time_threshold = "0.5"
    alb_target_group_alarms_period                  = "300"
    alb_target_group_alarms_evaluation_periods      = "1"

    environment = [
      {
        name = "COOKIE"
        value = "cJzXwLAT8dwD9SSgBITcRI1ib4ejNts4bgagcfhv"
      },
      {
        name = "PORT"
        value = "80"
      },
      {
        name = "DATABASE_URL"
        value ="postgres://${var.db_user}:${data.aws_ssm_parameter.db_password.value}@${module.rds_instance.instance_endpoint}/apidb"
      }
    ]
}

ecs_load_balancers optional attributes

Describe the Bug

I am trying to provision an ECS service + ECS task using an ALB. To configure the load balancer options, I am using this configuration:

locals {
  alb = [{
    container_name   = coalesce(var.alb_container_name, module.this.id)
    container_port   = var.container_port
    elb_name         = null
    target_group_arn = var.target_group_arn
  }]
}

Such local configuration goes directly to the module input like so:

module "ecs_alb_service_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.64.1"

  ...
  ecs_load_balancers                 = local.alb
...

  context = module.this.context
}

Both validation and plan run as expected, but during the apply operation a Terraform error is returned:

module.ecs_alb_service_task.aws_ecs_service.ignore_changes_task_definition[0]: Creating...
╷
│ Error: error creating ECS service (mcoins-sandbox-analytics-denormalizer): InvalidParameterException: Load Balancer Name can not be blank.
│
│   with module.ecs_alb_service_task.aws_ecs_service.ignore_changes_task_definition[0],
│   on .terraform/modules/ecs_alb_service_task/main.tf line 351, in resource "aws_ecs_service" "ignore_changes_task_definition":
│  351: resource "aws_ecs_service" "ignore_changes_task_definition" {
│
╵

This means I'm sending an empty string to the AWS API, causing the issue. That led me to try removing the elb_name value, which produced this other error:

│ Error: Invalid value for input variable
│
│   on main.tf line 110, in module "ecs_alb_service_task":
│  110:   ecs_load_balancers                 = local.alb
│
│ The given value is not suitable for
│ module.ecs_alb_service_task.var.ecs_load_balancers declared at
│ .terraform/modules/ecs_alb_service_task/variables.tf:11,1-30: element 0:
│ attribute "elb_name" is required.

I've looked into the AWS API documentation and the Terraform provider and confirmed that only container_port and container_name are required arguments, so you can choose between a classic ELB and an ALB/NLB.

I suggest adding the optional keyword to both the elb_name and target_group_arn arguments, so that when values are supplied it's not mandatory to supply both. I can also submit a PR with the suggested change if you're OK with it.
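With Terraform 1.3+ optional object attributes, the variable could be declared roughly like this (a sketch of the suggested change, not the module's current definition):

```hcl
variable "ecs_load_balancers" {
  type = list(object({
    container_name   = string
    container_port   = number
    # Either of these may be omitted, so callers can pick
    # a classic ELB or an ALB/NLB target group
    elb_name         = optional(string)
    target_group_arn = optional(string)
  }))
  description = "A list of load balancer config objects for the ECS service"
  default     = []
}
```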

Expected Behavior

Terraform should be able to apply with either Classic or ALB/NLB values, but not require both.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Run an example that supplies ecs_load_balancers value with either classic ELB name or ALB target group ARN.
  2. Run terraform init && terraform apply

Screenshots

Code snippets and errors above

Environment (please complete the following information):

  • OS: macOS, arm64
  • Version: Monterey 12.4
  • Terraform version:
Terraform v1.2.4
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.22.0
+ provider registry.terraform.io/hashicorp/local v2.2.3
+ provider registry.terraform.io/hashicorp/null v3.1.1

Additional Context

None.

Missing support for bind_mount style service volumes

Have a question? Please check out our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

There is support for docker_volumes, fsx_volumes, and efs_volumes, but there is also a 4th default type called bind mount in ECS.

Describe Ideal Solution

Adding a variable called bind_mount_volumes, having it just be a list of objects with host_path and name, and concatting it with the volumes local variable should enable this functionality.
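The described solution could be sketched roughly like this (hypothetical variable and local names, assuming the module already exposes docker_volumes, efs_volumes, and fsx_volumes and merges them into a volumes local):

```hcl
variable "bind_mount_volumes" {
  type        = list(any)
  description = "Task bind mount volume definitions as list of configuration objects (name and optional host_path)"
  default     = []
}

locals {
  # Bind mounts only need a name (and optionally a host_path),
  # so they can simply be appended to the other volume types
  volumes = concat(var.docker_volumes, var.efs_volumes, var.fsx_volumes, var.bind_mount_volumes)
}
```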


Allow reusing an ecs service IAM role


Describe the Feature

I'd like to reuse an ecs service role instead of creating a new one

Expected Behavior

Reuse an ecs service role instead of creating a new one

Use Case

I have an ecs service role that can be reused so I'd like to use it instead of creating a new one

Describe Ideal Solution

Perhaps var.service_role_arn

Alternatives Considered

Doing nothing

Additional Context

N/A

Allow reusing a task definition


Describe the Feature

It would be nice to pass a task definition in so we can reuse task definitions.

Expected Behavior

Pass a task definition arn in

Use Case

I'd like to create multiple tasks that use the same task definition but deployed on different clusters.

Describe Ideal Solution

Perhaps var.task_definition_arn

Alternatives Considered

Duplicating resources outside the module

Additional Context

N/A

Creation of ECS service failed with enabled service discovery

Describe the Bug

Can't deploy ECS service with enabled service discovery

Expected Behavior

ECS service with enabled service discovery

Steps to Reproduce

The module is used in this way (ECS + ALB + EFS + Service Discovery):

module "service" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=0.64.1"

  name                      = var.name
  environment               = var.environment
  container_definition_json = local.containers
  desired_count             = var.desired_count
  ecs_cluster_arn           = var.ecs_cluster_arn
  efs_volumes               = local.jenkins_volume
  ecs_load_balancers = [
    {
      container_name   = var.name
      container_port   = var.container_port
      elb_name         = ""
      target_group_arn = aws_lb_target_group.this.arn
    }
  ]
  launch_type                    = "EC2"
  network_mode                   = "bridge"
  subnet_ids                     = var.subnet_ids
  tags                           = var.common_tags
  task_cpu                       = null
  task_memory                    = null
  vpc_id                         = var.vpc_id
  ignore_changes_task_definition = false
  use_old_arn                    = false
  propagate_tags                 = "TASK_DEFINITION"
  service_registries              = [{
      registry_arn = aws_service_discovery_service.this.arn
      port         = 8080
      container_name = var.name
      container_port = 8080
    }]
}

Now the ECS service won't deploy because of this error:

module.service.aws_ecs_service.default[0]: Creating...
Error: failed creating ECS service (jenkins): InvalidParameterException: You cannot specify an IAM role for services that require a service linked role.
  on .terraform/modules/service/main.tf line 631, in resource "aws_ecs_service" "default":
 631: resource "aws_ecs_service" "default" {
Releasing state lock. This may take a few moments...
[terragrunt] 2022/09/30 10:54:48 Hit multiple errors:
exit status 1

When I delete service_registries, everything works fine.
Also, switching network_mode to awsvpc solves the problem (due to the lack of need for an IAM role in this case).

Environment:

  • Terraform = 0.14.11
  • Terragrunt = 0.25.3
  • AWS Provider = 3.75.2

Error when trying to use EFS volumes in task/container definition

Describe the Bug

I'm trying to use an EFS volume in an ECS service definition. The volumes variable is defined such that one has to supply a value for both the efs_volume_configuration and docker_volume_configuration parameters. This seems to be a Terraform syntax limitation having to do with a lack of optional arguments. However, the solution of passing an empty list doesn't work in this case, yielding the following error:

ClientException: When the volume parameter is specified, only one volume configuration type should be used.

Passing null for docker_volume_configuration doesn't work, either:

Error: Invalid dynamic for_each value

  on .terraform/modules/ecs-service/main.tf line 70, in resource "aws_ecs_task_definition" "default":
  70:         for_each = lookup(volume.value, "docker_volume_configuration", [])
    |----------------
    | volume.value is object with 4 attributes

Cannot use a null value in for_each.

Expected Behavior

To be able to use EFS volumes in an ECS service definition.

Steps to Reproduce

Update the example in examples/complete with the following added to main.tf:

  volumes = [{
    name      = "html"
    host_path = "/usr/share/nginx/html"
    # docker_volume_configuration = null
    docker_volume_configuration = []
    efs_volume_configuration = [{
      file_system_id          = "fs-8de214f2"
      root_directory          = "/home/user/www"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config    = []
    }]
  }]

Environment (please complete the following information):

Terraform v0.14.10, MacOS 10.15.7

Allow using EC2 instance role

Describe the Feature

Task execution role should be optional

Expected Behavior

Task execution role can be disabled

Use Case

Usage of the task execution role replaces the EC2 instance role.
We would like to continue to be able to use instance roles.

Only create aws_security_group if var.network_mode = "awsvpc"


Describe the Bug

Looks like the security group is used only if var.network_mode = "awsvpc"

aws_ecs_service.ignore_changes_task_definition

dynamic "network_configuration" {
  for_each = var.network_mode == "awsvpc" ? ["true"] : []
  content {
    security_groups  = compact(concat(var.security_group_ids, aws_security_group.ecs_service.*.id))
    subnets          = var.subnet_ids
    assign_public_ip = var.assign_public_ip
  }
}

aws_ecs_service.default

dynamic "network_configuration" {
  for_each = var.network_mode == "awsvpc" ? ["true"] : []
  content {
    security_groups  = compact(concat(var.security_group_ids, aws_security_group.ecs_service.*.id))
    subnets          = var.subnet_ids
    assign_public_ip = var.assign_public_ip
  }
}

But currently the security group and its rules will be created regardless of the var.network_mode.

Expected Behavior

It should not create it unless var.network_mode = "awsvpc"

Steps to Reproduce

N/A

Additional Context

N/A

Unable to use module with bridge network_mode and without load balancers.


Describe the Bug

Looks like it is not possible to create a service in "bridge" network_mode without defining the ecs_load_balancers variable.
We receive an error:
module.ecs_alb_service_task.aws_ecs_service.default[0]: Creating...

│ Error: error creating filebeat service: error waiting for ECS service (filebeat) creation: InvalidParameterException: IAM roles are only valid for services configured to use load balancers.

Looks like the local variable "enable_ecs_service_role" has the wrong condition.
enable_ecs_service_role = module.this.enabled && var.network_mode != "awsvpc" && length(var.ecs_load_balancers) <= 1

It should be: length(var.ecs_load_balancers) >= 1

Expected Behavior

Able to create service in Bridge mode without load balancers.

Environment (please complete the following information):

Terraform v1.0.5
on linux_amd64

  • provider registry.terraform.io/hashicorp/aws v3.61.0
  • provider registry.terraform.io/hashicorp/local v2.1.0
  • provider registry.terraform.io/hashicorp/null v3.1.0
  • provider registry.terraform.io/hashicorp/template v2.2.0


Use name_prefix and create_before_destroy on security groups


Describe the Bug

It can be a pain when the security group name changes, as the old group cannot be destroyed while still attached - potentially using this pattern would work: https://github.com/terraform-aws-modules/terraform-aws-security-group/blob/master/main.tf#L34
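The referenced pattern boils down to something like this (a sketch, not the module's current code; module.this.id follows the cloudposse label convention):

```hcl
resource "aws_security_group" "ecs_service" {
  # name_prefix instead of name lets Terraform create a replacement
  # group with a fresh name before the old one is destroyed
  name_prefix = "${module.this.id}-"
  vpc_id      = var.vpc_id

  lifecycle {
    create_before_destroy = true
  }
}
```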

Expected Behavior

Able to create new security group and assign it prior to destroy


Task role gets deleted before tasks fully deleted, leaving them stuck in "deprovisioning state"

Describe the Bug

When a service is replaced (maybe deleted, too), the task role is deleted before the tasks fully exit. This leaves them stuck in a "deprovisioning" state in the ECS console.

To have the tasks exit completely and leave this state, one has to recreate the ECS service role that was deleted.

Expected Behavior

Roles should only be deleted after the task/service has been fully removed


Add property to alter naming scheme for iam resources


Describe the Feature

I would like to be able to alter the label_order property of the labels used for iam resources.
That is to keep the names shorter.
While you can have a name generated like tenant-namespace-environment-stage-name-attributes, it may actually not be desirable when you split your environments, e.g. by teams, specific workloads, or other criteria.
The way I would like to do the naming is something like:
Cluster = name
Service = name
IAM Roles = environment-name-attributes

Expected Behavior

I would like to be able to specify either an iam_role_label_order or iam_task_role_label_order and iam_execution_role_label_order variable to satisfy my needs.

Use Case

See Describe the feature

Describe Ideal Solution

See Expected Behavior

Alternatives Considered

Using a different value for the name variable, but that would be ugly.

Additional Context

I'd be happy to implement this, should be easy and non-breaking.

Add example to add iam policies to task role


Describe the Bug

When using a secrets list in a container definition, the container gets a permissions error trying to access SecretsManager when the task is instantiated.

Expected Behavior

The container should be able to retrieve values from SecretsManager

Steps to Reproduce

Create a container definition with a valueFrom using an ARN in SecretsManager. When the task is run (in Fargate) by the service, it will fail with a permissions error on the _exec role.

Screenshots

N/A

Environment (please complete the following information):

N/A

Additional Context

N/A
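One way to grant the access is a hedged sketch like the following (the policy name and secret ARN are illustrative): attach a SecretsManager read policy via the module's task_exec_policy_arns input, since the execution role is what pulls secrets at task startup.

```hcl
resource "aws_iam_policy" "secrets_read" {
  name = "ecs-exec-secrets-read" # illustrative name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      # Illustrative ARN pattern; scope to your own secrets
      Resource = ["arn:aws:secretsmanager:us-east-1:123456789012:secret:myapp/*"]
    }]
  })
}

module "ecs_alb_service_task" {
  # ...
  task_exec_policy_arns = [aws_iam_policy.secrets_read.arn]
}
```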

Support wait_for_steady_state


Describe the Feature

Support wait_for_steady_state in the ECS service: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service#wait_for_steady_state

Expected Behavior

Setting this to true will make Terraform wait until the new containers in the service are healthy before continuing, or throw an error if the deployment fails.

Use Case

Throwing an error if the deployment is unsuccessful for fast feedback.

Describe Ideal Solution

A new variable wait_for_steady_state that directly maps to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service#wait_for_steady_state

Alternatives Considered

You can use the AWS CLI to do this, but it makes sense to have this dealt with by Terraform.
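Inside the module this would presumably be a direct pass-through, sketched as:

```hcl
variable "wait_for_steady_state" {
  type        = bool
  description = "If true, Terraform will wait for the service to reach a steady state before continuing"
  default     = false
}

resource "aws_ecs_service" "default" {
  # ...
  # Maps directly to the provider's wait_for_steady_state argument
  wait_for_steady_state = var.wait_for_steady_state
}
```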


Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

Changing `task_exec_policy_arns` or `task_policy_arns` cause recreations

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

The resource aws_iam_role_policy_attachment uses count instead of for_each, which causes attachments to be deleted and recreated whenever policies are added or removed.

resource "aws_iam_role_policy_attachment" "ecs_exec" {
  count      = local.create_exec_role ? length(var.task_exec_policy_arns) : 0
  policy_arn = var.task_exec_policy_arns[count.index]
  role       = join("", aws_iam_role.ecs_exec.*.id)
}

resource "aws_iam_role_policy_attachment" "ecs_task" {
  count      = local.create_task_role ? length(var.task_policy_arns) : 0
  policy_arn = var.task_policy_arns[count.index]
  role       = join("", aws_iam_role.ecs_task.*.id)
}

Using for_each instead keys each attachment by its policy ARN, so adding or removing one ARN only touches that attachment rather than recreating all of them.
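A sketch of the for_each variant, reusing the module's existing names (ecs_exec shown; ecs_task would be analogous):

```hcl
resource "aws_iam_role_policy_attachment" "ecs_exec" {
  for_each   = local.create_exec_role ? toset(var.task_exec_policy_arns) : toset([])
  policy_arn = each.value
  role       = join("", aws_iam_role.ecs_exec.*.id)
}
```

One caveat: for_each requires the ARNs to be known at plan time, so policy ARNs computed during the same apply would need a map keyed by static strings instead of a set of ARNs.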

Need ability to customize ecs_execution_role with additional permissions

The task execution role may need additional customization for deployments with additional security requirements.

I achieved this (though I'm not sure this is the best approach) by doing:

In my main.tf calling terraform-aws-ecs-alb-service-task:

data "aws_iam_policy_document" "ecs_allow_ssm_access" {
  statement {
    sid       = "2"
    effect    = "Allow"
    resources = ["arn:aws:ssm:${var.aws_region}:${var.aws_account_id}:parameter/${var.stage}*"]
    actions   = [
      "ssm:List*",
      "ssm:Get*",
      "ssm:Describe*"
    ]
  }
}


module "ecs_alb_service_task" {
  ....
  custom_policy_document    = "${data.aws_iam_policy_document.ecs_allow_ssm_access.json}"
}

And in main.tf inside this module:

data "aws_iam_policy_document" "ecs_execution_role" {
  source_json = "${var.custom_policy_document}"

  statement {
    sid       = "vLMpjauEwiCGsAJ9tJKsbSgn"
    effect    = "Allow"
    resources = ["*"]

    actions = [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
  }
}

Unable to use source_security_group_id with security_group_rules

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

The objects in security_group_rules must match one another perfectly or else Terraform complains: hashicorp/terraform#26265

Expected Behavior

Able to actually create security group rules.

Steps to Reproduce

Use anything but cidr_blocks in security_group_rules:

  security_group_rules = [
    {
      type                     = "egress"
      from_port                = 0
      to_port                  = 0
      protocol                 = -1
      cidr_blocks              = ["0.0.0.0/0"]
      description              = "Allow all outbound traffic"
    },
    {
      type                     = "ingress"
      from_port                = local.container_port
      to_port                  = local.container_port
      protocol                 = "tcp"
      source_security_group_id = local.alb_security_group
      description              = "Enables incoming ALB traffic."
    }
  ]
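Until the upstream Terraform issue is resolved, a common workaround is to give every object in the list an identical set of keys, setting the unused ones to null:

```hcl
security_group_rules = [
  {
    type                     = "egress"
    from_port                = 0
    to_port                  = 0
    protocol                 = "-1"
    cidr_blocks              = ["0.0.0.0/0"]
    source_security_group_id = null
    description              = "Allow all outbound traffic"
  },
  {
    type                     = "ingress"
    from_port                = local.container_port
    to_port                  = local.container_port
    protocol                 = "tcp"
    cidr_blocks              = null
    source_security_group_id = local.alb_security_group
    description              = "Enables incoming ALB traffic."
  }
]
```

With every object carrying the same keys, Terraform can infer a single object type for the list instead of failing on mismatched attributes.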

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Terraform v1.0.1
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.51.0
+ provider registry.terraform.io/hashicorp/external v2.1.0
+ provider registry.terraform.io/hashicorp/github v3.0.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Additional Context

Add any other context about the problem here.

Disable icmp sg rule by default

It doesn't make sense to me why ICMP is enabled by default. It should only be enabled when it's actually needed.

I propose to set var.enable_icmp_rule to false by default unless explicitly needed.

#106

external task_role and task_exec_role must exist before first run.

Describe the Bug

When specifying either task_role_arn or task_exec_role_arn, Terraform fails with the infamous "count" error:

The "count" value depends on resource attributes that cannot be determined
until apply...

This is due to:

count = local.enabled && length(var.task_role_arn) == 0 ? 1 : 0

You can work around it by using -target, of course.

Expected Behavior

working without using -target.

How about solving this using a different variable such as:

use_external_task_role: bool
use_external_task_exec_role: bool 

To determine the creation of task_role / task_exec_role.
I assume this would fix it.
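A sketch of that proposal: gate role creation on a plan-time boolean instead of on the length of an ARN that may not be known until apply (the resource name follows the module source quoted in the error above; the new variable is the proposal, not an existing input):

```hcl
variable "use_external_task_role" {
  type        = bool
  description = "If true, do not create a task role; use var.task_role_arn instead"
  default     = false
}

resource "aws_iam_role" "ecs_task" {
  count = local.enabled && !var.use_external_task_role ? 1 : 0
  # ... existing arguments ...
}
```

Because the boolean is a literal in the caller's configuration, count is always resolvable at plan time regardless of when the external role's ARN becomes known.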

Policy only handles ELB permissions, not ALB permissions.

The current policy only adds permissions for registering and deregistering tasks with a Classic ELB. However, it should handle target group permissions as well, and use more specific permissions.

The code below resolves the policy to match the load balancer type.

The existing data "aws_iam_policy_document" "ecs_service_policy" should be updated to:

locals {
  elbs    = [for elb in var.ecs_load_balancers : lookup(elb, "elb_name", null) if lookup(elb, "elb_name", null) != null]
  targetg = [for tg in var.ecs_load_balancers : lookup(tg, "target_group_arn", null) if lookup(tg, "target_group_arn", null) != null]
}

data "aws_iam_policy_document" "ecs_service_policy" {
  count = var.enabled && var.network_mode != "awsvpc" ? 1 : 0

  dynamic "statement" {
    iterator = el
    for_each = local.elbs
    content {
      effect    = "Allow"
      resources = ["arn:aws:elasticloadbalancing:*:*:loadbalancer/${local.elbs[el.key]}"]
      sid       = "elbResources"

      actions = [
        "elasticloadbalancing:Describe*",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
      ]
    }

  }

  dynamic "statement" {
    iterator = tg
    for_each = local.targetg
    content {
      effect    = "Allow"
      resources = ["${local.targetg[tg.key]}"]
      sid       = "elbv2Resources"

      actions = [
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:DeregisterTargets",
      ]
    }
  }
  statement {
    effect    = "Allow"
    resources = ["*"]
    sid       = "ec2Resources"

    actions = [
      "ec2:Describe*",
      "ec2:AuthorizeSecurityGroupIngress"
    ]
  }
}

Defining Multiple Host Volumes Paths

Terraform v0.12.7

I am getting the following issue when trying to define multiple volumes for one service.

Error: Unsupported argument

on .terraform/modules/batch_environment-dev.workers.airflow-webserver.alb_service_task/main.tf line 51, in resource "aws_ecs_task_definition" "default":
51: volume = "${var.volumes}"

An argument named "volume" is not expected here. Did you mean to define a
block of type "volume"?


volumes = [
  {
    name      = "content1"
    host_path = "mnt/folder1"
  },
  {
    name      = "content2"
    host_path = "mnt/folder2"
  }
]

module "alb_service_task" {

  source = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=master"
  namespace = "namespace"
  stage = var.environment_name
  name = local.service_name
  alb_security_group = var.alb_security_group_arn
  container_definition_json = module.container_definition.json
  ecs_cluster_arn = var.ecs_cluster_id
  launch_type = var.launch_type
  vpc_id = var.vpc_id
  security_group_ids = var.security_group_ids
  subnet_ids = var.subnet_ids
  volumes = var.volumes

}

The problem I see is coming from here

resource "aws_ecs_task_definition" "default" {
  family                   = "${module.default_label.id}"
  container_definitions    = "${var.container_definition_json}"
  requires_compatibilities = ["${var.launch_type}"]
  network_mode             = "${var.network_mode}"
  cpu                      = "${var.task_cpu}"
  memory                   = "${var.task_memory}"
  execution_role_arn       = "${aws_iam_role.ecs_exec.arn}"
  task_role_arn            = "${aws_iam_role.ecs_task.arn}"
  tags                     = "${module.default_label.tags}"
  volume                   = "${var.volumes}"
}

In the Terraform documentation for the module, volumes only ever seem to be shown as individually defined blocks; joining multiple volumes together doesn't seem to be mentioned. My use case is an Airflow cluster where all nodes have access to a shared set of code, plus separate volumes for each engineering domain where they define their individual code bases.

I was actually having a go at using this module alongside container definitions, wondering if you might have solved an issue I've had with defining task/service volumes: the Terraform API doesn't seem to play nicely with passing in a list of volume maps rather than defining the volumes separately as blocks. Is there something I'm missing, or any ideas for workarounds?
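Since Terraform 0.12, volume is a block rather than an argument, so a list of volume maps has to be expanded with a dynamic block. A sketch of how the task definition could consume var.volumes (host-path volumes only; other volume configuration types would need additional nested dynamic blocks):

```hcl
resource "aws_ecs_task_definition" "default" {
  # ... existing arguments ...

  dynamic "volume" {
    for_each = var.volumes
    content {
      name      = volume.value.name
      host_path = lookup(volume.value, "host_path", null)
    }
  }
}
```

This iterates once per map in the list, emitting one volume block per entry, which is why assigning the list directly with volume = var.volumes fails.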

Question: How To Integrate with ALB

The module's name suggests an Application Load Balancer, but this module does not create one; it only references an existing load balancer to be used with services/tasks, right?

So I should first create balancer via other module?

Thanks.

Ladislav

custom execution role cannot be set on module

Describe the Bug

Dear Team,

When I define a custom task execution role. Terraform returns the error below

Logs

Releasing state lock. This may take a few moments...

╷
│ Error: Invalid count argument
│ 
│   on .terraform/modules/ecs_alb_service_task/main.tf line 225, in data "aws_iam_policy_document" "ecs_task_exec":
│  225:   count = local.create_exec_role ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
╵
╷
│ Error: Invalid count argument
│ 
│   on .terraform/modules/ecs_alb_service_task/main.tf line 246, in data "aws_iam_policy_document" "ecs_exec":
│  246:   count = local.create_exec_role ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.

Module example to reproduce the problem:

module "ecs_alb_service_task" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-alb-service-task.git?ref=tags/0.64.0"

  enabled                            = true
  environment                        = var.ecs_environment
  namespace                          = var.namespace
  name                               = var.name
  task_cpu                           = var.ecs_task_cpu
  task_memory                        = var.ecs_task_memory
  launch_type                        = "FARGATE"
  network_mode                       = "awsvpc"
  vpc_id                             = var.vpc_id
  platform_version                   = var.ecs_platform_version
  scheduling_strategy                = "REPLICA"
  propagate_tags                     = "SERVICE"
  assign_public_ip                   = "false"
  task_exec_role_arn                 = aws_iam_role.fargate_execution.arn
  subnet_ids                         = var.private_subnet_ids
  security_group_ids                 = [aws_security_group.this.id]
  alb_security_group                 = module.alb.security_group_id
  tags                               = local.tags
  attributes                         = local.attributes
  container_port                     = var.container_port
  delimiter                          = local.delimiter
  deployment_controller_type         = "ECS"
  deployment_maximum_percent         = var.deployment_maximum_percent
  deployment_minimum_healthy_percent = var.deployment_minimum_healthy_percent
  desired_count                      = var.desired_count
  ecs_cluster_arn                    = aws_ecs_cluster.cluster.arn
  health_check_grace_period_seconds  = 10
  ignore_changes_task_definition     = "false"

  ecs_load_balancers = [{
    "elb_name"         = "",
    "container_name"   = var.name,
    "container_port"   = var.container_port,
    "target_group_arn" = module.alb.default_target_group_arn,
  }]

  container_definition_json = jsonencode([
    module.webportal_task_definition.json_map_object,
    module.webportal_middleware_task_definition.json_map_object,
  ])
}

thank you in advance

Vasilios Tzanoudakis

Support old (not long) arns with an option to NOT tag the ecs service

Have a question? Please checkout our Slack Community in the #geodesic channel or visit our Slack Archive.

Slack Community

Describe the Feature

Modify this module so it can skip tagging the ECS service when, for whatever reason, you cannot enable long ARN support on the account.

Expected Behavior

I'm in an old account where we cannot add long arn support. When I apply this module, it fails to create the ecs service because it tries to tag it.

Use Case

Applying module

Describe Ideal Solution

Works with both long arn and old arn accounts.

Alternatives Considered

  • Forking module
  • Not using this module

Additional Context

#39

Error when adding EFS volumes: When the volume parameter is specified, only one volume configuration type should be used.

Error: failed creating ECS Task Definition: ClientException: When the volume parameter is specified, only one volume configuration type should be used.

Terraform plan generates a change to the task definition which adds this:


volume {
  host_path = "/test123"
  name      = "test123-efs"

  efs_volume_configuration {
    file_system_id          = "fs-123123123"
    root_directory          = "/test123"
    transit_encryption      = "ENABLED"
    transit_encryption_port = 2999

    authorization_config {
      access_point_id = "fsap-123123123"
      iam             = "DISABLED"
    }
  }
}
What actually works in AWS (by directly editing the JSON of a task), is this:


 

"volumes": [
  {
    "fsxWindowsFileServerVolumeConfiguration": null,
    "efsVolumeConfiguration": {
      "transitEncryptionPort": null,
      "fileSystemId": "fs-123123123",
      "authorizationConfig": {
        "iam": "DISABLED",
        "accessPointId": "fsap-123123123"
      },
      "transitEncryption": "ENABLED",
      "rootDirectory": "/"
    },
    "name": "efs2",
    "host": null,
    "dockerVolumeConfiguration": null
  }
]

You can see the attribute names are slightly different; I'm not sure how much of that Terraform translates, and how much points to a bug in the module.

Currently on version 0.66.2 of the module, with Terraform 1.3.2. It's pretty confusing, since we only have one volume configuration type set.
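One likely cause: host_path is set alongside efs_volume_configuration, and a non-null host path counts as a second (host) volume configuration type. A sketch of a volume entry that avoids the conflict, reusing the same illustrative IDs; the exact accepted shape of var.volumes depends on the module version, so treat this as a starting point rather than a guaranteed fix:

```hcl
volumes = [
  {
    name      = "test123-efs"
    host_path = null # must be null when an EFS configuration is present

    efs_volume_configuration = [{
      file_system_id          = "fs-123123123"
      root_directory          = "/test123"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config = [{
        access_point_id = "fsap-123123123"
        iam             = "DISABLED"
      }]
    }]
    docker_volume_configuration = []
  }
]
```

This matches the working JSON above, where "host" is null while "efsVolumeConfiguration" is populated.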

A managed resource "aws_security_group" "ecs_service" has not been declared in module.ecs_alb_service_task.

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

  on .terraform/modules/ecs_alb_service_task/main.tf line 435, in resource "aws_ecs_service" "ignore_changes_task_definition_and_desired_count":
 435:       security_groups  = compact(concat(var.security_group_ids, aws_security_group.ecs_service.*.id))

A managed resource "aws_security_group" "ecs_service" has not been declared in
module.ecs_alb_service_task.

Releasing state lock. This may take a few moments...

Suggested change

#140 PR here.

security_groups = compact(concat(var.security_group_ids, aws_security_group.ecs_service.*.id))
find and replace with
security_groups = compact(concat(module.security_group.*.id, var.security_groups))

Why!!!?

Looks like there's a bug in the code if you use the ignore desired_count and task_definition flags! 😨

Setting var.ignore_changes_task_definition or var.ignore_changes_desired_count (or both?) to true results in an error:

A managed resource "aws_security_group" "ecs_service" has not been declared in
module.ecs_alb_service_task.

I think that error makes sense as we don't have a aws_security_group.ecs_service resource in the module.

I took a look at the code further down (the default service resource), and that (which is what the tests use) references the security group module, module.security_group.*.id, which does exist 👍

Expose uniqueid of task_exec role

Describe the Feature

An output of the unique IDs for the two roles created for the service task:
task_exec_role_uniqueid

Expected Behavior

When using this module, I can access the uniqueid (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role#unique_id) field on the task_exec role.

Use Case

We use the unique IDs for secret permissions. Since I cannot access the unique_id on the task_exec role, we have a chicken-and-egg scenario of trying to access these fields through a data source:

# These are needed because the cloudposse module does not expose the roles unique ids
data "aws_iam_role" "task_exec_role" {
  name = module.ecs_alb_service_task.task_exec_role_name
}

Describe Ideal Solution

As stated above, just an output of the uniqueid for the task_exec role
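For reference, a sketch of the requested output, following the join-over-splat pattern the module already uses for its other role outputs (assuming the role resource is named aws_iam_role.ecs_exec, as in the module source quoted elsewhere on this page):

```hcl
output "task_exec_role_unique_id" {
  description = "ECS task execution role unique_id"
  value       = join("", aws_iam_role.ecs_exec.*.unique_id)
}
```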

Multi-container task definition

Multi-container task definition

The current module implementation assumes one container per task definition. AWS, however, allows multiple container definitions per task. The terraform-aws-ecs-alb-service-task module demonstrates how multiple container definitions can be combined to achieve this.

It is technically possible to use an array of definitions as the argument for container_definition_json. For example:

module "ecs_alb_service_task" {
    ...
    container_definition_json = "[${module.first_container.json_map_encoded},${module.second_container.json_map_encoded}]"
    ...
  }

This however seems more like a workaround rather than expected solution.

Expected Behavior

A clear and concise description of what you expected to happen.

Use Case

We are deploying an application consisting of nginx, php-fpm, and nodejs (frontend API). These containers compose one application and must behave and scale as a single unit.

Describe Ideal Solution

Ideally the module would accept container_definitions (note the trailing s) instead of container_definition. For example:

module "ecs_alb_service_task" {
    ...
    container_definitions_json = [module.first_container.json, module.second_container.json]
    ...
  }

Variable descriptions should reflect this ability and provide examples.

Alternatives Considered

The two alternatives considered are: 1) use multiple services (single container per task, service) 2) stuff multiple definitions into container_definition_json.

Solution (1) is not quite applicable to our use-case due to extreme additional complexity. Solution (2) would work, however it feels more like shoehorning.

Additional Context

I would be interested in working on a MR if this sounds like a reasonable project. Are there any issues in implementing this?

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses. Learn more

This repository currently has no open or pending branches.

Detected dependencies

terraform
main.tf
  • cloudposse/label/null 0.25.0
  • cloudposse/label/null 0.25.0
  • cloudposse/label/null 0.25.0
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
  • undefined no version found
versions.tf
  • hashicorp/terraform >= 0.13.0
  • aws >= 3.69
  • local >= 1.3
  • null >= 2.0

  • Check this box to trigger a request for Renovate to run again on this repository

Supply a custom task iam role

Describe the Feature

var.task_role_arn would be used for the service task instead of a new task role created.

Expected Behavior

It would be nice if we could supply our own custom task iam role.

Use Case

We already have a task iam role present for our task. We want to convert the task to fargate and have decided to use this module, however, this module doesn't have a way of reusing an existing task role and if the name is different, it has to be recreated.

Describe Ideal Solution

It would be nice if we could supply our own custom task iam role.

Alternatives Considered

N/A

Additional Context

N/A
