
aws-ecs-orb's Introduction

AWS ECS Orb


A CircleCI Orb to simplify deployments to Amazon Elastic Container Service (ECS). Supports EC2 and Fargate launch type deployments.

Resources

CircleCI Orb Registry Page - The official registry page for this orb, listing all published versions, executors, commands, and jobs.

CircleCI Orb Docs - Docs for using and creating CircleCI Orbs.

Examples

Please visit the orb registry listing for usage examples and guidelines.

How to Contribute

We welcome issues to and pull requests against this repository!

For further questions/comments about this or other orbs, visit the Orb Category of CircleCI Discuss.

aws-ecs-orb's People

Contributors

a10waveracer, alekhrycaiko, ashishpatelcs, bharatr21, brivu, chrishelgert, codingdiaz, dependabot[bot], dgeorges, enokawa, geol86, grimlock591, hungrybear88, iynere, jaryt, jeffnappi, joev492, kyletryon, lokst, mikkopiu, mislavcimpersak, paulocen, sagarvd01, strikerrus, stringbeans, taxonomic-blackfish, timorme, uraway, zaru


aws-ecs-orb's Issues

Inconsistent parameter types with respect to other AWS orbs

AWS orbs that authenticate with secrets such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY typically use the env_var_name type for the commands and jobs that accept those parameters. This orb uses the string type. If a user imports both the ecr orb and this orb together (which is quite typical), the inconsistency can be confusing and lead to frustrating configuration errors. Once a user discovers the source of the error, they are likely to lose confidence in first-party orb offerings.

In summary, the ecr orb, the s3 orb, and the aws-cli orb all use the env_var_name type for the authentication parameters of their jobs and commands. This orb, ecs, uses strings.

The following issue is related and is an example of the confusion that can be caused: #9
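To make the mismatch concrete, here is a minimal sketch of the two styles of parameter declaration (illustrative only, not copied from either orb's source):

# env_var_name style (ecr / s3 / aws-cli orbs): the user passes the *name* of an environment variable
aws-access-key-id:
  type: env_var_name
  default: AWS_ACCESS_KEY_ID

# string style (this orb): the value is passed through as plain text
aws-access-key-id:
  type: string
  default: ""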

Warn if container is not specified in container-image-name-updates

We recently saw an issue where aws-ecs/deploy-service-update was not updating the image tag in our task definition. After some debugging, we found the cause was that we were not specifying the container in container-image-name-updates.

Our config was:

- aws-ecs/deploy-service-update:
          name: deploy
          family: <our-service-here>
          cluster-name: <our-cluster-here>
          container-image-name-updates: tag=${CIRCLE_SHA1}
          verify-revision-is-deployed: true
          requires:
            - build_and_publish
            - test
          filters:
            branches:
              only: master

Building with this configuration succeeded, but it was effectively a no-op and simply cloned the previous task definition. Since container is a required part of this configuration, the orb should warn and possibly fail if it is omitted.
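For reference, a corrected version of the config above would look roughly like this (the container name is a placeholder for whatever the task definition actually uses):

- aws-ecs/deploy-service-update:
    name: deploy
    family: <our-service-here>
    cluster-name: <our-cluster-here>
    # the container key is what tells the orb which container's tag to update
    container-image-name-updates: container=<our-container-here>,tag=${CIRCLE_SHA1}
    verify-revision-is-deployed: true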

want `update-service` without doing `update-task-definition` step

What would you like to be added

Currently, update-service always runs update-task-definition before updating the service.
It would be better to make this step optional.

Why is this needed

For example, my use case is this:

  1. run update-task-definition
  2. run run-task using the updated task definition to run a database migration job, passing an override option that specifies the migration command.
  3. run update-service

In this case, we don't need the update-task-definition step inside update-service, because the task definition was already updated in the previous step.
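A sketch of how the requested behaviour might look from the config side; the skip flag is a hypothetical parameter name invented here for illustration, not an existing option:

- aws-ecs/update-task-definition:
    family: my-service                  # placeholder
- aws-ecs/run-task:
    cluster: my-cluster                 # placeholder; runs the migration with an override command
- aws-ecs/update-service:
    family: my-service
    cluster-name: my-cluster
    skip-task-definition-update: true   # hypothetical flag: reuse the revision registered in step 1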

aws-ecs-orb

Orb version

  - aws-ecs/update-task-definition:
    family: 'api'
    container-image-name-updates: 'container=api-server,image-and-tag=${AWS_ECR_ACCOUNT_URL}/api:${CIRCLE_SHA1}'

What happened

It looks like there may have been an issue with the value of ${CCI_ORB_AWS_ECS_CONTAINER_DEFS} that was created by the orb. I think it would be good if we could take a look at the job.

Expected behavior

Shouldn't it be --container-definitions=$VALUE instead of --container-definitions $VALUE?

update README.md for "Jobs" & "Commands".

What would you like to be added

The README should describe this orb's jobs and commands so that its functionality can be understood more easily.

Why is this needed

The orb's complete capabilities and features are not clear from the current README.md.

verification-timeout fails if greater than 10m

Orb version

1.3.0

What happened

CircleCI's default timeout when a step produces no standard output is 10 minutes. This orb runs the AWS CLI "wait" command, which produces no output while validating.

Setting no_output_timeout: XXm does not work, because it is not possible to inject it into the orb's run command as required.

Expected behavior

The task should be able to wait longer than 10 minutes without issues.

I'm not sure of the right fix, or I would have submitted a PR. One option is a loop in bash, if the *m value is converted to minutes (a sketch follows).
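A rough workaround sketch under those assumptions: replace the orb's wait with a polling run step that prints something on every iteration, so CircleCI's no-output timeout never triggers (cluster and service names below are placeholders):

- run:
    name: Wait for service to stabilize (workaround sketch)
    no_output_timeout: 30m
    command: |
      for i in $(seq 1 60); do
        DEPLOYMENTS=$(aws ecs describe-services \
          --cluster "my-cluster" \
          --services "my-service" \
          --query 'services[0].deployments | length(@)' \
          --output text)
        echo "Attempt ${i}: active deployments: ${DEPLOYMENTS}"
        if [ "${DEPLOYMENTS}" = "1" ]; then
          echo "Deployment settled"
          exit 0
        fi
        sleep 30
      done
      echo "Gave up waiting for the deployment to settle" && exit 1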

aws-ecs/deploy-service-update job clears the task definition app mesh proxy settings

Task definition before running the job:

{
  "taskDefinition": {
    "taskDefinitionArn": "arn:aws:ecs:eu-west-1:xxxxxxxxxxxxx:task-definition/demo-todo:18",
    "containerDefinitions": [
      {
        "name": "demo-todo",
        "image": "xxxxxxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/demo/demo-todo-service:783ca12af20f34ff6f06f6408a94f41d4095ae34",
        "cpu": 0,
        "portMappings": [
          {
            "containerPort": 5002,
            "hostPort": 5002,
            "protocol": "tcp"
          }
        ],
        "essential": true,
        "environment": [],
        "mountPoints": [],
        "volumesFrom": [],
        "dependsOn": [
          {
            "containerName": "envoy",
            "condition": "HEALTHY"
          }
        ],
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/demo-todo",
            "awslogs-region": "eu-west-1",
            "awslogs-stream-prefix": "ecs"
          }
        },
        "healthCheck": {
          "command": ["CMD-SHELL", "curl -f http://localhost:5002/api/instance || exit 1"],
          "interval": 5,
          "timeout": 5,
          "retries": 3,
          "startPeriod": 5
        }
      },
      {
        "name": "envoy",
        "image": "111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.1.0-prod",
        "cpu": 0,
        "memory": 500,
        "portMappings": [],
        "essential": true,
        "environment": [
          {
            "name": "APPMESH_VIRTUAL_NODE_NAME",
            "value": "mesh/demo-api-mesh/virtualNode/demo-todo-vn"
          },
          {
            "name": "ENABLE_ENVOY_XRAY_TRACING",
            "value": "1"
          },
          {
            "name": "ENVOY_LOG_LEVEL",
            "value": "debug"
          }
        ],
        "mountPoints": [],
        "volumesFrom": [],
        "user": "1337",
        "healthCheck": {
          "command": ["CMD-SHELL", "curl -s http://localhost:9901/server_info | grep state | grep -q LIVE"],
          "interval": 5,
          "timeout": 2,
          "retries": 3,
          "startPeriod": 10
        }
      },
      {
        "name": "xray-daemon",
        "image": "amazon/aws-xray-daemon",
        "cpu": 0,
        "portMappings": [
          {
            "containerPort": 2000,
            "hostPort": 2000,
            "protocol": "udp"
          }
        ],
        "essential": true,
        "environment": [],
        "mountPoints": [],
        "volumesFrom": [],
        "dependsOn": [
          {
            "containerName": "envoy",
            "condition": "HEALTHY"
          }
        ],
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/demo-todo",
            "awslogs-region": "eu-west-1",
            "awslogs-stream-prefix": "ecs"
          }
        }
      }
    ],
    "family": "demo-todo",
    "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxxx:role/demo-api-test-ecs-task",
    "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxxx:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "revision": 18,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
      {
        "name": "ecs.capability.execution-role-awslogs"
      },
      {
        "name": "com.amazonaws.ecs.capability.ecr-auth"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
      },
      {
        "name": "com.amazonaws.ecs.capability.task-iam-role"
      },
      {
        "name": "ecs.capability.aws-appmesh"
      },
      {
        "name": "ecs.capability.container-health-check"
      },
      {
        "name": "ecs.capability.execution-role-ecr-pull"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
      },
      {
        "name": "ecs.capability.task-eni"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
      },
      {
        "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
      },
      {
        "name": "ecs.capability.container-ordering"
      }
    ],
    "placementConstraints": [],
    "compatibilities": ["EC2", "FARGATE"],
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",
    "memory": "1024",
    "proxyConfiguration": {
      "type": "APPMESH",
      "containerName": "envoy",
      "properties": [
        {
          "name": "ProxyIngressPort",
          "value": "15000"
        },
        {
          "name": "AppPorts",
          "value": "5002"
        },
        {
          "name": "EgressIgnoredIPs",
          "value": "169.254.170.2,169.254.169.254"
        },
        {
          "name": "IgnoredGID",
          "value": ""
        },
        {
          "name": "EgressIgnoredPorts",
          "value": ""
        },
        {
          "name": "IgnoredUID",
          "value": "1337"
        },
        {
          "name": "ProxyEgressPort",
          "value": "15001"
        }
      ]
    }
  }
}

Task definition after running the job:

{
  "taskDefinition": {
    "taskDefinitionArn": "arn:aws:ecs:eu-west-1:xxxxxxxxxxxxx:task-definition/demo-todo:19",
    "containerDefinitions": [
      {
        "name": "demo-todo",
        "image": "xxxxxxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/demo/demo-todo-service:18afddee681018a172eb3bf52f04dba59bc17b92",
        "cpu": 0,
        "portMappings": [
          {
            "containerPort": 5002,
            "hostPort": 5002,
            "protocol": "tcp"
          }
        ],
        "essential": true,
        "environment": [],
        "mountPoints": [],
        "volumesFrom": [],
        "dependsOn": [
          {
            "containerName": "envoy",
            "condition": "HEALTHY"
          }
        ],
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/demo-todo",
            "awslogs-region": "eu-west-1",
            "awslogs-stream-prefix": "ecs"
          }
        },
        "healthCheck": {
          "command": ["CMD-SHELL", "curl -f http://localhost:5002/api/instance || exit 1"],
          "interval": 5,
          "timeout": 5,
          "retries": 3,
          "startPeriod": 5
        }
      },
      {
        "name": "envoy",
        "image": "111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.1.0-prod",
        "cpu": 0,
        "memory": 500,
        "portMappings": [],
        "essential": true,
        "environment": [
          {
            "name": "APPMESH_VIRTUAL_NODE_NAME",
            "value": "mesh/demo-api-mesh/virtualNode/demo-todo-vn"
          },
          {
            "name": "ENABLE_ENVOY_XRAY_TRACING",
            "value": "1"
          },
          {
            "name": "ENVOY_LOG_LEVEL",
            "value": "debug"
          }
        ],
        "mountPoints": [],
        "volumesFrom": [],
        "user": "1337",
        "healthCheck": {
          "command": ["CMD-SHELL", "curl -s http://localhost:9901/server_info | grep state | grep -q LIVE"],
          "interval": 5,
          "timeout": 2,
          "retries": 3,
          "startPeriod": 10
        }
      },
      {
        "name": "xray-daemon",
        "image": "amazon/aws-xray-daemon",
        "cpu": 0,
        "portMappings": [
          {
            "containerPort": 2000,
            "hostPort": 2000,
            "protocol": "udp"
          }
        ],
        "essential": true,
        "environment": [],
        "mountPoints": [],
        "volumesFrom": [],
        "dependsOn": [
          {
            "containerName": "envoy",
            "condition": "HEALTHY"
          }
        ],
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/demo-todo",
            "awslogs-region": "eu-west-1",
            "awslogs-stream-prefix": "ecs"
          }
        }
      }
    ],
    "family": "demo-todo",
    "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxxx:role/demo-api-test-ecs-task",
    "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxxx:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "revision": 19,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
      {
        "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
      },
      {
        "name": "ecs.capability.execution-role-awslogs"
      },
      {
        "name": "com.amazonaws.ecs.capability.ecr-auth"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
      },
      {
        "name": "com.amazonaws.ecs.capability.task-iam-role"
      },
      {
        "name": "ecs.capability.container-health-check"
      },
      {
        "name": "ecs.capability.container-ordering"
      },
      {
        "name": "ecs.capability.execution-role-ecr-pull"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
      },
      {
        "name": "ecs.capability.task-eni"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
      }
    ],
    "placementConstraints": [],
    "compatibilities": ["EC2", "FARGATE"],
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",
    "memory": "1024"
  }
}

proxyConfiguration is missing after the task definition is updated by the job.
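Until the orb copies the attribute, a possible stop-gap (a sketch, not the orb's own fix) is to re-register the revision with the proxy configuration passed explicitly; the values mirror the task definition above, the container-definitions file is a placeholder, and other required flags such as the task and execution roles are omitted:

- run:
    name: Re-register revision with App Mesh proxy configuration (workaround sketch)
    command: |
      aws ecs register-task-definition \
        --family demo-todo \
        --container-definitions file://container-definitions.json \
        --proxy-configuration '{"type":"APPMESH","containerName":"envoy","properties":[{"name":"AppPorts","value":"5002"},{"name":"ProxyIngressPort","value":"15000"},{"name":"ProxyEgressPort","value":"15001"},{"name":"IgnoredUID","value":"1337"},{"name":"EgressIgnoredIPs","value":"169.254.170.2,169.254.169.254"}]}' \
        --requires-compatibilities FARGATE \
        --network-mode awsvpc \
        --cpu 512 \
        --memory 1024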

Readme examples out of date

What would you like to be added

The feature request here is to resync the documentation to reflect the current major version and to describe how to build a container and deploy it using the AWS ECR and ECS path.

As a new user to CircleCI, the READMEs are very helpful; they really showcase the features and help you get started. However, once they become out of date, they become difficult to use.

Also, it's not clear which ECR / ECS orb versions are compatible with each other.

Why is this needed

The README and guides should match the current major release version, so that you end up using the current patterns and the number of inconsistencies is lowered.

For example, I'm seeing different ways of naming the environment variables and of tagging container repos. Older versions use CIRCLE_SHA1 for tagging repos.

separate config for service name

It looks as though this orb requires the service name and the task definition name to be the same. For example, if I set family to demo-web, it will assume that both the service AND the task definition are named demo-web.

Is there currently any way to separate them?
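For what it's worth, other reports later on this page pass a service-name alongside family when using the update-service command, which suggests a sketch along these lines (names are placeholders):

- aws-ecs/update-service:
    family: demo-web-taskdef        # task definition family
    service-name: demo-web          # ECS service name, decoupled from the family
    cluster-name: demo-cluster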

Why the restriction on container-env-var-updates?

Hi again!

Just curious why, when using container-env-var-updates, only the environment variables already defined in the previous task definition are updated. I believe we should be able to inject new environment variables regardless of whether they were defined previously (correct me if I'm wrong).

I'm trying to understand the rationale behind this. Would your team be open to removing the restriction?

Need to return the taskarn from run_task command

What would you like to be added

I am working on polling the status of the task started by run_task, but I am not able to get the task ARN. I tried using the aws-cli to look up the ARN of the task just started by run_task, but no command seems to fit my use case. It would be much better if I could simply get the task ARN from the run_task command via BASH_ENV.

Why is this needed

It would make it much easier and more straightforward to get the logs for, and poll, the task initiated by the run_task command.
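A sketch of the requested behaviour, done by hand with the AWS CLI for now (cluster and task definition names are placeholders, and the exported variable name merely imitates the orb's CCI_ORB_AWS_ECS_* convention):

- run:
    name: Run task and capture its ARN (sketch)
    command: |
      TASK_ARN=$(aws ecs run-task \
        --cluster "my-cluster" \
        --task-definition "my-task-def" \
        --count 1 \
        --output text \
        --query 'tasks[0].taskArn')
      echo "Started task: ${TASK_ARN}"
      echo "export CCI_ORB_AWS_ECS_TASK_ARN='${TASK_ARN}'" >> $BASH_ENV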

Error: Root Directory can not be set when using an EFS Access Point

Error: Root Directory can not be set when using an EFS Access Point

What would you like to be added

I would like to specify efsVolumeConfiguration with deploy-service-update job.

Why is this needed

Currently, I'm having trouble registering a task definition. I understand that the root directory should be empty when using an access point; however, AWS adds it once a new revision is created.
(Two screenshots of the AWS console were attached: "Screen Shot 2020-06-01 at 16 34 32" and "Screen Shot 2020-06-01 at 16 42 35".)

Support launching single task

I would like to use this orb to kick off a single ECS task, and then preferably retrieve the exit code and logs from the task.
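A sketch of what that could look like with plain AWS CLI calls today (cluster and task definition names are placeholders): run the task, wait for it to stop, then read the container's exit code.

- run:
    name: Run a one-off task and surface its exit code (sketch)
    command: |
      TASK_ARN=$(aws ecs run-task \
        --cluster "my-cluster" \
        --task-definition "my-task-def" \
        --count 1 \
        --output text \
        --query 'tasks[0].taskArn')
      aws ecs wait tasks-stopped --cluster "my-cluster" --tasks "${TASK_ARN}"
      EXIT_CODE=$(aws ecs describe-tasks \
        --cluster "my-cluster" \
        --tasks "${TASK_ARN}" \
        --output text \
        --query 'tasks[0].containers[0].exitCode')
      echo "Task exited with code ${EXIT_CODE}"
      exit "${EXIT_CODE}"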

missing `$` at the beginning of the environment variables

If we pass AWS properties as in the config below, the orb does not prepend $ to the environment variable names in the generated steps, so the literal names are used instead of their values. It seems aws-ecr/build_and_push_image can take the environment variable names without $; I think it would be better to be able to set environment variables the same way here (a sketch of the expected expansion follows the two snippets below).

      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-access-key-id: AWS_ACCESS_KEY
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          aws-region: AWS_REGION

    - run:
        name: Configure AWS Access Key ID
        command: |
          aws configure set aws_access_key_id \
          AWS_ACCESS_KEY \
          --profile default
    - run:
        name: Configure AWS Secret Access Key
        command: |
          aws configure set aws_secret_access_key \
          AWS_SECRET_ACCESS_KEY \
          --profile default
    - run:
        name: Configure AWS default region
        command: |
          aws configure set region AWS_REGION \
          --profile default
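What the reporter presumably expected the generated steps to look like, i.e. with each parameter treated as an environment variable name (a sketch mirroring the generated snippet above):

    - run:
        name: Configure AWS Access Key ID
        command: |
          aws configure set aws_access_key_id \
          "$AWS_ACCESS_KEY" \
          --profile default
    - run:
        name: Configure AWS default region
        command: |
          aws configure set region "$AWS_REGION" \
          --profile default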

Skip waiting for NUM_DEPLOYMENTS=1

What would you like to be added

Add a skip-draining option to verify-revision-is-deployed.
It would skip the $NUM_DEPLOYMENTS = 1 check.

Why is this needed

An ECS service registered with an ELB waits for the deregistration delay, which defaults to 5 minutes, even if there are no remaining connections. This effectively delays job success by 5 minutes.

Cannot find a definition for command named aws-ecs/deploy-service-update

I am getting this error when trying to use aws-ecs/deploy-service-update
Cannot find a definition for command named aws-ecs/deploy-service-update

What's weird is that I can use aws-ecs/update-service, but none of the jobs in the aws-ecs orb are available...

version: 2.1
orbs:
    utils:
        orbs:
            aws-ecr: circleci/[email protected]
            aws-ecs: circleci/[email protected]
        executors:
            ci:
                docker:
                    - image: $AWS_ECR_ACCOUNT_URL/ci:latest
            node:
                docker:
                    - image: circleci/node:10
        jobs:
            dependencies:
                description: Get deps and persist to workspace
                executor: node
                resource_class: large
                steps:
                    - checkout
                    - run:
                          name: Authenticate with registry
                          command: echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > .npmrc
                    - run: yarn --frozen-lockfile
                    - persist_to_workspace:
                          root: .
                          paths:
                              - node_modules
                              - projects/*/node_modules
                              - packages/*/node_modules
                              - .npmrc
                    - run:
                          name: Ensure repo is clean from uncommitted changes, likely yarn.lock is out of date
                          command: git diff --quiet || exit 1
            release-packages:
                description: Build and release local dependency packages
                executor: node
                resource_class: large
                steps:
                    - checkout
                    - attach_workspace:
                          at: .
                    - run: ./node_modules/.bin/lerna publish from-package --yes

            build-image:
                description: Build docker image
                executor: ci
                resource_class: medium
                parameters:
                    project:
                        description: What project to build
                        type: string
                    project_dir:
                        description: Project's parent directory
                        type: string
                steps:
                    - checkout
                    - setup_remote_docker:
                          docker_layer_caching: true
                    - restore_cache:
                          keys:
                              - projects-cache-v1-{{ .Environment.CIRCLE_SHA1 }}
                    - run:
                          command: |
                              cp yarn.lock << parameters.project_dir >>/<< parameters.project >>
                    - aws-ecr/build-and-push-image:
                          extra-build-args: '--build-arg NPM_TOKEN=$NPM_TOKEN'
                          repo: '${AWS_RESOURCE_NAME_PREFIX}'
                          tag: '<< parameters.project >>-${CIRCLE_SHA1}'
                          dockerfile: '<< parameters.project_dir >>/<< parameters.project >>/Dockerfile'
                          path: '<< parameters.project_dir >>/<< parameters.project >>'
                          workspace-root: '<< parameters.project_dir >>/<< parameters.project >>'
            deploy-service:
                description: Build docker image
                executor: ci
                resource_class: medium
                parameters:
                    project:
                        description: What project to build
                        type: string
                    project_dir:
                        description: Project's parent directory
                        type: string
                steps:
                    - checkout
                    - setup_remote_docker:
                          docker_layer_caching: true
                    - restore_cache:
                          keys:
                              - projects-cache-v1-{{ .Environment.CIRCLE_SHA1 }}
                    - aws-ecs/deploy-service-update:
                          aws-region: ${AWS_DEFAULT_REGION}
                          family: '${AWS_RESOURCE_NAME_PREFIX}-service'
                          cluster-name: '${AWS_RESOURCE_NAME_PREFIX}-cluster'
                          container-image-name-updates: container=${AWS_RESOURCE_NAME_PREFIX}-service,tag=<< parameters.project >>-${CIRCLE_SHA1}"

workflows:
    build-and-deploy:
        jobs:
            - utils/dependencies:
                  name: dependencies
                  context: GuiltySpark
            - utils/release-packages:
                  name: release-packages
                  filters:
                      branches:
                          only:
                              - master
                  requires:
                      - dependencies
            - utils/build-image:
                  name: build-web
                  context: GuiltySpark
                  project: web
                  project_dir: projects
                  requires:
                      - release-packages
            - utils/build-image:
                  name: build-graphql
                  context: GuiltySpark
                  project: graphql
                  project_dir: projects
                  requires:
                      - release-packages
            - utils/deploy-service:
                  name: deploy-graphql
                  context: GuiltySpark
                  project: web
                  project_dir: projects
                  requires:
                      - build-web

Tags variable usage missing $

Orb version

1.1.0

What happened

Trying to use aws-ecs-orb/update-service to perform an update-service on a blue-green deployment, but I fail at the step that grabs my previous task definition, i.e. the step named Retrieve previous task definition and prepare new task definition values.

The error I get is

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

Unknown options: --include, TAGS

The problem I'm now having is that when I look through the code for how this job is run, I'm left quite confused by the implementation of TAGS.

It's exported to an environment variable here:

TAGS=$(python $GET_TASK_DFN_VAL_SCRIPT_FILE 'tags' "$PREVIOUS_TASK_DEFINITION")

But the keyword TAGS is not used as an environment variable here (there is no $).
Example of that is here:

PREVIOUS_TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition << parameters.family >> --include TAGS)

Expected behavior

Clarity around why my failure is happening, as I currently can't decipher it from either the errors thrown or the source code.

Unexpected argument(s): aws-region, aws-access-key-id, aws-secret-access-key

Orb version

1.2.0

What happened

During update-task-definition, when aws-access-key-id, aws-secret-access-key and aws-region are specified, it throws an error:

Error calling workflow: 'build-and-deploy'
Error calling job: 'update_task_definition'
Error calling command: 'aws-ecs/update-task-definition'
Unexpected argument(s): aws-region, aws-access-key-id, aws-secret-access-key

Expected behavior

The orb should not throw an error and should accept the parameters as specified in the documentation.

Support Blue/Green Deployment

I would like to use Blue/Green Deployment.
I tried the AWS ECS Orb (circleci/[email protected]), but it does not support Blue/Green Deployment.

This is the CircleCI output (Update service with registered task definition):

#!/bin/bash -eo pipefail
SERVICE_NAME="$(echo sample)"

if [ -z "${SERVICE_NAME}" ]; then
    SERVICE_NAME="$(echo sample-app)"
fi
DEPLOYED_REVISION=$(aws ecs update-service \
    --cluster "sample-cluster" \
    --service "${SERVICE_NAME}" \
    --task-definition "${CCI_ORB_AWS_ECS_REGISTERED_TASK_DFN}" \
    --output text \
    --query service.taskDefinition)
echo "export CCI_ORB_AWS_ECS_DEPLOYED_REVISION='${DEPLOYED_REVISION}'" >> $BASH_ENV

An error occurred (InvalidParameterException) when calling the UpdateService operation: Unable to update task definition on services with a CODE_DEPLOY deployment controller. Use AWS CodeDeploy to trigger a new deployment.
Exited with code 255

Update service with registered task definition

Error parsing parameter '--container-definitions': Expected: '=', received: 'EOF' for input:

Hello,

I'm running into trouble when using the update-task-definition command. I keep getting this error, and I'm having a hard time figuring out why:

REVISION=$(aws ecs register-task-definition \
    --family api \
    --container-definitions "${CCI_ORB_AWS_ECS_CONTAINER_DEFS}" \
    "$@" \
    --output text \
    --query 'taskDefinition.taskDefinitionArn')
echo "Registered task definition: ${REVISION}"
echo "export CCI_ORB_AWS_ECS_REGISTERED_TASK_DFN='${REVISION}'" >> $BASH_ENV

/tmp/.bash_env-5cd1e0d546a0c30008435cb0-0-build: line 4: unexpected EOF while looking for matching `"'
/tmp/.bash_env-5cd1e0d546a0c30008435cb0-0-build: line 10: syntax error: unexpected end of file

Error parsing parameter '--container-definitions': Expected: '=', received: 'EOF' for input:

^
Exited with code 255

Here's how I'm using the command in my config.yml:

- aws-ecs/update-task-definition:
          family: 'api'
          container-image-name-updates: 'container=api-server,image-and-tag=${AWS_ECR_ACCOUNT_URL}/api:${CIRCLE_SHA1}'

And here's the task definition that I'm trying to update:

{
  "ipcMode": null,
  "executionRoleArn": null,
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "logConfiguration": null,
      "entryPoint": null,
      "portMappings": [
        {
          "hostPort": 0,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "command": null,
      "linuxParameters": null,
      "cpu": 0,
      "environment": [ /* ... */ ],
      "resourceRequirements": null,
      "ulimits": null,
      "dnsServers": null,
      "mountPoints": [],
      "workingDirectory": null,
      "secrets": null,
      "dockerSecurityOptions": null,
      "memory": 256,
      "memoryReservation": null,
      "volumesFrom": [],
      "stopTimeout": null,
      "image": "...",
      "startTimeout": null,
      "dependsOn": null,
      "disableNetworking": null,
      "interactive": null,
      "healthCheck": null,
      "essential": true,
      "links": null,
      "hostname": null,
      "extraHosts": null,
      "pseudoTerminal": null,
      "user": null,
      "readonlyRootFilesystem": null,
      "dockerLabels": null,
      "systemControls": null,
      "privileged": null,
      "name": "api-server"
    }
  ],
  "placementConstraints": [],
  "memory": null,
  "taskRoleArn": null,
  "compatibilities": [
    "EC2"
  ],
  "taskDefinitionArn": "...",
  "family": "api",
  "requiresAttributes": [
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    }
  ],
  "pidMode": null,
  "requiresCompatibilities": [],
  "networkMode": null,
  "cpu": null,
  "revision": 43,
  "status": "ACTIVE",
  "proxyConfiguration": null,
  "volumes": []
}

Any help would be much appreciated!

P.S. This is a dope orb.

Allow passing DockerHub credentials

What would you like to be added

We'll need a way to pass the auth block to the docker configuration parameter (docs). I see that you can customize the docker image, but you can't pass auth credentials:

aws-ecs-orb/src/orb.yml.hbs

Lines 390 to 391 in 0f7fce6

docker:
- image: << parameters.docker-image-for-job >>

Why is this needed

Starting November 1, 2020, DockerHub will begin rate-limiting image pulls. Users need a way to pass their DockerHub credentials to avoid the rate limit. Without this configuration option, deployments from CircleCI will be impacted.

Workaround

I was able to work around this by recreating the steps from the job:

deploy-service-update:
  # Use upstream image
  # https://github.com/CircleCI-Public/aws-ecs-orb/blob/0f7fce6f951ff6ccdcf76538efa781eef9efb5f0/src/orb.yml.hbs#L181-L185
  docker:
    - image: circleci/python:3.7.1
      # !!! This is the important part that I can't do today
      auth:
        username: $DOCKERHUB_USERNAME
        password: $DOCKERHUB_ACCESS_TOKEN
  steps:
    - aws-cli/install
    - aws-cli/setup
    - aws-ecs/update-service:
        cluster-name: "my-cluster"
        service-name: "my-service"
        family: "MyTaskDef"
        container-image-name-updates: "container=service,tag=my-service-${CIRCLE_SHA1}"

However, a parameter would be nice to use instead.

No defaults for launch type on consumption of `run-task`

What would you like to be added

On ecs run-task, I'd like the launch-type configuration to have no default value.

Currently the ECS run-task CLI does not require the launch-type parameter. However, the orb implementation does require it and defaults to FARGATE, which doesn't make sense for users who use the EC2 launch type.

Why is this needed

For users who use a launch-type of EC2, the current default creates implementation friction, as the launch-type needs to be overridden. Defaulting to Fargate here will probably cause errors for users who don't want their tasks run under the Fargate launch type.

Deploying updates to Schedule Tasks

What would you like to be added

I'd like to be able to use this Orb to deploy updates to Scheduled Tasks in addition to ECS Services.

I've built this as a separate Orb for now. All the heavy lifting is provided by this Orb already; I just added an additional step to call aws events put-targets. I'm wondering:

  • Would you accept it as a PR into this repository, and if so,
  • Is there anything I should know about adapting my linked code for integration here?

Why is this needed

Scheduled Tasks are a very common way to deploy time-triggered actions on ECS that run the same task definitions as anything else. Supporting this directly in the official "aws-ecs" Orb makes more sense than something separate and unofficial like what I have now.
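For context, the put-targets call mentioned above looks roughly like this (a sketch; the rule name and targets file are placeholders, and the targets JSON would reference the newly registered task definition ARN):

- run:
    name: Point the scheduled task rule at the new task definition (sketch)
    command: |
      aws events put-targets \
        --rule "my-scheduled-task" \
        --targets file://targets.json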

No such file or directory: 'less': 'less'

Orb version

1.1.0

What happened

The CI step is reported as failed, even though the task itself ran just fine.

config.yml

  run-collectstatic:
    docker:
      - image: circleci/python:3.6.8
    steps:
      - checkout
      - run:
          name: Setup ENV VAR
          command: |
            echo 'export ECS_SERVICE_NAME="${AWS_RESOURCE_NAME_PREFIX}-service"' >> $BASH_ENV
            echo 'export FULL_IMAGE_NAME="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/myrepo:${CIRCLE_SHA1}"' >> $BASH_ENV
      - run:
          name: Setup BRANCH
          command: |
            if [ $CIRCLE_BRANCH == "master" ]; then echo 'export CURRENT_BRANCH="production"' >> $BASH_ENV; else echo 'export CURRENT_BRANCH="${CIRCLE_BRANCH}"' >> $BASH_ENV; fi
      - run:
          name: Setup ECS
          command: |
            if [ $CIRCLE_BRANCH == "master" ]; then echo 'export ECS_CLUSTER_NAME="my-production-cluster"' >> $BASH_ENV; else echo 'export ECS_CLUSTER_NAME="my-${CIRCLE_BRANCH}-cluster"' >> $BASH_ENV; fi
      - aws-cli/install
      - aws-cli/setup
      - run:
          name: Get last task definition
          command: >
            STATIC_TASK_DEFINITION_ARN=$(aws ecs describe-task-definition \
                --task-definition myTask-${CURRENT_BRANCH} \
                --output text \
                --query 'taskDefinition.taskDefinitionArn')
            echo "export STATIC_TASK_DEFINITION_ARN='${STATIC_TASK_DEFINITION_ARN}'" >> $BASH_ENV

      - aws-ecs/run-task:
          cluster: ${ECS_CLUSTER_NAME}
          task-definition: ${STATIC_TASK_DEFINITION_ARN}
          awsvpc: false
          launch-type: EC2

CI output

#!/bin/bash -eo pipefail
set -o noglob
if [ "EC2" == "FARGATE" ]; then
    echo "Setting --platform-version"
    set -- "$@" --platform-version "LATEST"
fi
if [ ! -z "" ]; then
    echo "Setting --started-by"
    set -- "$@" --started-by ""
fi
if [ ! -z "" ]; then
    echo "Setting --group"
    set -- "$@" --group ""
fi
if [ ! -z "" ]; then
    echo "Setting --overrides"
    set -- "$@" --overrides ""
fi
if [ ! -z "" ]; then
    echo "Setting --tags"
    set -- "$@" --tags ""
fi
if [ ! -z "" ]; then
    echo "Setting --placement-constraints"
    set -- "$@" --placement-constraints ""
fi
if [ ! -z "" ]; then
    echo "Setting --placement-strategy"
    set -- "$@" --placement-strategy ""
fi
if [ "false" == "true" ]; then
    echo "Setting --enable-ecs-managed-tags"
    set -- "$@" --enable-ecs-managed-tags
fi
if [ "false" == "true" ]; then
    echo "Setting --propagate-tags"
    set -- "$@" --propagate-tags "TASK_DEFINITION"
fi
if [ "false" == "true" ]; then
    echo "Setting --network-configuration"
    set -- "$@" --network-configuration awsvpcConfiguration="{subnets=[],securityGroups=[],assignPublicIp=DISABLED}"
fi

aws ecs run-task \
  --cluster ${ECS_CLUSTER_NAME} \
  --task-definition ${TASK_DEFINITION_ARN} \
  --count 1 \
  --launch-type EC2 \
  "$@"


[Errno 2] No such file or directory: 'less': 'less'

Exited with code exit status 255

Expected behavior

The CI step should be reported as successful, since the task ran just fine on ECS.
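If the failure comes from the CLI trying to page its output through less (as the error message suggests), one possible workaround sketch is to disable the pager before the orb step runs; this assumes the installed AWS CLI version supports the client-side pager and its AWS_PAGER override:

- run:
    name: Disable the AWS CLI output pager (workaround sketch)
    command: echo 'export AWS_PAGER=""' >> $BASH_ENV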

Allow --cli-input-json in update-task-definition

My previous setup in Jenkins would always use aws ecs register-task-definition with the --cli-input-json parameter to update the previous task definition.

It would be nice if this orb allowed for that option as well.
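For reference, the CLI call being asked for looks like this (a sketch; the JSON file path is a placeholder):

- run:
    name: Register task definition from a JSON file (sketch)
    command: |
      aws ecs register-task-definition \
        --cli-input-json file://task-definition.json \
        --output text \
        --query 'taskDefinition.taskDefinitionArn'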

Support cross-account deploys

Problem

I'm using another AWS orb that enables me to assume another role with my CircleCI AWS user. However, I was unable to deploy across multiple AWS accounts using cross-account roles with this orb.

Proposed fix

Explicitly set the profile for all aws ecs commands and add an optional parameter called profile-name (similar to the aws-cli orb), with a default of... default.

I have a fork that I'd be happy to submit as a pull request if there is some consensus on this being the best way to fix this issue.

Steps to reproduce bug

I was able to track this issue down to the AWS CLI (read: it appears to be an AWS CLI bug, not a bug in this orb). If you create an AWS session token (which is necessary to assume a cross-account role), aws commands don't seem to respect it unless you explicitly set your profile, even if it is the default. This has the downstream effect of rendering cross-account deploys impossible with this orb.

Unfortunately, I was only able to reproduce this on a circleci machine, using the ssh feature. I was unable to reproduce on my local machine.

Here's the simplest way to reproduce within a circleci session

Push an update to a circleci project using this snippet of config:

    steps:
      - run:
          name: Setup common environment variables
          command: |
            echo 'export ENVIRONMENT="<< parameters.env >>"' >> $BASH_ENV
            echo 'export AWS_REGION="<< parameters.aws-region >>"' >> $BASH_ENV
            echo 'export ECS_CLUSTER_NAME="<< parameters.env >>"' >> $BASH_ENV
            echo 'export ECS_SERVICE_NAME="<< parameters.env >>-api"' >> $BASH_ENV
            echo 'export ECS_TASK_NAME="<< parameters.env >>-api"' >> $BASH_ENV
            echo 'export FULL_IMAGE_NAME="${AWS_ACCOUNT_ID}.dkr.ecr.<< parameters.aws-region >>.amazonaws.com/hostcompliance/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"' >> $BASH_ENV
      - aws-cli/install
      - run: |
          aws configure set aws_access_key_id ${AWS_ACCESS_KEY} && \
          aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY} && \
          aws iam get-user
      - run: |
          temp_role=$(aws sts assume-role --role-arn "arn:aws:iam::<< parameters.customer-account-id >>:role/DeployFargate" --role-session-name "RoleSession1") && \
          aws configure set aws_access_key_id $(echo $temp_role | jq .Credentials.AccessKeyId | xargs) && \
          aws configure set aws_secret_access_key $(echo $temp_role | jq .Credentials.SecretAccessKey | xargs) && \
          aws configure set aws_session_token $(echo $temp_role | jq .Credentials.SessionToken | xargs)
      - run:
          name: Set default region
          command: |
            aws configure set default.region << parameters.aws-region >>
      - update-service:
          family: "${ECS_TASK_NAME}"
          cluster-name: "${ECS_CLUSTER_NAME}"
          service-name: "${ECS_SERVICE_NAME}"
          container-image-name-updates: "container=${ECS_SERVICE_NAME},image-and-tag=${FULL_IMAGE_NAME}"
          container-env-var-updates: "container=${ECS_SERVICE_NAME},name=ENV,value=${ENVIRONMENT},container=${ECS_SERVICE_NAME},name=AWS_REGION,value=<< parameters.aws-region >>,container=${ECS_SERVICE_NAME},name=VERSION_INFO,value=${CIRCLE_SHA1}_${CIRCLE_BUILD_NUM},container=${ECS_SERVICE_NAME},name=BUILD_DATE,value=\"$(date)\""
          verify-revision-is-deployed: true

This will fail because a task definition doesn't exist. If you ssh to the box, you can try the following:

aws ecs describe-task-definition --task-definition production-api --include TAGS
returns
An error occurred (ClientException) when calling the DescribeTaskDefinition operation: Unable to describe task definition.

aws ecs describe-task-definition --task-definition production-api --include TAGS --profile default
returns

{
    "taskDefinition": {
    ...
    }
}

The crux of the problem here is that by default, the aws cli within a circleci machine is not assuming the cross account role unless you specify the profile in each call.

aws iam get-user yields...

{
    "User": {
        "Path": "/",
        "UserName": "circle-ci-read-only",
        "UserId": "<access key>",
        "Arn": "arn:aws:iam::<primary-account-id>:user/circle-ci-read-only",
        "CreateDate": "2018-10-09T05:48:53Z"
    }
}

whereas aws iam get-user --profile default yields a 403 (because my cross-account role doesn't have that privilege):

An error occurred (AccessDenied) when calling the GetUser operation: User: arn:aws:sts::<customer-account-id>:assumed-role/DeployFargate/RoleSession1 is not authorized to perform: iam:GetUser on resource: user RoleSession1

Cannot dynamically inject in env variables

Hey there!

We want to inject environment variables into the deploy-service-update job or the update-task-definition command.

The problem is that CircleCI doesn't properly support passing environment variables between different steps of a job, and we currently fetch our environment variables from the AWS Parameter Store.

It would be really cool if, for update-task-definition, you could do something like this:

version: 2.1

orbs:
  aws-cli: circleci/[email protected]
  aws-ecs: circleci/[email protected]

jobs:
  update-tag:
    docker:
      - image: circleci/python:3.7.1
    steps:
      - aws-cli/install
      - aws-cli/configure:
          aws-access-key-id: "$AWS_ACCESS_KEY_ID"
          aws-region: "$AWS_REGION"
      - aws-ecs/update-service:
          family: "${MY_APP_PREFIX}-service"
          container-image-name-updates: "container=${MY_APP_PREFIX}-service,tag=stable"
          before: |
            export MY_APP_PREFIX=something_dynamic
workflows:
  deploy:
    jobs:
      - update-tag

Add ability for specifying the platform version on update-service with CodeDeploy

What would you like to be added

It would be nice to have a platform-version parameter on the update-service command so that we can specify a custom platform version for our CodeDeploy deployments. An example would look like the following:

- aws-ecs/update-service:
    # ...
    platform-version: "1.4.0"

I guess you could default this to LATEST or whatever a sensible default is. I'm not familiar with ECS on EC2 so I'm not sure how the platform version behaves there. I am coming from a Fargate background so 1.4.0 is the newest version.

Why is this needed

Currently it isn't possible to update the platform version using this orb due to the way it generates the AppSpec. This would provide an easier method to update from older platform versions to newer platform versions.

Register new task definition fails with "Error parsing parameter ..."

Orb version

orbs:
  aws-ecr: circleci/[email protected]
  aws-ecs: circleci/[email protected]
  aws-cli: circleci/[email protected]
version: 2.1

What happened

With the following config.yml:

orbs:
  aws-ecr: circleci/[email protected]
  aws-ecs: circleci/[email protected]
  aws-cli: circleci/[email protected]
version: 2.1
jobs:
... //other jobs that build and push the containers to ECR


  deploy-to-staging:
    docker:
      - image: alpine:3.7
    steps:
      - run:
          name: Install dependencies
          command: |
            apk add py-pip
            pip install awscli
      - aws-ecs/update-service:
          family: "mytaskdef"
          service-name: "myservice"
          cluster-name: "mycluster"
          container-image-name-updates: "container=container1,image-and-tag=${ECR_URL}/container1:${CIRCLE_SHA1},container=container2,image-and-tag=${ECR_URL}/container2:${CIRCLE_SHA1},container=container3,image-and-tag=${ECR_URL}/container3:${CIRCLE_SHA1}"


workflows:
  version: 2
  build-test-deploy:
    jobs:
...
      - deploy-to-staging:
          context: env.staging
          requires:
            - build-and-push-container-1
            - build-and-push-container-2
            - build-and-push-container-3
...

I get the following error:

Error parsing parameter '--container-definitions': Expected: '=', received: 'EOF' for input:

^

Exited with code exit status 255

For debugging purposes I tried:

  • start the container with SSH
  • put the script from the UI job output of "Register new task definition" into an .sh file
  • source the BASH_ENV file and run the script

Example:

Click to expand

script.sh

#!/bin/sh
if [ -n "${CCI_ORB_AWS_ECS_TASK_ROLE}" ]; then
    set -- "$@" --task-role-arn "${CCI_ORB_AWS_ECS_TASK_ROLE}"
fi
if [ -n "${CCI_ORB_AWS_ECS_EXECUTION_ROLE}" ]; then
    set -- "$@" --execution-role-arn "${CCI_ORB_AWS_ECS_EXECUTION_ROLE}"
fi
if [ -n "${CCI_ORB_AWS_ECS_NETWORK_MODE}" ]; then
    set -- "$@" --network-mode "${CCI_ORB_AWS_ECS_NETWORK_MODE}"
fi
if [ -n "${CCI_ORB_AWS_ECS_VOLUMES}" ] && [ "${CCI_ORB_AWS_ECS_VOLUMES}" != "[]" ]; then
    set -- "$@" --volumes "${CCI_ORB_AWS_ECS_VOLUMES}"
fi
if [ -n "${CCI_ORB_AWS_ECS_PLACEMENT_CONSTRAINTS}" ] && [ "${CCI_ORB_AWS_ECS_PLACEMENT_CONSTRAINTS}" != "[]" ]; then
    set -- "$@" --placement-constraints "${CCI_ORB_AWS_ECS_PLACEMENT_CONSTRAINTS}"
fi
if [ -n "${CCI_ORB_AWS_ECS_REQ_COMP}" ] && [ "${CCI_ORB_AWS_ECS_REQ_COMP}" != "[]" ]; then
    set -- "$@" --requires-compatibilities ${CCI_ORB_AWS_ECS_REQ_COMP}
fi
if [ -n "${CCI_ORB_AWS_ECS_TASK_CPU}" ]; then
    set -- "$@" --cpu "${CCI_ORB_AWS_ECS_TASK_CPU}"
fi
if [ -n "${CCI_ORB_AWS_ECS_TASK_MEMORY}" ]; then
    set -- "$@" --memory "${CCI_ORB_AWS_ECS_TASK_MEMORY}"
fi
if [ -n "${CCI_ORB_AWS_ECS_PID_MODE}" ]; then
    set -- "$@" --pid-mode "${CCI_ORB_AWS_ECS_PID_MODE}"
fi
if [ -n "${CCI_ORB_AWS_ECS_IPC_MODE}" ]; then
    set -- "$@" --ipc-mode "${CCI_ORB_AWS_ECS_IPC_MODE}"
fi
if [ -n "${CCI_ORB_AWS_ECS_TAGS}" ] && [ "${CCI_ORB_AWS_ECS_TAGS}" != "[]" ]; then
    set -- "$@" --tags "${CCI_ORB_AWS_ECS_TAGS}"
fi
if [ -n "${CCI_ORB_AWS_ECS_PROXY_CONFIGURATION}" ] && [ "${CCI_ORB_AWS_ECS_PROXY_CONFIGURATION}" != "{}" ]; then
    set -- "$@" --proxy-configuration "${CCI_ORB_AWS_ECS_PROXY_CONFIGURATION}"
fi
REVISION=$(aws ecs register-task-definition \
    --family mytaskdef \
    --container-definitions "${CCI_ORB_AWS_ECS_CONTAINER_DEFS}" \
    "$@" \
    --output text \
    --query 'taskDefinition.taskDefinitionArn')
echo "Registered task definition: ${REVISION}"

command:

94cxxxxb8:~# echo $BASH_ENV
/tmp/.bash_env-5dfxxxxxxxxxxx6ed-0-build
94cxxxxb8:~# . /tmp/.bash_env-5dfxxxxxxxxxxx6ed-0-build && ./script.sh

Then it works and updates the task definition with the correct values. This suggests that the parameters passed to the orb are correct.

Expected behavior

The ECS task definition and the service are updated.

Any help would be greatly appreciated.

Polling multiple service at once with `verify-revision-is-deployed`

What would you like to be added

- verify-revision-is-deployed:
    services:
      - cluster-name: ...
        service-name: ...
        family-name: ...
        task-definition-arn: ...
      - cluster-name: ...
        service-name: ...
        family-name: ...
        task-definition-arn: ...
      - cluster-name: ...
        service-name: ...
        family-name: ...
        task-definition-arn: ...

Why is this needed

We deploy, verify, then notify for multiple services within a single CircleCI job. Currently, we have sequential aws-ecs/update-service steps, then sequential verify-revision-is-deployed steps, then a slack/status step.

So the timeout of a verify-revision-is-deployed step depends on the previous verify-revision-is-deployed step.

This could be configured as parallel verify jobs in the workflow, but doing so just wastes computation resources (because the steps are only "waiting").

proxyConfiguration, tags, pidMode and ipcMode are not copied to new task definition

When making a new revision of an existing task definition, existing proxyConfiguration, tags, pidMode and ipcMode attribute values are not copied over to the new revision. proxyConfiguration seems to be a relatively recent task definition attribute not referenced in older versions (at least as old as 1.16.60) of the AWS CLI.

Reference: https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html

container-env-var-updates trims '=' characters from values

Orb version

1.2.0

What happened

I'm passing a couple of env vars using the aws-ecs/update-service command as part of a job. In container-env-var-updates I set a variable with a base64 value which ends in =. After a successful deployment to ECS, in the task definition JSON view in the AWS console, I see that the = signs at the end have been trimmed.

I pass the variable this way : container=myContainer,name="someName",value="<base64>"
I used the quotation marks to make sure the value is not messed up because of improper formatting.

Note: Being a base64 value, the = signs are only found at the end of the value. Might be relevant.

Expected behavior

Any = sign included in the value should be seen in the task definition JSON view in the AWS console.

Trouble passing environmental variables to the orb

Orb version

1.1.0, as recommended on the CircleCI Orb listing.

What happened

I’m having issues figuring out how to pass any sort of computed variable into the orb. In my use case, I need to figure out the cluster name, service name (and perhaps other things that I’m not yet aware of) based on the branch that CircleCI is running on. For example, a commit (realistically, a PR merge) on develop branch means that service running on development cluster should be updated, whereas a commit on the master branch would indicate that production environment is to be updated.

I had a similar issue with the ECR orb when I was trying to use the jobs it defines; however, as soon as I switched to the commands it provides, the issue went away and now I can have statements like so:

jobs:
  build-push:
    machine:
      image: ubuntu-1604:201903-01
      docker_layer_caching: true
    steps:
      - run:
          name: Determining Docker registry
          command: |
            if [[ "${CIRCLE_BRANCH}" == "develop" ]] || [[ "${CIRCLE_BRANCH}" == "circleci-integration" ]]; then
              echo "==> Deploying to development"
              echo "export REPOSITORY_NAME=<prefix>-development-app" >> $BASH_ENV
            elif [[ "${CIRCLE_BRANCH}" == "staging" ]]; then
              echo "==> Deploying to staging"
              echo "export REPOSITORY_NAME=<prefix>-staging-app" >> $BASH_ENV
            elif [[ "${CIRCLE_BRANCH}" == "master" ]]; then
              echo "==> Deploying to production"
              echo "export REPOSITORY_NAME=<prefix>-production-app" >> $BASH_ENV
            else
              echo "==> Invalid branch name"
              exit 1
            fi
      - aws-ecr/build-and-push-image:
          repo: "${REPOSITORY_NAME}"
          tag: latest

However, the same adjustments don’t seem to work on the ECS orb:

  deploy:
    machine:
      image: ubuntu-1604:201903-01
      docker_layer_caching: true
    steps:
      - run:
          name: Determining ECS cluster and service
          command: |
            if [[ "${CIRCLE_BRANCH}" == "develop" ]] || [[ "${CIRCLE_BRANCH}" == "circleci-integration" ]]; then
              echo "==> Deploying to development"
              echo "export CLUSTER_NAME=<prefix>-development-cluster" >> $BASH_ENV
              echo "export SERVICE_NAME=<prefix>-development-app" >> $BASH_ENV
            elif [[ "${CIRCLE_BRANCH}" == "staging" ]]; then
              echo "==> Deploying to staging"
              echo "export CLUSTER_NAME=<prefix>-staging-cluster" >> $BASH_ENV
              echo "export SERVICE_NAME=<prefix>-staging-app" >> $BASH_ENV
            elif [[ "${CIRCLE_BRANCH}" == "master" ]]; then
              echo "==> Deploying to production"
              echo "export CLUSTER_NAME=<prefix>-production-cluster" >> $BASH_ENV
              echo "export SERVICE_NAME=<prefix>-production-app" >> $BASH_ENV
            else
              echo "==> Invalid branch name"
              exit 1
            fi
#      - aws-ecs/update-task-definition:
#          family: "${SERVICE_NAME}"
      - aws-ecs/update-service:
          cluster-name: "${CLUSTER_NAME}"
          family: "${SERVICE_NAME}"
          force-new-deployment: true
          verify-revision-is-deployed: true

The errors I’m getting are various. Most recently, I’m getting a You must specify a region. You can also configure your region by running "aws configure". error, even though AWS_REGION env variable is defined in the CircleCI Project Settings page (and clearly considered by the ECR Orb). Some while back, perhaps with a slightly different YAML definition (sorry, I can’t seem to find it though), I was getting the following output:

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]

To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

Unknown options: --include, TAGS

Exited with code exit status 255

When I SSH into the failed job and dump environment variables with the env command, I can't see SERVICE_NAME or CLUSTER_NAME (that is the case with both versions of the error message). I think the commands of this Orb (or at the very least, the update-service command) fail to load env variables defined in previous steps; but I don't know enough about CircleCI to be sure I'm not just doing something wrong…

(One thing I noticed is that the ECR Orb defines a default executor while the ECS one doesn’t – could that be the issue?)

Expected behavior

Consistent behaviour from the ECS Orb, where environment variables defined in previous steps are visible and considered during execution.

Thank you so much for taking the time to read this long report! I'll be happy for any guidance, because this issue seems to be too much for me to tackle alone…

Document minimum aws version needed

Document the minimum AWS CLI version needed to support the --include TAGS option that the orb uses, and the known machine executor images that do not have a new enough version pre-installed. This may help avoid issues like #98.

Perhaps this could go into a "Common Problem Causes" section of the readme.

Failure to retrieve previous task

When updating a task the previous task definition is not found.

Unfortunately the output is not very helpful, and all that I get (aside from python) in the result is:

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument --task-definition: expected one argument
Exited with code 2

Force new deployment flag.

What would you like to be added

The aws ecs update-service command accepts a --force-new-deployment or --no-force-new-deployment flag. I currently don't see this feature in the CircleCI docs for this orb under update-service. I'd like to be able to translate my current workflow commands directly into orbs; however, the lack of this feature makes that difficult.

Why is this needed

Provides more parity between the CLI and Orb in terms of usage.
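Until the orb exposes it, the flag can be passed directly to the CLI; a workaround sketch with placeholder names:

- run:
    name: Force a new deployment (workaround sketch)
    command: |
      aws ecs update-service \
        --cluster "my-cluster" \
        --service "my-service" \
        --force-new-deployment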

asterisk in task definition replaced with update_container_defs

Orb version

version 1.0.4

What happened

Used aws-ecs/update-service in a job to deploy a task definition with a command property. The service is a scheduler and takes a flag as an argument specifying a schedule in cron syntax, e.g. -schedule="30 10 * * 4". When the task definition is updated in ecs, the command looks like this -schedule="30 10 _update_container_defs.py.V2o99u _update_container_defs.py.V2o99u 4".

Expected behavior

Ideally, when the task definition is updated in ECS, the command would read -schedule="30 10 * * 4". This looks like it could be caused by bash filename expansion (a small demonstration follows) -- maybe somewhere around here: https://github.com/CircleCI-Public/aws-ecs-orb/blob/master/src/orb.yml.hbs#L644
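A small shell demonstration of the suspected expansion behaviour (this is not the orb's code, just an illustration):

- run:
    name: Demonstrate bash filename expansion on the cron string
    command: |
      CMD='-schedule="30 10 * * 4"'
      touch somefile.txt   # any file in the working directory can be picked up by the glob
      echo $CMD            # unquoted: each * may expand to matching filenames
      echo "$CMD"          # quoted: the asterisks survive intact
      set -o noglob
      echo $CMD            # with noglob, expansion is disabled even when unquoted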

Not passing region flag in `aws ecs describe-task-definition` command

Hi,
I am having an issue where the orb can't find the task definition because --region is missing from the PREVIOUS_TASK_DEFINITION command.
I have already set the aws-region parameter and the AWS_DEFAULT_REGION env variable, but with no luck.

Would appreciate some help with this one.

#!/bin/bash -eo pipefail
PREVIOUS_TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition redacted-api)
CONTAINER_IMAGE_NAME_UPDATES="$(echo "container=redacted-api,tag=${CIRCLE_SHA1}")"
CONTAINER_ENV_VAR_UPDATES="$(echo )"
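
One workaround sketch, assuming the region genuinely is not visible inside this step, is to make it explicit for the CLI before the orb command runs (us-east-1 is a placeholder):

# Set the region both as an environment variable and in the shared AWS config file
export AWS_DEFAULT_REGION=us-east-1
aws configure set region "$AWS_DEFAULT_REGION"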

bug: update-service does not update image tag

echo $PREVIOUS_TASK_DEFINITION
{
    "taskDefinition": {
        "taskDefinitionArn": "arn:aws:ecs:ap-northeast-1:xxxxxx:task-definition/z-admin-qa1:24",
        "containerDefinitions": [
            {
                "name": "z-admin-qa1",
                "image": "asia.gcr.io/zehitomo-1207/z-admin:master-954cbe7",
                "cpu": 0,
                "memory": 256,
                "portMappings": [
                    {
                        "containerPort": 80,
                        "hostPort": 0,
                        "protocol": "tcp"
                    }
                ],
                "essential": true,
                "environment": [],
                "mountPoints": [],
                "volumesFrom": [],
                "startTimeout": 60,
                "stopTimeout": 60,
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": "/ecs/z-admin-qa1",
                        "awslogs-region": "ap-northeast-1",
                        "awslogs-stream-prefix": "ecs"
                    }
                },
                "healthCheck": {
                    "command": [
                        "CMD-SHELL",
                        "curl -f http://localhost/aws/health-check || exit 1"
                    ],
                    "interval": 60,
                    "timeout": 10,
                    "retries": 3,
                    "startPeriod": 60
                }
            }
        ],
        "family": "z-admin-qa1",
        "executionRoleArn": "arn:aws:iam::xxxxxx:role/ecsTaskExecutionRole",
        "networkMode": "bridge",
        "revision": 24,
        "volumes": [],
        "status": "ACTIVE",
        "requiresAttributes": [
            {
                "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
            },
            {
                "name": "ecs.capability.execution-role-awslogs"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
            },
            {
                "name": "ecs.capability.container-health-check"
            },
            {
                "name": "ecs.capability.container-ordering"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
            }
        ],
        "placementConstraints": [],
        "compatibilities": [
            "EC2"
        ],
        "requiresCompatibilities": [
            "EC2"
        ],
        "cpu": "256",
        "memory": "256"
    }
}
echo $CONTAINER_IMAGE_NAME_UPDATES
tag=feature-visualize-request-health-7b3af6b
→ CONTAINER_DEFS=$(python $UPDATE_CONTAINER_DEFS_SCRIPT_FILE "$PREVIOUS_TASK_DEFINITION" "$CONTAINER_IMAGE_NAME_UPDATES" "$CONTAINER_ENV_VAR_UPDATES")
→ echo $CONTAINER_DEFS
[{"environment": [], "name": "z-admin-qa1", "mountPoints": [], "healthCheck": {"retries": 3, "interval": 60, "command": ["CMD-SHELL", "curl -f http://localhost/aws/health-check || exit 1"], "startPeriod": 60, "timeout": 10}, "image": "asia.gcr.io/zehitomo-1207/z-admin:master-954cbe7", "cpu": 0, "portMappings": [{"protocol": "tcp", "containerPort": 80, "hostPort": 0}], "startTimeout": 60, "stopTimeout": 60, "logConfiguration": {"logDriver": "awslogs", "options": {"awslogs-stream-prefix": "ecs", "awslogs-group": "/ecs/z-admin-qa1", "awslogs-region": "ap-northeast-1"}}, "memory": 256, "essential": true, "volumesFrom": []}]

As you can see, the image tag didn't change.

→ cat _update_container_defs.py.a0w8Hj
from __future__ import absolute_import
import sys
import json


def run(previous_task_definition, container_image_name_updates, container_env_var_updates):
    try:
        definition = json.loads(previous_task_definition)
        container_definitions = definition['taskDefinition']['containerDefinitions']
    except:
        raise Exception('No valid task definition found: ' +
                        previous_task_definition)

    # Build a map of the original container definitions so that the
    # array index positions can be easily looked up
    container_map = {}
    for index, container_definition in enumerate(container_definitions):
        env_var_map = {}
        env_var_definitions = container_definition.get('environment')
        if env_var_definitions is not None:
            for env_var_index, env_var_definition in enumerate(env_var_definitions):
                env_var_map[env_var_definition['name']] = {
                    'index': env_var_index}
        container_map[container_definition['name']] = {
            'image': container_definition['image'], 'index': index, 'environment_map': env_var_map}

    # Expected format: container=...,name=...,value=...,container=...,name=...,value=
    try:
        env_kv_pairs = container_env_var_updates.split(',')
        for index, kv_pair in enumerate(env_kv_pairs):
            kv = kv_pair.split('=')
            key = kv[0].strip()

            if key == 'container':
                container_name = kv[1].strip()
                env_var_name_kv = env_kv_pairs[index+1].split('=')
                env_var_name = env_var_name_kv[1].strip()
                env_var_value_kv = env_kv_pairs[index+2].split('=')
                env_var_value = env_var_value_kv[1].strip()
                if env_var_name_kv[0].strip() != 'name' or env_var_value_kv[0].strip() != 'value':
                    raise ValueError(
                        'Environment variable update parameter format is incorrect: ' + container_env_var_updates)

                container_entry = container_map.get(container_name)
                if container_entry is None:
                    raise ValueError('The container ' + container_name +
                                     ' is not defined in the existing task definition')
                container_index = container_entry['index']
                env_var_entry = container_entry['environment_map'].get(
                    env_var_name)
                if env_var_entry is None:
                    # The existing container definition did not contain environment variables
                    if container_definitions[container_index].get('environment') is None:
                        container_definitions[container_index]['environment'] = []
                    # This env var did not exist in the existing container definition
                    container_definitions[container_index]['environment'].append({'name': env_var_name, 'value': env_var_value})
                else:
                    env_var_index = env_var_entry['index']
                    container_definitions[container_index]['environment'][env_var_index]['value'] = env_var_value
            elif key and key not in ['container', 'name', 'value']:
                raise ValueError(
                    'Incorrect key found in environment variable update parameter: ' + key)
    except ValueError as value_error:
        raise value_error
    except:
        raise Exception(
            'Environment variable update parameter could not be processed; please check parameter value: ' + container_env_var_updates)

    # Expected format: container=...,image-and-tag|image|tag=...,container=...,image-and-tag|image|tag=...,
    try:
        image_kv_pairs = container_image_name_updates.split(',')
        for index, kv_pair in enumerate(image_kv_pairs):
            kv = kv_pair.split('=')
            key = kv[0].strip()
            if key == 'container':
                container_name = kv[1].strip()
                image_kv = image_kv_pairs[index+1].split('=')
                container_entry = container_map.get(container_name)
                if container_entry is None:
                    raise ValueError('The container ' + container_name +
                                     ' is not defined in the existing task definition')
                container_index = container_entry['index']
                image_specifier_type = image_kv[0].strip()
                image_value = image_kv[1].strip()
                if image_specifier_type == 'image-and-tag':
                    container_definitions[container_index]['image'] = image_value
                else:
                    existing_image_name_tokens = container_entry['image'].split(
                        ':')
                    if image_specifier_type == 'image':
                        tag = ''
                        if len(existing_image_name_tokens) == 2:
                            tag = ':' + existing_image_name_tokens[1]
                        container_definitions[container_index]['image'] = image_value + tag
                    elif image_specifier_type == 'tag':
                        container_definitions[container_index]['image'] = existing_image_name_tokens[0] + ':' + image_value
                    else:
                        raise ValueError(
                            'Image name update parameter format is incorrect: ' + container_image_name_updates)
            elif key and key not in ['container', 'image', 'image-and-tag', 'tag']:
                raise ValueError(
                    'Incorrect key found in image name update parameter: ' + key)

    except ValueError as value_error:
        raise value_error
    except:
        raise Exception(
            'Image name update parameter could not be processed; please check parameter value: ' + container_image_name_updates)
    return json.dumps(container_definitions)


if __name__ == '__main__':
    try:
        print(run(sys.argv[1], sys.argv[2], sys.argv[3]))
    except Exception as e:
        sys.stderr.write(str(e) + "\n")
        exit(1)
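
Reading the script above, the image-update loop only acts on key/value pairs introduced by a container= key; a bare tag=... value falls into the final elif (where 'tag' is an allowed key) and is silently skipped, which would explain why the echoed container definitions keep the old image. A sketch of the parameter shape the script expects for this task definition, with the container name taken from the JSON above:

CONTAINER_IMAGE_NAME_UPDATES="container=z-admin-qa1,tag=feature-visualize-request-health-7b3af6b"
CONTAINER_DEFS=$(python "$UPDATE_CONTAINER_DEFS_SCRIPT_FILE" "$PREVIOUS_TASK_DEFINITION" "$CONTAINER_IMAGE_NAME_UPDATES" "$CONTAINER_ENV_VAR_UPDATES")
echo "$CONTAINER_DEFS"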

Functionality to unset container env vars?

Do you think it would be appropriate for this orb to have a command to unset environment variables, or perhaps a way to pass an option to the existing container-env-var-updates to remove any env vars not specified in the update clause? Thanks! Big fan of this orb.
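
In the meantime, one workaround sketch outside the orb is to filter the environment array with jq before re-registering the task definition; my-family and UNWANTED_VAR are placeholders:

# Fetch the current task definition and drop a specific env var from every container
PREVIOUS_TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition my-family)
echo "$PREVIOUS_TASK_DEFINITION" | jq '.taskDefinition.containerDefinitions | map(.environment |= map(select(.name != "UNWANTED_VAR")))'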
