
⚠️ EOL NOTICE ⚠️

The Amazon Genomics CLI project has entered its End Of Life (EOL) phase. The code is no longer actively maintained and the GitHub repository will be archived on May 31, 2024. During this time, we encourage customers to migrate to AWS HealthOmics to run their genomics workflows on AWS, or to reach out to their AWS account team for alternative solutions. While the source code of AGC will remain available after the EOL date, we will not make any updates, including addressing issues or accepting pull requests.

Amazon Genomics CLI

Overview

The Amazon Genomics CLI is a tool that simplifies deploying the AWS infrastructure required to run genomics workflows in the cloud, submitting those workflows, and monitoring their logs and outputs.

Quick Start

To get an introduction to Amazon Genomics CLI refer to the Quick Start Guide in our wiki.

Further Reading

For full documentation, please refer to our docs.

Releases

All releases can be accessed on our releases page.

The latest nightly build can be accessed here: s3://healthai-public-assets-us-east-1/amazon-genomics-cli/nightly-build/amazon-genomics-cli.zip

Development

To build from source you will need to ensure the following prerequisites are met.

One-time setup

There are a few prerequisites you'll need to install on your machine before you can start developing.

Once you've installed all the dependencies listed here, run make init to install the rest.

Go

The Amazon Genomics CLI is written in Go.

To manage and install Go versions, we use goenv. Follow the installation instructions here.

Once goenv is installed, use it to install the version of Go required by the Amazon Genomics CLI build process, so that it will be available when the build process invokes goenv's go shim:

goenv install

You will need to do this step again whenever the required version of Go is changed.

Node

Amazon Genomics CLI makes use of the AWS CDK to deploy infrastructure into an AWS account. Our CDK code is written in TypeScript. You'll need Node to ensure the appropriate dependencies are installed at build time.

To manage and install Node versions, we use nvm.

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
echo 'export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"' >> ~/.bashrc
echo '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc
source ~/.bashrc
nvm install

Note: If you are using Zsh, replace ~/.bashrc with ~/.zshrc.

Sed (macOS)

macOS ships the BSD version of sed, which is incompatible with some GNU sed syntax used by our scripts. If you are on a Mac, install GNU sed to ensure script compatibility.

brew install gnu-sed
echo 'export PATH="$(brew --prefix gnu-sed)/libexec/gnubin:$PATH"' >> ~/.bashrc
source ~/.bashrc

Note: If you are using Zsh, replace ~/.bashrc with ~/.zshrc.


Make

We use make to build, test and deploy artifacts. To build and test, run make from the project root.

If you're experiencing build issues, try running go clean --cache in the project root to clear your local Go build cache, then run make init followed by make. This should resolve most build problems.

Running Development Code

Option 1. Running with local release

Unlike the run-dev.sh script (Option 2), this option builds and installs a new version of Amazon Genomics CLI, replacing any installed version. To run a release version of Amazon Genomics CLI from your local build, first build your changes and then run make release. This creates a release bundle in dist/ at the package root. Run the install.sh script in the dist folder to install your local release version of Amazon Genomics CLI. After installing, you should be able to run agc in your terminal.

Option 2. Running with development script

To run against a development version of Amazon Genomics CLI, first build your relevant changes and then run ./scripts/run-dev.sh. This will set the required environment variables and then enter into an Amazon Genomics CLI command shell.

Option 3. Running from development code manually + Custom images

  • Update dependencies and build code with make init && make. At this point the compiled binary will be found at packages/cli/bin/local/agc.
  • Optionally, you may build the install package and install the binary and CDK libraries. make release && (cd dist/amazon-genomics-cli && ./install.sh)
  • Before creating any contexts, ensure you have the relevant environment variables set to point to the ECR repository holding the images of the engines you wish to test. Leave these values unset to test against production images.
export ECR_CROMWELL_ACCOUNT_ID=<some-value>
export ECR_CROMWELL_REGION=<some-value>
export ECR_CROMWELL_TAG=<some-value>
export ECR_CROMWELL_REPOSITORY=<some-value>

export ECR_NEXTFLOW_ACCOUNT_ID=<some-value>
export ECR_NEXTFLOW_REGION=<some-value>
export ECR_NEXTFLOW_TAG=<some-value>
export ECR_NEXTFLOW_REPOSITORY=<some-value>

export ECR_MINIWDL_ACCOUNT_ID=<some-value>
export ECR_MINIWDL_REGION=<some-value>
export ECR_MINIWDL_TAG=<some-value>
export ECR_MINIWDL_REPOSITORY=<some-value>

export ECR_TOIL_ACCOUNT_ID=<some-value>
export ECR_TOIL_REGION=<some-value>
export ECR_TOIL_TAG=<some-value>
export ECR_TOIL_REPOSITORY=<some-value>

These environment variables point to the ECR account, region, repository, and tag of the Cromwell, Nextflow, MiniWDL, and Toil engine images, respectively, that will be deployed for your contexts. They are used when you create a context with the corresponding engine type.
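As a sketch of how these four pieces fit together, an engine image reference resolves to a standard ECR image URI. The helper below is illustrative, not part of the AGC codebase:

```go
package main

import (
	"fmt"
	"os"
)

// ecrImageURI assembles a fully qualified ECR image reference from the
// account, region, repository, and tag values supplied by the ECR_* variables.
func ecrImageURI(accountID, region, repository, tag string) string {
	return fmt.Sprintf("%s.dkr.ecr.%s.amazonaws.com/%s:%s", accountID, region, repository, tag)
}

func main() {
	// For example, the Cromwell engine image for the variables exported above:
	fmt.Println(ecrImageURI(
		os.Getenv("ECR_CROMWELL_ACCOUNT_ID"),
		os.Getenv("ECR_CROMWELL_REGION"),
		os.Getenv("ECR_CROMWELL_REPOSITORY"),
		os.Getenv("ECR_CROMWELL_TAG"),
	))
}
```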

The ./scripts/run-dev.sh script contains logic to determine the current dev versions of the images, which you would typically use. You may also use production images; their current values are written when you activate an account with the production version of Amazon Genomics CLI. If you have customized containers that you want to develop against, you can specify them here; however, you will need to make them available if you wish to submit pull requests with code that depends on them.

Building locally with CodeBuild

This package is buildable with AWS CodeBuild. You can use the AWS CodeBuild agent to run CodeBuild builds on a local machine.

You only need to set up the build image the first time you run the agent, or when the image has changed. To set up the build image, use the following commands:

git clone https://github.com/aws/aws-codebuild-docker-images.git
cd aws-codebuild-docker-images/ubuntu/standard/5.0
docker build -t aws/codebuild/standard:5.0 .
docker pull amazon/aws-codebuild-local:latest --disable-content-trust=false

Create an environment file (e.g. env.txt) with the appropriate entries depending on which image tags you want to use.

CROMWELL_ECR_TAG=2021-06-17T23-48-54Z
ECR_NEXTFLOW_TAG=2021-06-17T23-48-54Z
WES_ECR_TAG=2021-06-17T23-48-54Z

In the root directory for this package, download and run the CodeBuild build script:

wget https://raw.githubusercontent.com/aws/aws-codebuild-docker-images/master/local_builds/codebuild_build.sh
chmod +x codebuild_build.sh
./codebuild_build.sh -i aws/codebuild/standard:5.0 -a ./output -c -e env.txt

Configuring docker image location

The default values for all variables are placeholders (e.g. 'WES_ECR_TAG_PLACEHOLDER'). Each placeholder is replaced with its actual value during the build process.
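A minimal sketch of that substitution step (the function and sample values here are illustrative, not the actual build tooling):

```go
package main

import (
	"fmt"
	"strings"
)

// substitutePlaceholders swaps *_PLACEHOLDER tokens for their build-time
// values, mirroring what the build process does with the defaults above.
func substitutePlaceholders(s string, values map[string]string) string {
	for placeholder, value := range values {
		s = strings.ReplaceAll(s, placeholder, value)
	}
	return s
}

func main() {
	fmt.Println(substitutePlaceholders(
		"wes tag: WES_ECR_TAG_PLACEHOLDER",
		map[string]string{"WES_ECR_TAG_PLACEHOLDER": "2021-06-17T23-48-54Z"},
	))
}
```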

WES adapter for Cromwell

Local environment variables:

  • ECR_WES_ACCOUNT_ID
  • ECR_WES_REGION
  • ECR_WES_TAG

The corresponding AWS Systems Manager Parameter Store property names:

  • /agc/_common/wes/ecr-repo/account
  • /agc/_common/wes/ecr-repo/region
  • /agc/_common/wes/ecr-repo/tag

Cromwell engine

Local environment variables:

  • ECR_CROMWELL_ACCOUNT_ID
  • ECR_CROMWELL_REGION
  • ECR_CROMWELL_TAG

The corresponding AWS Systems Manager Parameter Store property names:

  • /agc/_common/cromwell/ecr-repo/account
  • /agc/_common/cromwell/ecr-repo/region
  • /agc/_common/cromwell/ecr-repo/tag

Nextflow engine

Local environment variables:

  • ECR_NEXTFLOW_ACCOUNT_ID
  • ECR_NEXTFLOW_REGION
  • ECR_NEXTFLOW_TAG

The corresponding AWS Systems Manager Parameter Store property names:

  • /agc/_common/nextflow/ecr-repo/account
  • /agc/_common/nextflow/ecr-repo/region
  • /agc/_common/nextflow/ecr-repo/tag

Contributing

Issues

See Reporting Bugs/Feature Requests for more information. For a list of open bugs and feature requests, please refer to our issues page.

Pull Requests

See Contributing via Pull Requests

Security

See Security Issue Notification for more information.

License

This project is licensed under the Apache-2.0 License.

amazon-genomics-cli's People

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

amazon-genomics-cli's Issues

EBS data volume not mounted for some Nextflow jobs

Some of the jobs submitted with Nextflow do not mount the 200GB data disk, resulting in a "No space left on device" error. This happens only with a few random jobs.

Steps to Reproduce

agc workflow run demo-nextflow-project --context spotCtx

Relevant Logs

I get the error below:
|abort: write error ....(No space left on device)

Doing a df -h on the instance shows the 200GB disk (/dev/nvme1n1) not being mounted:

df -h
  | Filesystem Size Used Avail Use% Mounted on
  | overlay 30G 9.4G 20G 33% /
  | tmpfs 64M 0 64M 0% /dev
  | tmpfs 5.0G 0 5.0G 0% /sys/fs/cgroup
  | /dev/nvme0n1p1 30G 9.4G 20G 33% /opt/aws-cli
  | shm 64M 0 64M 0% /dev/shm
  | tmpfs 5.0G 0 5.0G 0% /proc/acpi
  | tmpfs 5.0G 0 5.0G 0% /sys/firmware

Expected Behavior
Mount the 200GB disk (/dev/nvme1n1 ) on /

df -h
| Filesystem Size Used Avail Use% Mounted on
| /dev/nvme1n1 200G 12G 188G 6% /
| tmpfs 64M 0 64M 0% /dev
| tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
| /dev/nvme0n1p1 30G 6.4G 23G 22% /opt/aws-cli
| /dev/nvme1n1 200G 12G 188G 6% /etc/hosts
| shm 64M 0 64M 0% /dev/shm
| tmpfs 3.8G 0 3.8G 0% /proc/acpi
| tmpfs 3.8G 0 3.8G 0% /sys/firmware

Actual Behavior

The 200GB disk (/dev/nvme1n1) is not mounted; see the df -h output in the logs above.

Operating System:
AGC Version: agc version: 1.0.0-41-gc5ac696
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

Change the Nextflow head node instance size and type

Use Case

The Nextflow head node currently runs on a c5.large instance. The head node sometimes crashes due to memory issues or other reasons.

Proposed Solution

Ability to change the instance size and type of the Nextflow head node.

Improved logs UX

Description

Design an improved logging UX to handle the use cases below.

Use Case

  • As an AGC user, I would like the log follow flag to behave more like *nix tail -f, so that it shows 10 lines of previous context and then begins following new log messages. If no logs have yet been produced for the log stream, AGC should be smart enough to connect to them when they start.
  • As an AGC user, I would like to get the log for the currently running task; if multiple tasks are running in parallel, I would like to be able to choose/specify the task log of interest.
  • As an AGC user, I would like to be able to see a list of currently running tasks as well as completed tasks from a workflow.
  • As an AGC user running a workflow where the engine runs as a Batch job (e.g. Nextflow), I need to be able to select the relevant job to get the engine logs for a workflow.

Proposed Solution

Produce a design for approval and generate new issues to implement that design.

Other information

Unable to supply an array of files as input via `--args`

Describe the Bug

I have an absolute minimal WDL example (attached).
It works fine if I specify the input in the MANIFEST.json. It fails if I specify the input via --args.

I think the problem occurs when an input file is of array type.

Steps to Reproduce

$ agc --version
agc version: 1.0.0-41-gc5ac696
$ agc workflow run minimal --context ctx1 --args workflows/minimal/test.aws.inputs.json
2021-10-20T18:45:38-04:00 𝒊  Running workflow. Workflow name: 'minimal', Arguments: 'workflows/minimal/test.aws.inputs.json', Context: 'ctx1'
2021-10-20T18:45:39-04:00 ✘   error="unable to run workflow: json: cannot unmarshal array into Go value of type string"
Error: an error occurred invoking 'workflow run'
with variables: {WorkflowName:minimal Arguments:workflows/minimal/test.aws.inputs.json ContextName:ctx1}
caused by: unable to run workflow: json: cannot unmarshal array into Go value of type string

Relevant Logs

$ agc workflow run minimal --context ctx1 --args workflows/minimal/test.aws.inputs.json
2021-10-20T18:45:38-04:00 𝒊  Running workflow. Workflow name: 'minimal', Arguments: 'workflows/minimal/test.aws.inputs.json', Context: 'ctx1'
2021-10-20T18:45:39-04:00 ✘   error="unable to run workflow: json: cannot unmarshal array into Go value of type string"
Error: an error occurred invoking 'workflow run'
with variables: {WorkflowName:minimal Arguments:workflows/minimal/test.aws.inputs.json ContextName:ctx1}
caused by: unable to run workflow: json: cannot unmarshal array into Go value of type string

Expected Behavior

Should run fine.

Actual Behavior

Threw an exception immediately and terminated with a non-zero exit code.

Additional Context

Operating System: macOS 10.14.6 (Mojave)
AGC Version: 1.0.0-41-gc5ac696
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

minimal.zip

Workflows Failure

Describe the Bug

Workflows are failing with EXECUTOR_ERROR

Steps to Reproduce

  • Launch all contexts in all the examples projects folder
  • In each example project, run the following command
ctx=onDemandCtx
for w in `agc workflow list | cut -f 2`; do r=`agc workflow run $w -c $ctx`; echo "$w: $r"; done
  • Run agc context deploy --all

Screenshots


Additional Context

Operating System:
AGC Version: v1.1.1
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

miniwdl engine needs to be able to terminate running and queued tasks when a failure happens

Describe the Bug

When a task fails, the miniwdl head task attempts to terminate all currently running tasks from the workflow, but the attempt fails because the head task's IAM role lacks the batch:TerminateJob permission.

Steps to Reproduce

Run a workflow with a scatter where one of the tasks returns with a non-zero return code.

Relevant Logs

example line

2021-12-02 18:57:46.475 wdl.w:BasicJointGenotyping3000SamplesChrOne.t:call-ImportGVCFs-1433 ERROR AccessDeniedException, User: arn:aws:sts::123456789012:assumed-role/Agc-Context-ScaledJointGe-miniwdlHeadBatchBatchRol-IO0I9REMOUDL/1040aebb4a584db1be4383b79cfa1cd8 is not authorized to perform: batch:TerminateJob on resource: arn:aws:batch:us-west-2: 123456789012:job/9bc7488f-e2ae-4d59-b6a5-37bdfa8be6d8

Expected Behavior

The head job's IAM policy should allow it to terminate any currently running or queued tasks from the workflow that the head task is coordinating.

Ideally, the policy should be limited to ONLY cancelling/terminating tasks created by the head task.

Actual Behavior

  • AccessDeniedException, as seen above
  • Inspection of the IAM policy shows that it doesn't have the required permission
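A hedged sketch of a policy statement that would grant the missing permission. The actions are real AWS Batch actions, but the resource scoping shown here is illustrative; ideally it would be narrowed (e.g. via tag conditions) to only the jobs the head task created:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowHeadTaskToTerminateWorkflowJobs",
      "Effect": "Allow",
      "Action": [
        "batch:TerminateJob",
        "batch:CancelJob"
      ],
      "Resource": "arn:aws:batch:*:123456789012:job/*"
    }
  ]
}
```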

Additional Context

Operating System:
AGC Version: 1.1.2
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

docker ulimits

Question:
Some of our Nextflow pipelines end up hitting nofile ulimit issues within Docker. To avoid this when using the former AWS templates, we would add the following to the provision.sh script:

# common provisioning for all workflow orchestrators
cd /opt
sh $BASEDIR/ecs-additions-common.sh
sed -i 's/nofile=.*/nofile=65536:65536"/' /etc/sysconfig/docker

Can this be set in a context?
Related issue here:
https://github.com/aws-samples/aws-genomics-workflows/issues/150

Tab completion of context name doesn't work unless using -c flag

Describe the Bug

agc context deploy -c myC[tab] will auto complete a context name

agc context deploy myC[tab] will not.

Expected Behavior

Both variants of the command should tab complete

Additional Context

Operating System: Mac OSX
AGC Version: 1.1.1
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

version upgrade doesn't take effect

I have upgraded agc to the latest released version, v1.1.0.

However, when I check for the version it still shows the old version:

$ agc --version
agc version: 1.0.1-1-gfd7b94a

Moreover, when I open a new Terminal window, I get the notification:

$ 2021-11-16T12:35:19+08:00 𝒊 New version of agc available. Current version is '1.0.1'. The latest version is '1.1.0'

Steps to Reproduce

wget https://github.com/aws/amazon-genomics-cli/releases/download/v1.1.0/amazon-genomics-cli-1.1.0.zip
unzip amazon-genomics-cli-1.1.0.zip
./amazon-genomics-cli-1.1.0.zip/install.sh

agc --version

Operating System: macOS Monterey 12.0.1
AGC Version: 1.1.0
Was AGC setup with a custom bucket: no
Was AGC setup with a custom VPC: no

Add JSON as a CLI output format

Description

Add JSON as an output format for CLI commands.

Use Case

JSON is more machine readable and amenable to tying use of AGC with other automated processes. AGC currently supports text as an output format for CLI commands. While this can be processed with standard bash tools like cut, sed, and awk, it is potentially fragile for automation.

Proposed Solution

Add a JSON formatter as one of the set of available formatters that would enable commands like:

agc workflow status --format json

Allow JSON to be the default output format via configuration:

agc configure format json
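A sketch of what a JSON formatter could emit for a single row of the current text output. The struct fields and JSON keys are assumptions for illustration, not AGC's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// workflowInstance mirrors the columns of the current tab-separated output.
type workflowInstance struct {
	Context string `json:"context"`
	RunID   string `json:"runId"`
	State   string `json:"state"`
	Name    string `json:"name"`
}

// formatJSON renders one instance as a JSON line instead of a text row,
// making the output robust to parse in automation.
func formatJSON(w workflowInstance) (string, error) {
	b, err := json.Marshal(w)
	return string(b), err
}

func main() {
	out, _ := formatJSON(workflowInstance{
		Context: "spotCtx",
		RunID:   "535b65db-1a96-44da-8001-6cf441be872d",
		State:   "COMPLETE",
		Name:    "hello",
	})
	fmt.Println(out)
}
```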

Other information

Getting error `/var/scratch/fetch_and_run.sh: Is a directory` soon after launch of workflow

Starting the workflow

Running the workflow as follows:

$ agc workflow run read --args workflow_inputs/read.inputs.json --context ctx1

Which yields

2021-09-29T15:53:37+10:00 𝒊   Running workflow. Workflow name: 'read', Arguments: 'workflow_inputs/read.inputs.json', Context: 'ctx1'
47c28bec-58b1-4aaf-abab-b7a011944e5b

Checking the status

Checking the status with

$ agc workflow status --context-name ctx1 --workflow-name read --run-id 47c28bec-58b1-4aaf-abab-b7a011944e5b --verbose

yields

2021-09-29T16:14:46+10:00 𝒊   Showing workflow run(s). Max Runs: 20
2021-09-29T16:14:46+10:00 ↓  EstablishWesConnection(https://ghe0nhjr4c.execute-api.ap-southeast-2.amazonaws.com/prod/ga4gh/wes/v1)
2021-09-29T16:14:46+10:00 ↓  Ping WES endpoint until we get an answer:
2021-09-29T16:14:46+10:00 ↓  Attempt 1 of 3
2021-09-29T16:14:46+10:00 ↓  Connected to WES endpoint
WORKFLOWINSTANCE        ctx1    47c28bec-58b1-4aaf-abab-b7a011944e5b    true    EXECUTOR_ERROR  2021-09-29T05:53:39Z    read

Checking the logs

$ agc logs workflow read --run 47c28bec-58b1-4aaf-abab-b7a011944e5b --verbose

which yields

2021-09-29T16:18:49+10:00 𝒊   Showing the logs for 'read'
2021-09-29T16:18:49+10:00 ↓  Showing logs for workflow run '47c28bec-58b1-4aaf-abab-b7a011944e5b'
2021-09-29T16:18:49+10:00 ↓  EstablishWesConnection(https://ghe0nhjr4c.execute-api.ap-southeast-2.amazonaws.com/prod/ga4gh/wes/v1)
2021-09-29T16:18:49+10:00 ↓  Ping WES endpoint until we get an answer:
2021-09-29T16:18:49+10:00 ↓  Attempt 1 of 3
2021-09-29T16:18:49+10:00 ↓  Connected to WES endpoint
Wed, 29 Sep 2021 15:55:42 +1000 /bin/bash: /var/scratch/fetch_and_run.sh: Is a directory

I've found the following related issues:

add a global --silent flag

Description

Provide a global --silent flag that suppresses all output to stderr and stdout generated before the primary command output.

Use Case

AGC has a --verbose flag but not a symmetric --silent flag. When writing automation scripts that use AGC, the "info" level outputs from AGC add unnecessary noise to logs that would be collected.

Proposed Solution

Examples without --silent:

deploy context

$ agc context deploy spotCtx
2021-12-10T23:34:33Z 𝒊  New version of agc available. Current version is '1.1.1'. The latest version is '1.1.2'
2021-12-10T23:34:33Z 𝒊  Deploying context(s)
Deploying resources for context 'spotCtx'... [____________________________________________________________________________________]6.6s

workflow status

$ agc workflow status
2021-12-10T23:32:46Z 𝒊  Showing workflow run(s). Max Runs: 20
WORKFLOWINSTANCE        spotCtx 08307c79-4646-484f-8cc1-dd9bd62981cc    true    EXECUTOR_ERROR  2021-12-01T21:11:06Z    words-with-vowels
WORKFLOWINSTANCE        spotCtx 4aece5da-a55a-4a50-a2ea-38af75253ab6    true    COMPLETE        2021-12-01T21:11:04Z    read
WORKFLOWINSTANCE        spotCtx 535b65db-1a96-44da-8001-6cf441be872d    true    COMPLETE        2021-12-01T21:11:03Z    hello
WORKFLOWINSTANCE        spotCtx 813d5afb-f036-4a34-ab3a-753333f7e539    true    COMPLETE        2021-12-01T21:11:01Z    haplotype
WORKFLOWINSTANCE        myContext       f32d5908-7b3b-4e45-852a-0624ce836a39    true    EXECUTOR_ERROR  2021-12-01T21:10:55Z    words-w

Examples With --silent:

deploy context

$ agc context deploy spotCtx --silent
# console blocks until deployment completes
# does not show progress indication
DETAIL  Agc-Context-Demo-pwymingJKP3z-spotCtx-cromwellNestedStackcromwellNestedStackResourceF3-1PZMXWV3W3HRM-ApiProxyAccessLogGroup0E050698-LU7oiUdu3HLi       s3://agc-733263974272-us-east-2/project/Demo/userid/pwymingJKP3z/context/spotCtx        Agc-Context-Demo-pwymingJKP3z-spotCtx-cromwellNestedStackcromwellNestedStackResourceF3-1PZMXWV3W3HRM-EngineLogGroup893F148F-4EP5SB3KqBW8       STARTED       /aws/lambda/Agc-Context-Demo-pwymingJ-CromwellWesAdapterLambda-dxSjjgl3ApuI      https://d1jhugndd0.execute-api.us-east-2.amazonaws.com/prod/
SUMMARY true    256     spotCtx

workflow status

$ agc workflow status --silent
WORKFLOWINSTANCE        spotCtx 08307c79-4646-484f-8cc1-dd9bd62981cc    true    EXECUTOR_ERROR  2021-12-01T21:11:06Z    words-with-vowels
WORKFLOWINSTANCE        spotCtx 4aece5da-a55a-4a50-a2ea-38af75253ab6    true    COMPLETE        2021-12-01T21:11:04Z    read
WORKFLOWINSTANCE        spotCtx 535b65db-1a96-44da-8001-6cf441be872d    true    COMPLETE        2021-12-01T21:11:03Z    hello
WORKFLOWINSTANCE        spotCtx 813d5afb-f036-4a34-ab3a-753333f7e539    true    COMPLETE        2021-12-01T21:11:01Z    haplotype
WORKFLOWINSTANCE        myContext       f32d5908-7b3b-4e45-852a-0624ce836a39    true    EXECUTOR_ERROR  2021-12-01T21:10:55Z    words-w

Other information

Error running nextflow workflow with main.nf in workflow directory

Defined a custom nextflow workflow under workflows

workflows
└── sentieon
    ├── MANIFEST.json
    ├── inputs.json
    ├── main.nf
    └── nextflow.config

In the MANIFEST.json, I had the following:

{
  "mainWorkflowURL": "./project/main.nf",
  "inputFileURLs": [
    "inputs.json"
  ],
  "engineOptions": "-resume"
}

I got an error on running the workflow (see snippet of log file below):

Archive:  ./workflow.zip
  inflating: MANIFEST.json           
  inflating: inputs.json             
  inflating: main.nf                 
  inflating: nextflow.config         
total 0
-rw-r--r-- 1 root root  109 Dec 31  1979 MANIFEST.json
-rw-r--r-- 1 root root  562 Dec 31  1979 inputs.json
-rw-r--r-- 1 root root 2780 Dec 31  1979 main.nf
-rw-r--r-- 1 root root  120 Dec 31  1979 nextflow.config
-rw-r--r-- 1 root root 1839 Oct  1 15:38 workflow.zip
cat ./project/MANIFEST.json
{
  "mainWorkflowURL": "main.nf",
  "inputFileURLs": [
    "inputs.json"
  ],
  "engineOptions": "-resume"
}
cat ./project/inputs.json
{
  "fasta_ref": "s3://genomics-reference/reference/hg38/Homo_sapiens_assembly38.fasta",
  "ref_mills": "s3://genomics-reference/reference/hg38/Mills_and_1000G_gold_standard.indels.hg38.vcf.gz",
  "ref_dbsnp": "s3://genomics-reference/reference/hg38/Homo_sapiens_assembly38.dbsnp138.vcf",
  "r1": "s3://genomics-benchmark-datasets/google-brain/fastq/novaseq/wgs_pcr_free/30x/HG001.novaseq.pcr-free.30x.R1.fastq.gz",
  "r2": "s3://genomics-benchmark-datasets/google-brain/fastq/novaseq/wgs_pcr_free/30x/HG001.novaseq.pcr-free.30x.R2.fastq.gz"
  "skip_qc": true
}
== Running Workflow ==
nextflow run main.nf -resume -params-file ./project/inputs.json
nextflow pid: 55
[1]+  Running                 nextflow run $NEXTFLOW_PROJECT $NEXTFLOW_PARAMS &
waiting ..
N E X T F L O W  ~  version 21.04.3
Not a valid project name: main.nf
=== Running Cleanup ===
=== Nextflow Log ===
Oct-01 15:41:58.815 [main] DEBUG nextflow.cli.Launcher - $> nextflow run main.nf -resume -params-file ./project/inputs.json
Oct-01 15:41:58.970 [main] INFO  nextflow.cli.CmdRun - N E X T F L O W  ~  version 21.04.3
Oct-01 15:41:59.029 [main] DEBUG nextflow.cli.Launcher - Operation aborted
nextflow.exception.AbortOperationException: Not a valid project name: main.nf
    at nextflow.scm.AssetManager.resolveName(AssetManager.groovy:261)
    at nextflow.scm.AssetManager.build(AssetManager.groovy:131)
    at nextflow.scm.AssetManager.<init>(AssetManager.groovy:109)
    at nextflow.cli.CmdRun.getScriptFile(CmdRun.groovy:360)
    at nextflow.cli.CmdRun.run(CmdRun.groovy:265)
    at nextflow.cli.Launcher.run(Launcher.groovy:475)
    at nextflow.cli.Launcher.main(Launcher.groovy:657)
== Preserving Session Cache ==
== Preserving Session Log ==
upload: ./.nextflow.log to s3://agc-189679940053-us-east-1/project/NextflowDemo/userid/srsujaya4dXQX4/context/spotContext/nextflow-execution/logs/.nextflow.log.e1d49842-184a-4b01-ba56-0b47b4153474.1
=== Bye! ===

When I ran the workflow with "project/main.nf" as the mainWorkflowURL in the MANIFEST.json, it worked properly. This seems to be a bug with an assumption on the path.
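The workaround implied above can be sketched as a path-resolution step: a bare file name passed to nextflow run is mistaken for a remote project name, so it should be prefixed with the directory the bundle is unpacked into. This is an illustration of the fix, not AGC's actual code; "project" matches the unpack directory seen in the log:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveMainWorkflow prefixes bare file names with the directory the workflow
// bundle is unpacked into, so `nextflow run` receives a real relative path
// rather than a bare name it would treat as a remote project.
func resolveMainWorkflow(mainWorkflowURL string) string {
	if strings.Contains(mainWorkflowURL, "/") {
		return mainWorkflowURL
	}
	return filepath.Join("project", mainWorkflowURL)
}

func main() {
	fmt.Println(resolveMainWorkflow("main.nf")) // project/main.nf
}
```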

Cross region replication of AGC ECR

Description

Some customers in global regions outside of the US have restrictions on pulling containers from outside regions (e.g. us-east-1). We need to provide a solution so that they can use AGC.

Use Case

So Customers in global regions (e.g. China, Singapore, etc) can use AGC in cases where they are not permitted to pull assets from outside of their home region.

Proposed Solution

https://aws.amazon.com/blogs/containers/cross-region-replication-in-amazon-ecr-has-landed/

Other information

Incorporate new instance types

Description

Add new x86 instance types to the default compute environments (M6a, M6i, C6i, etc.)

Use Case

Potential cost effectiveness improvements

Proposed Solution

Update CDK templates and update region exclusion lists so they reflect latest regional availabilities.

Cannot find module 'monocdk/lib/aws-logs/lib/log-group' or its corresponding type declarations

Describe the Bug

When I run agc account activate, I get:

2021-11-29T14:44:14+11:00 ✘  TSError: ⨯ Unable to compile TypeScript:
2021-11-29T14:44:14+11:00 ✘  ../../lib/util/index.ts(12,27): error TS2307: Cannot find module 'monocdk/lib/aws-logs/lib/log-group' or its corresponding type declarations.

(see full log below)

Steps to Reproduce

  • I installed an old version of the AGC, possibly 1.0
  • I then re-ran the installer for the AGC 1.1.1
  • Finally, I ran agc account activate as above

Relevant Logs

2021-11-29T14:44:00+11:00 𝒊  Activating AGC with bucket '' and VPC ''
Activating account... [_______________________________________________________________________________________________________________________________________________________________________________________]9.0s2021-11-29T14:44:14+11:00 ✘  /home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:750
2021-11-29T14:44:14+11:00 ✘      return new TSError(diagnosticText, diagnosticCodes);
2021-11-29T14:44:14+11:00 ✘             ^
2021-11-29T14:44:14+11:00 ✘  TSError: ⨯ Unable to compile TypeScript:
2021-11-29T14:44:14+11:00 ✘  ../../lib/util/index.ts(12,27): error TS2307: Cannot find module 'monocdk/lib/aws-logs/lib/log-group' or its corresponding type declarations.
2021-11-29T14:44:14+11:00 ✘  
2021-11-29T14:44:14+11:00 ✘      at createTSError (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:750:12)
2021-11-29T14:44:14+11:00 ✘      at reportTSError (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:754:19)
2021-11-29T14:44:14+11:00 ✘      at getOutput (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:941:36)
2021-11-29T14:44:14+11:00 ✘      at Object.compile (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1243:30)
2021-11-29T14:44:14+11:00 ✘      at Module.m._compile (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1370:30)
2021-11-29T14:44:14+11:00 ✘      at Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
2021-11-29T14:44:14+11:00 ✘      at Object.require.extensions.<computed> [as .ts] (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1374:12)
2021-11-29T14:44:14+11:00 ✘      at Module.load (node:internal/modules/cjs/loader:981:32)
2021-11-29T14:44:14+11:00 ✘      at Function.Module._load (node:internal/modules/cjs/loader:822:12)
2021-11-29T14:44:14+11:00 ✘      at Module.require (node:internal/modules/cjs/loader:1005:19) {
2021-11-29T14:44:14+11:00 ✘    diagnosticText: "../../lib/util/index.ts(12,27): error TS2307: Cannot find module 'monocdk/lib/aws-logs/lib/log-group' or its corresponding type declarations.\n",
2021-11-29T14:44:14+11:00 ✘    diagnosticCodes: [ 2307 ]
2021-11-29T14:44:14+11:00 ✘  }
2021-11-29T14:44:14+11:00 ✘  Subprocess exited with error 1
2021-11-29T14:44:14+11:00 ✘   error="exit status 1"
Error: an error occurred invoking 'account activate'
with variables: {bucketName: vpcId:}
caused by: exit status 1

Operating System:
AGC Version: 1.1.1
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

Unable to provide non-File workflow inputs using `--args`

Describe the Bug

I am not able to run a simple hello world WDL (attached) whose only input is of type String when using the --args option.
When the same input.json file is provided in the inputFileURLs section of the MANIFEST.json, the workflow is able to run successfully.

When using --args, agc seems to treat every workflow input as type File and checks whether it exists locally. If an input does not exist as a local file (as is the case with String-typed inputs), agc exits with an error rather than submitting the workflow.

This seems related to #109.

Steps to Reproduce

agc context deploy --context ctx1
agc workflow run hello_world --context ctx1 --args workflows/hello_world.inputs.json

Relevant Logs

2021-10-22T16:33:16-04:00 𝒊  Running workflow. Workflow name: 'hello_world', Arguments: 'workflows/hello_world.inputs.json', Context: 'ctx1'
2021-10-22T16:33:17-04:00 ✘   error="unable to run workflow: stat /home/heather/agc_string_error/workflows/agc team: no such file or directory"
Error: an error occurred invoking 'workflow run'
with variables: {WorkflowName:hello_world Arguments:workflows/hello_world.inputs.json ContextName:ctx1}
caused by: unable to run workflow: stat /home/heather/agc_string_error/workflows/agc team: no such file or directory

Expected Behavior

I would expect to be able to provide the same workflow inputs file I use in the MANIFEST.json under inputFileURLs as an argument to --args. Non-file type inputs should not be inferred to be type file (and agc should not check that they exist locally as a prerequisite to workflow submission).

Actual Behavior

I am not able to start a workflow that has a String-typed input using the --args option, since agc seems to treat all inputs as files and checks whether they exist locally, failing to submit the workflow when it cannot find "files" that are actually meant to be arbitrary strings.
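As a sketch of the expected behaviour (a hypothetical helper, not AGC's actual implementation), --args handling could fall back to passing a value through as a plain parameter whenever it does not name an existing local file:

```python
import json
import os


def resolve_args(args_path):
    """Split an inputs JSON into plain parameters and local file
    attachments. Hypothetical sketch: only string values that name an
    existing local file are treated as files; everything else (e.g. a
    WDL String such as "agc team") is passed through untouched."""
    with open(args_path) as f:
        inputs = json.load(f)
    params, attachments = {}, []
    for key, value in inputs.items():
        if isinstance(value, str) and os.path.isfile(value):
            attachments.append(value)  # upload alongside the workflow
        params[key] = value
    return params, attachments
```

With this approach, a String input that happens not to match a local path would no longer abort submission with a `stat ... no such file or directory` error.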

Additional Context

Operating System: Debian GNU/Linux 10 (buster)
AGC Version: agc version: 1.0.0-41-gc5ac696
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: Yes (Reused the VPC I set up for Cromwell batch)

Files to reproduce: agc_string_error.zip

`agc workflow run` always zips everything

Describe the Bug

agc workflow run will zip whatever is at the sourceURL of the workflow in question, even if it is:

  • A single file, e.g. main.nf
  • Already a zip file, e.g. bundle.zip, which contradicts the documentation

Steps to Reproduce

  • Have the following agc-project.yml:
name: SomeWorkflows
schemaVersion: 1
workflows:
  somePipe:
    type:
      language: nextflow
      version: 1.0
    sourceURL: bundle.zip
contexts:
  SomeContext:
    instanceTypes:
      - "c6g.medium"
    engines:
      - type: nextflow
        engine: nextflow
  • agc workflow run somePipe --context SomeContext
  • Grab the S3 URL of the archive from the AWS Batch console Job page
  • Download the file from S3
  • Unzip it
  • You will notice that inside workflow.zip is bundle.zip, which is not correct behaviour

Relevant Logs

n/a

Expected Behavior

workflow.zip should be identical to bundle.zip, not contain it; for example, main.nf should appear at the root of both archives.
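The expected packaging logic can be sketched as follows (a hypothetical illustration of the behaviour described above, not AGC's actual code):

```python
import os
import shutil
import zipfile


def package_workflow(source_path, dest_zip):
    """Package a workflow source for upload. Sketch of the expected
    behaviour: an existing archive is shipped as-is rather than being
    nested inside another zip."""
    if zipfile.is_zipfile(source_path):
        # Already a zip (e.g. bundle.zip): copy it byte-for-byte.
        shutil.copyfile(source_path, dest_zip)
    elif os.path.isfile(source_path):
        # A single file (e.g. main.nf): place it at the archive root.
        with zipfile.ZipFile(dest_zip, "w") as zf:
            zf.write(source_path, arcname=os.path.basename(source_path))
    else:
        # A directory: archive its contents relative to the directory root.
        with zipfile.ZipFile(dest_zip, "w") as zf:
            for root, _, files in os.walk(source_path):
                for name in files:
                    full = os.path.join(root, name)
                    zf.write(full, arcname=os.path.relpath(full, source_path))
```

Under this scheme, downloading and unzipping workflow.zip from S3 would yield main.nf directly, whether sourceURL pointed at the file or at a pre-built bundle.zip.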

Actual Behavior

workflow.zip contains bundle.zip

Screenshots

n/a

Additional Context

Operating System:
AGC Version: 1.1.2
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

miniwdl contexts writing results to root of "agc" bucket

Describe the Bug

Workflow results for contexts that use miniWDL are being written to the root of the S3 bucket configured during agc account activate (s3://<agc-bucket-name>) instead of in s3://<agc-bucket-name>/project/<project-name>/userid/<user-id>/context/<context-name>.

Steps to Reproduce

  1. Use the examples/demo-wdl-project
  2. Start the miniContext
  3. Run the hello workflow and wait until it completes
  4. Look in the "agc" bucket

Relevant Logs

Expected Behavior

Outputs that workflows generate should be written to an appropriately namespaced location - i.e.:

s3://<agc-bucket-name>/project/<project-name>/userid/<user-id>/context/<context-name>/miniwdl-execution/

Actual Behavior

There is no miniwdl execution folder where it is expected:

$ aws s3 ls agc-111122223333-us-east-2/project/Demo/userid/userJKP3z/context/miniContext/
                           PRE workflow/

Instead, it is here:

$ aws s3 ls agc-111122223333-us-east-2/
                           PRE miniwdl/
                           PRE project/
                           PRE scripts/

Screenshots

Additional Context

Operating System: Amazon Linux 2
AGC Version: 1.1.0 (nominal), 1.0.1-1-gfd7b94a (reported by --version)
Was AGC setup with a custom bucket: no
Was AGC setup with a custom VPC: no

Doesn't work with accounts using MFA

Describe the Bug

I can't activate AGC on my AWS account because the tool can't work with the MFA assume role setup that we use for access control at the UCSC Genomics Institute.

Steps to Reproduce

  1. Set up your AWS accounts. Have one AWS account where all the users are, and get an access key and secret key for your user, and put them in ~/.aws/credentials. Have another AWS account that is the one you actually want to run AGC in. Make a role there to be assumed, and set up the IAM permissions so that users in the first account can assume the role in the second account when authenticating with MFA. Get the ARN for the role.
  2. Set up an MFA device on your IAM user in the first account and get its serial.
  3. Write a ~/.aws/config file that looks something like this, defining a profile that assumes the role in the destination account:
[default]
region = us-west-2

[profile toil]
region = us-west-2
role_arn = PASTE_ARN_HERE
source_profile = default
mfa_serial = PASTE_SERIAL_HERE
  4. Try to set up AGC using the profile you defined:
agc account -v -p toil activate

Relevant Logs

$ agc account -v -p toil activate
2021-10-27T09:48:17-07:00 ☠️   error="assume role with MFA enabled, but AssumeRoleTokenProvider session option not set."

Expected Behavior

The activate command should succeed, or else complain that the role doesn't have the permissions it needs to do the setup.

Actual Behavior

Instead, AGC can't even assume the role; it looks like the piece of the Go SDK responsible for interacting with the keyboard and prompting for an MFA code is not set up. Interestingly, the exact same thing happens even when I have an active cached MFA session for the Python-based aws CLI, so AGC isn't capable of finding and using the cached assumed-role credentials in ~/.aws/cli/cache/ either.
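As a possible workaround until an AssumeRoleTokenProvider is wired up, one could mint the temporary credentials manually and hand them to agc through the standard AWS environment variables. A minimal sketch, assuming boto3 is available and the same ARN/serial placeholders as in the config above are filled in:

```python
def export_lines(creds):
    """Render an STS Credentials dict as shell 'export' lines that agc
    (which honours the standard AWS environment variables) will pick up."""
    return [
        f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}",
        f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}",
        f"export AWS_SESSION_TOKEN={creds['SessionToken']}",
    ]


# Usage (requires boto3 and real values; shown as comments only):
#   import boto3
#   resp = boto3.client("sts").assume_role(
#       RoleArn="PASTE_ARN_HERE",
#       RoleSessionName="agc-mfa",
#       SerialNumber="PASTE_SERIAL_HERE",
#       TokenCode=input("MFA code: "),
#   )
#   print("\n".join(export_lines(resp["Credentials"])))
```

Evaluating the printed export lines in the shell before running `agc account activate` should sidestep the missing MFA prompt, at the cost of repeating the dance when the session expires.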

Screenshots

Not applicable; everything is text.

Additional Context

Operating System: Linux courtyard 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 GNU/Linux
AGC Version: 1.0.0-41-gc5ac696
Was AGC setup with a custom bucket: Not set up yet
Was AGC setup with a custom VPC: Not set up yet

500 Internal Server Error

Running the tutorial WDL hello raises a 500 Internal Server Error.
This is quite uninformative for someone trying out the tool; I tried it both on a Mac and from a SageMaker instance.
In both cases I was running on an already existing VPC, and the workflow and context are defined.

agc workflow run hello --context myContext
2021-09-29T15:38:49+02:00 𝒊 Running workflow. Workflow name: 'hello', Arguments: '', Context: 'myContext'
2021-09-29T15:38:51+02:00 ✘ error="unable to run workflow: 500 Internal Server Error"
Error: an error occurred invoking 'workflow run'
with variables: {WorkflowName:hello Arguments: ContextName:myContext}
caused by: unable to run workflow: 500 Internal Server Error
suggestion: check that your workflow and context are defined for this project

`exec user process caused: exec format error` with `agc logs engine`

Describe the Bug

I'm getting standard_init_linux.go:228: exec user process caused: exec format error

Steps to Reproduce

  • Run a workflow using agc workflow run some_flow --context some_context
  • agc logs engine --context some_context

Relevant Logs

2021-12-07T14:06:06+11:00 𝒊  Showing engine logs for 'T2Context'
Tue, 07 Dec 2021 13:57:40 +1100 standard_init_linux.go:228: exec user process caused: exec format error

Expected Behavior

The logs to be shown.

Actual Behavior

The logs are not shown; instead I get this error.

Operating System:
AGC Version: 1.1.2
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

Installation docs out of date - 'latest' release does not exist, zip structure has changed

From the docs page here

curl -OLs https://github.com/aws/amazon-genomics-cli/releases/latest/download/amazon-genomics-cli.zip
unzip amazon-genomics-cli.zip -d agc
./agc/install.sh

Missing release

Running

curl -OLs https://github.com/aws/amazon-genomics-cli/releases/latest/download/amazon-genomics-cli.zip
$ cat amazon-genomics-cli.zip

Yields:

Not Found%     

These probably aren't the parameters I would use for curl either.

Using --fail or -f would at least return a non-zero exit code if the URL did not exist.

When using --silent or -s, one needs to be very confident that the URL exists or that errors are handled correctly. In this case, even with -f the user is none the wiser that the URL does not exist.

I would instead recommend using

curl -OLf https://github.com/aws/amazon-genomics-cli/releases/latest/download/amazon-genomics-cli.zip

which yields

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   157  100   157    0     0    457      0 --:--:-- --:--:-- --:--:--   456
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 404 Not Found

Zip restructure

Step 1: Download the latest release

curl -OLs https://github.com/aws/amazon-genomics-cli/releases/download/v1.1.0/amazon-genomics-cli-1.1.0.zip

Step 2: Unzip into the agc directory

unzip amazon-genomics-cli.zip -d agc

Step 3: Look in the directory

% ls agc         

Yields:

amazon-genomics-cli

./agc/install.sh does not exist!

With the latest release we can instead do:

unzip amazon-genomics-cli.zip
bash amazon-genomics-cli/install.sh 

EC2 instances generated by AGC don't have descriptive tags

Hello, I've noticed that the EC2 resources provisioned by AGC don't have descriptive tags (or names), making it difficult to keep track of resources that have been spun up by AGC.

As an example:

$ aws ec2 describe-tags --filters "Name=resource-id,Values=i-067cdd56997ed1d0f"
{
    "Tags": [
        {
            "Key": "aws:autoscaling:groupName",
            "ResourceId": "i-067cdd56997ed1d0f",
            "ResourceType": "instance",
            "Value": "TaskBatchComputeEnviron-b509e63dc5ffd5b-asg-32d3f092-50e7-332e-84a4-f28e5a3f894a"
        },
        {
            "Key": "aws:ec2launchtemplate:id",
            "ResourceId": "i-067cdd56997ed1d0f",
            "ResourceType": "instance",
            "Value": "lt-05ebe391321e46034"
        },
        {
            "Key": "aws:ec2launchtemplate:version",
            "ResourceId": "i-067cdd56997ed1d0f",
            "ResourceType": "instance",
            "Value": "1"
        }
    ]
}

Ideally, we would like to see a Name and stack tag on our AGC instances.

$ aws ec2 describe-instances --instance-ids i-067cdd56997ed1d0f
{
    "Reservations": [
        {
            "Groups": [],
            "Instances": [
                {
                    "AmiLaunchIndex": 0,
                    "ImageId": "ami-0310d0e01c1e033c0",
                    "InstanceId": "i-067cdd56997ed1d0f",
                    "InstanceType": "m5.large",
                    "LaunchTime": "2021-09-29T23:30:37+00:00",
                    "Monitoring": {
                        "State": "disabled"
                    },
                    "Placement": {
                        "AvailabilityZone": "ap-southeast-2b",
                        "GroupName": "",
                        "Tenancy": "default"
                    },
                    "PrivateDnsName": "",
                    "ProductCodes": [],
                    "PublicDnsName": "",
                    "State": {
                        "Code": 48,
                        "Name": "terminated"
                    },
                    "StateTransitionReason": "User initiated (2021-09-29 23:34:15 GMT)",
                    "Architecture": "x86_64",
                    "BlockDeviceMappings": [],
                    "ClientToken": "b605f0cf-950d-acc1-6993-44386e49162c",
                    "EbsOptimized": false,
                    "EnaSupport": true,
                    "Hypervisor": "xen",
                    "NetworkInterfaces": [],
                    "RootDeviceName": "/dev/xvda",
                    "RootDeviceType": "ebs",
                    "SecurityGroups": [],
                    "StateReason": {
                        "Code": "Client.UserInitiatedShutdown",
                        "Message": "Client.UserInitiatedShutdown: User initiated shutdown"
                    },
                    "Tags": [
                        {
                            "Key": "aws:ec2launchtemplate:id",
                            "Value": "lt-05ebe391321e46034"
                        },
                        {
                            "Key": "aws:autoscaling:groupName",
                            "Value": "TaskBatchComputeEnviron-b509e63dc5ffd5b-asg-32d3f092-50e7-332e-84a4-f28e5a3f894a"
                        },
                        {
                            "Key": "aws:ec2launchtemplate:version",
                            "Value": "1"
                        }
                    ],
                    "VirtualizationType": "hvm",
                    "CpuOptions": {
                        "CoreCount": 1,
                        "ThreadsPerCore": 2
                    },
                    "CapacityReservationSpecification": {
                        "CapacityReservationPreference": "open"
                    },
                    "HibernationOptions": {
                        "Configured": false
                    },
                    "MetadataOptions": {
                        "State": "pending",
                        "HttpTokens": "optional",
                        "HttpPutResponseHopLimit": 1,
                        "HttpEndpoint": "enabled"
                    },
                    "EnclaveOptions": {
                        "Enabled": false
                    }
                }
            ],
            "OwnerId": "843407916570",
            "RequesterId": "081202882002",
            "ReservationId": "r-019f44509c65aeb4f"
        }
    ]
}

Version 1.1.0 still reports version 1.0.1

Steps to reproduce

Step 1: Install the latest version

curl -OLs https://github.com/aws/amazon-genomics-cli/releases/download/v1.1.0/amazon-genomics-cli-1.1.0.zip
unzip amazon-genomics-cli-1.1.0.zip
bash amazon-genomics-cli/install.sh 

Step 2: Get agc version

% $HOME/bin/agc --version

Yields

agc version: 1.0.1-1-gfd7b94a

Running commands such as:

agc context list  

then comes up with stderr lines such as

2021-11-16T15:41:59+11:00 𝒊  New version of agc available. Current version is '1.0.1'. The latest version is '1.1.0'

Possible discussion about amazon-genomics-cli with Brazilian bioinformatics team using also AWS + Cromwell


Description
Hello, we are a bioinformatics team from Brazil, working at the Hospital Israelita Albert Einstein, and we have been studying your case study of the amazon-genomics-cli. We are facing several of the same issues using AWS + AWS Batch + WDL + Cromwell, and we would like to exchange ideas and experiences with you. Would that be possible? We are from https://github.com/varstation/ (our hospital's bioinformatics team). Let us know.

Congratulations on your work; it is a huge challenge to integrate all these tools with AWS in a smooth and scalable way.

By the way my name is Marcel Caraciolo.

Files in inputs json are relative to json file rather than relative to $PWD

Hello, one change I've noticed from agc release 0.81 to 1.0.0 is that input file paths are now resolved relative to the inputs JSON rather than relative to the current working directory.

Project Overview

Project yaml

For example, in the directory myproj I have the following agc-project.yaml:

Click to expand!
name: myproj
schemaVersion: 1

workflows:
  read:
    type:
      language: wdl
      version: 1.0
    sourceURL: ../../workflows/read.wdl

contexts:
    ctx1:
        engines:
            - type: wdl
              engine: cromwell

With the following directory tree:

.
├── agc-project.yaml
├── startup.log
├── workflow_inputs
│   ├── data.txt
│   └── read.inputs.json

And ../../workflows/read.wdl is:

Click to expand!
version 1.0
workflow ReadFile {
    input {
        File input_file
    }
    call read_file { input: input_file = input_file }
}

task read_file {
    input {
        File input_file
    }
    String content = read_string(input_file)

    command {
        echo '~{content}'
    }
    runtime {
        docker: "ubuntu:latest"
        memory: "4G"
    }

    output { String out = read_string( stdout() ) }
}

Inputs Overview

The contents of workflow_inputs/read.inputs.json are as follows:

{
    "ReadFile.input_file": "workflow_inputs/data.txt"
}

Workflow run command

Running the agc workflow command:

$ agc workflow run read --args workflow_inputs/read.inputs.json --context ctx1

yields

2021-09-29T15:53:06+10:00 𝒊   Running workflow. Workflow name: 'read', Arguments: 'workflow_inputs/read.inputs.json', Context: 'ctx1'
2021-09-29T15:53:07+10:00 ✘   error="unable to run workflow: stat /c/Users/awluc/OneDrive/GitHub/UMCCR/agc-dev-notes/examples/projects/myproj/workflow_inputs/workflow_inputs/data.txt: no such file or directory"
Error: an error occurred invoking 'workflow run'
with variables: {WorkflowName:read Arguments:workflow_inputs/read.inputs.json ContextName:ctx1}
caused by: unable to run workflow: stat /c/Users/awluc/OneDrive/GitHub/UMCCR/agc-dev-notes/examples/projects/myproj/workflow_inputs/workflow_inputs/data.txt: no such file or directory

Changing the input json to:

{
    "ReadFile.input_file": "data.txt"
}

resolves the error.

Given that a user may want to create an inputs JSON in temp space, I would have thought that the paths should be relative to the current working directory rather than to the location of the inputs JSON file.
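The two interpretations can be contrasted with a small hypothetical helper (illustration only, not AGC code):

```python
import os


def resolve_input_path(value, args_json_path, relative_to_cwd=True):
    """Resolve a relative path from an inputs JSON either against the
    current working directory or against the JSON file's own directory
    (the behaviour this report describes in 1.0.0)."""
    if os.path.isabs(value):
        return value
    base = (os.getcwd() if relative_to_cwd
            else os.path.dirname(os.path.abspath(args_json_path)))
    return os.path.normpath(os.path.join(base, value))
```

With relative_to_cwd=False, the value "workflow_inputs/data.txt" read from workflow_inputs/read.inputs.json resolves to the doubled .../workflow_inputs/workflow_inputs/data.txt path seen in the error above.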

Configurable cluster size in Context

Problem Statement

Currently, there is no option to expand the size of the AWS Batch compute environment that is created on the backend; it is capped at 256 vCPUs. I recommend AGC keep this as the default but allow the user to modify it when setting up a Context, in order to enable additional horizontal scaling.

Proposed Solution

NB: I am still reading through the code base, so I may be missing a couple of things. I believe the general steps below are correct, but please comment on the issue if not.

  1. Add a default CE size, similar to what you see for the ComputeType instantiation and in the compute environment. It should also be included in the FARGATE if statement.

  2. Add maxCpuCores (a naming convention chosen to better align with GA4GH TES) to the Batch creation in batch-stack.

I think those are the two main things. I'm sure there are some additional changes, especially around testing, but hopefully this is a good start. Happy to take a look myself too. I don't think we have to modify context-stack.ts because it just passes the props along to BatchStack.
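For illustration, the user-facing side of such a proposal might look like this in agc-project.yaml (maxVCpus is a hypothetical setting, not one AGC currently supports):

```yaml
contexts:
  bigCtx:
    maxVCpus: 5000   # hypothetical new field; compute environments are currently capped at 256
    engines:
      - type: nextflow
        engine: nextflow
```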

Remove requirement for Docker during Context Deploy

Description

With the addition of WES as a Lambda service, we started deploying the required Python libs with containerization (the CDK default). This adds two minutes to each deployment, occasionally gets stuck during the container build, and adds a dependency on Docker.

Use Case

We need to reduce context deployment time and make it as reliable as possible.

Proposed Solution

Look into providing the required Python libs as a layer/zip file from our public assets account, or adding them to the customer's S3 bucket during account initialization.

Other information

Allow customization of head node resources

Description

Allow the user to define system resources for the head node of the workflow, which will then be mapped to an appropriate instance in the selected context

Use Case

My Nextflow pipeline consists of many hundreds of thousands of processes. When I run it, AGC selects the smallest node available for the head node (in my context, a 2 CPU, 4 GB RAM instance), which is not sufficient in this case. I get the following error:

Wed, 08 Dec 2021 23:09:13 +1100	org.codehaus.groovy.runtime.InvokerInvocationException: java.lang.OutOfMemoryError: Java heap space
Wed, 08 Dec 2021 23:09:13 +1100		at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:113)
Wed, 08 Dec 2021 23:09:13 +1100		at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
Wed, 08 Dec 2021 23:09:13 +1100		at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1268)
Wed, 08 Dec 2021 23:09:13 +1100		at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
Wed, 08 Dec 2021 23:09:13 +1100		at org.codehaus.groovy.runtime.InvokerHelper.invokePojoMethod(InvokerHelper.java:1017)
Wed, 08 Dec 2021 23:09:13 +1100		at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:1008)
Wed, 08 Dec 2021 23:09:13 +1100		at groovyx.gpars.actor.Actor.callDynamic(Actor.java:368)
Wed, 08 Dec 2021 23:09:13 +1100		at groovyx.gpars.actor.Actor.handleException(Actor.java:339)
Wed, 08 Dec 2021 23:09:13 +1100		at groovyx.gpars.actor.AbstractLoopingActor$1.registerError(AbstractLoopingActor.java:63)
Wed, 08 Dec 2021 23:09:13 +1100		at groovyx.gpars.util.AsyncMessagingCore.run(AsyncMessagingCore.java:140)
Wed, 08 Dec 2021 23:09:13 +1100		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
Wed, 08 Dec 2021 23:09:13 +1100		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
Wed, 08 Dec 2021 23:09:13 +1100		at java.base/java.lang.Thread.run(Thread.java:829)
Wed, 08 Dec 2021 23:09:13 +1100	Caused by: java.lang.OutOfMemoryError: Java heap space
Wed, 08 Dec 2021 23:09:38 +1100	Execution aborted due to an unexpected error
Wed, 08 Dec 2021 23:10:03 +1100	Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Actor Thread 698"
Wed, 08 Dec 2021 23:10:14 +1100	Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "java-sdk-http-connection-reaper"
Wed, 08 Dec 2021 23:10:50 +1100	Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "AWSBatch-executor-457"
Wed, 08 Dec 2021 23:11:04 +1100	Execution aborted due to an unexpected error

Proposed Solution

Allow the addition of system resources to the engine key of the context:

contexts:
  someCtx:
    engines:
      - type: wdl
        engine: cromwell
        mem: 16GB

or

contexts:
  someCtx:
    engines:
      - type: wdl
        engine: cromwell
        instanceType: c5.2xlarge

Fix Golangci Lint Errors

Error: Error return value of `cmd.MarkFlagRequired` is not checked (errcheck)
  Error: ineffectual assignment to `err` (ineffassign)
  Error: ineffectual assignment to `err` (ineffassign)
  Error: composites: `github.com/aws/amazon-genomics-cli/common/aws/cwl.GetLogsInput` composite literal uses unkeyed fields (govet)
  Error: composites: `github.com/aws/amazon-genomics-cli/common/aws/cwl.GetLogsInput` composite literal uses unkeyed fields (govet)
  Error: composites: `github.com/aws/amazon-genomics-cli/common/aws/cwl.GetLogsInput` composite literal uses unkeyed fields (govet)
  Error: Error return value of `h.Write` is not checked (errcheck)
  Error: `successPrefix` is unused (deadcode)
  Error: `contexts` is unused (structcheck)
  Error: `contextStackInfo` is unused (structcheck)
  Error: `contextStatus` is unused (structcheck)
  Error: `contextInfo` is unused (structcheck)
  Error: `userId` is unused (structcheck)
  Error: `userEmail` is unused (structcheck)
  Error: `projectSpec` is unused (structcheck)
  Error: `contextSpec` is unused (structcheck)
  Error: `artifactBucket` is unused (structcheck)
  Error: `artifactUrl` is unused (structcheck)
  Error: `contextEnv` is unused (structcheck)
  Error: `wesUrl` is unused (structcheck)
  Error: `readBuckets` is unused (structcheck)
  Error: `readWriteBuckets` is unused (structcheck)
  Error: `outputBucket` is unused (structcheck)
  Error: `aId` is unused (structcheck)
  Error: `bSubStruct` is unused (structcheck)
  Error: Error return value of `os.Chdir` is not checked (errcheck)
  Error: Error return value of `os.Chdir` is not checked (errcheck)
  Error: SA5001: should check returned error before deferring specFile.Close() (staticcheck)
  Error: ineffectual assignment to `err` (ineffassign)
  Error: `gsi1PkAttrName` is unused (deadcode)
  Error: `gsi1SkAttrName` is unused (deadcode)
  Error: `origWesFactory` is unused (structcheck)
  Error: `origWesFactory` is unused (structcheck)
  Error: `projectSpec` is unused (structcheck)
  Error: `contextSpec` is unused (structcheck)
  Error: `userId` is unused (structcheck)
  Error: `instances` is unused (structcheck)
  Error: `filteredInstances` is unused (structcheck)
  Error: `instancesPerContext` is unused (structcheck)
  Error: `bucketName` is unused (structcheck)
  Error: `objectKey` is unused (structcheck)
  Error: `workflows` is unused (structcheck)
  Error: `workflowParams` is unused (structcheck)
  Error: `runId` is unused (structcheck)
  Error: `workflowSpec` is unused (structcheck)
  Error: `workflowEngine` is unused (structcheck)
  Error: `isLocal` is unused (structcheck)
  Error: `packPath` is unused (structcheck)
  Error: `input` is unused (structcheck)
  Error: `parsedSourceURL` is unused (structcheck)
  Error: `workflowUrl` is unused (structcheck)
  Error: `workflowAttachments` is unused (structcheck)
  Error: `inputUrl` is unused (structcheck)
  Error: `arguments` is unused (structcheck)
  Error: `attachments` is unused (structcheck)
  Error: `instanceToStop` is unused (structcheck)
  Error: `runContextName` is unused (structcheck)
  Error: `runLog` is unused (structcheck)
  Error: `contextStackInfo` is unused (structcheck)
  Error: `wesUrl` is unused (structcheck)
  Error: S1023: redundant `return` statement (gosimple)
  Error: func `(*Manager).getDeployedContexts` is unused (unused)
  Error: func `(*Manager).listWorkflowsFromInstances` is unused (unused)
  

inputs path for `workflow run --args` is relative to `agc-project.yaml` and not $PWD

Describe the Bug

When using agc workflow run and supplying an --args option to point to an inputs file for workflow arguments, agc expects the path to the inputs file to be relative to the agc-project.yaml file for the project. It cannot find the inputs file if specified relative to $PWD.

Steps to Reproduce

  • install AGC in $HOME
  • agc account activate
  • agc configure email
  • cd ~/agc/examples/demo-wdl-project/
  • agc context deploy -c spotCtx
  • cd ~/agc/examples/demo-wdl-project/workflows/read
  • agc workflow run read --args read.inputs.json -c spotCtx

Relevant Logs

Expected Behavior

This command should succeed if read.inputs.json is in $PWD:

$ agc workflow run read --args read.inputs.json -c spotCtx

Actual Behavior

Attempting to run the read workflow in the demo-wdl-project from within the workflows/read folder, pointing to the read.inputs.json file which is in $PWD fails:

$ cd ~/agc/examples/demo-wdl-project/workflows/read
$ ls

hello.txt  read.inputs.json  read.wdl

$ agc workflow run read --args read.inputs.json -c spotCtx
2021-10-08T19:29:50Z 𝒊  Running workflow. Workflow name: 'read', Arguments: 'read.inputs.json', Context: 'spotCtx'
2021-10-08T19:29:50Z ✘   error="unable to run workflow: couldn't read file read.inputs.json: open read.inputs.json: no such file or directory"
Error: an error occurred invoking 'workflow run'
with variables: {WorkflowName:read Arguments:read.inputs.json ContextName:spotCtx}
caused by: unable to run workflow: couldn't read file read.inputs.json: open read.inputs.json: no such file or directory

Running the workflow with the path to the read.inputs.json file relative to the agc-project.yaml file succeeds:

$ agc workflow run read --args workflows/read/read.inputs.json -c spotCtx
2021-10-08T19:31:00Z 𝒊  Running workflow. Workflow name: 'read', Arguments: 'workflows/read/read.inputs.json', Context: 'spotCtx'
30812600-fd44-4269-9fc1-c73be911b2b9

Running the workflow with the absolute path to the read.inputs.json file succeeds:

$ agc workflow run read --args `pwd`/workflows/read/read.inputs.json -c spotCtx
2021-10-08T23:05:39Z 𝒊  Running workflow. Workflow name: 'read', Arguments: '/home/cloudshell-user/agc/examples/demo-wdl-project/workflows/read/read.inputs.json', Context: 'spotCtx'
12a4dc8f-dd4b-4165-835b-051e4af45b14

Screenshots

Additional Context

Operating System: Amazon Linux 2 (CloudShell)
AGC Version: 1.0.0-41-gc5ac696
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

Unclear documentation on behaviour when source URL is a directory

In the workflows page, it seems like it is going to explain how to provide directories as workflow URLs, but it actually only explains behaviour for workflow files:

When a directory is supplied as the sourceURL, Amazon Genomics CLI uses the following rules to determine the name of the main workflow file and any supporting files:

  1. If the source URL resolves to a single non-zipped file...
  2. The source URL resolves to a zipped file (.zip)...

cannot get logs from miniwdl workflow run

Describe the Bug

Running the demo-wdl-project/hello workflow with miniWDL succeeds, but logs cannot be retrieved.

Steps to Reproduce

  1. Use the examples/demo-wdl-project
  2. Start the miniContext
  3. Run the hello workflow and wait until it completes
  4. Retrieve workflow logs with agc logs workflow hello

Relevant Logs

Expected Behavior

Retrieve the log output for tasks in the workflow when running agc logs workflow

Actual Behavior

$ agc logs workflow hello -v
2021-11-12T06:30:56Z ↓  Checking AGC version...
2021-11-12T06:30:56Z 𝒊  Showing the logs for 'hello'
2021-11-12T06:30:56Z ↓  EstablishWesConnection(https://5u1j2xwf29.execute-api.us-east-2.amazonaws.com/prod/ga4gh/wes/v1)
2021-11-12T06:30:56Z ↓  Ping WES endpoint until we get an answer:
2021-11-12T06:30:56Z ↓  Attempt 1 of 3
2021-11-12T06:30:56Z ↓  Connected to WES endpoint
2021-11-12T06:30:56Z 𝒊  Showing logs for the latest run of the workflow. Run id: '0638355c-aab0-4adb-9599-730f4df9df89'
2021-11-12T06:30:56Z ↓  Showing logs for workflow run '0638355c-aab0-4adb-9599-730f4df9df89'
2021-11-12T06:30:56Z ↓  EstablishWesConnection(https://5u1j2xwf29.execute-api.us-east-2.amazonaws.com/prod/ga4gh/wes/v1)
2021-11-12T06:30:56Z ↓  Ping WES endpoint until we get an answer:
2021-11-12T06:30:56Z ↓  Attempt 1 of 3
2021-11-12T06:30:56Z ↓  Connected to WES endpoint
2021-11-12T06:30:56Z ↓  Unable to parse log time '2021-11-12T06:28:41.376000' to ISO 8601, skipping
2021-11-12T06:30:56Z ↓  Unable to parse log time '2021-11-12T06:28:41.552000' to ISO 8601, skipping
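The "Unable to parse log time" warnings above suggest the miniwdl timestamps (e.g. '2021-11-12T06:28:41.376000') are rejected because they carry no timezone designator. A lenient parser sketch (illustration only, not AGC's Go code) that accepts them by assuming UTC:

```python
from datetime import datetime, timezone


def parse_wes_time(stamp):
    """Parse a WES log timestamp, tolerating a missing timezone by
    assuming UTC (miniwdl appears to emit naive ISO 8601 timestamps)."""
    parsed = datetime.fromisoformat(stamp)
    if parsed.tzinfo is None:
        parsed = parsed.replace(tzinfo=timezone.utc)
    return parsed
```

A fallback along these lines would let the log lines be ordered and displayed instead of skipped.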

Screenshots

Additional Context

Operating System: Amazon Linux 2
AGC Version: 1.1.0 (nominal), 1.0.1-1-gfd7b94a (reported by --version)
Was AGC setup with a custom bucket: no
Was AGC setup with a custom VPC: no

compute environment MaxvCPUs

Question:
When we run very large Nextflow workflows through Batch, we often need significantly more than 256 vCPUs within a compute environment to avoid extremely long run times and to get the full benefit of cloud resources. The compute environments that started with my context were set at 256. I know this is one of the few settings that can be changed on the fly in the AWS console, but could it be set as part of the context, similar to the way instance types can be selected? Typically we set ours at 5000.

cannot get output from miniwdl workflow run

Describe the Bug

Running the demo-wdl-project/hello workflow with miniWDL succeeds, but workflow output is empty.

Steps to Reproduce

  1. Use the examples/demo-wdl-project
  2. Start the miniContext
  3. Run the hello workflow and wait until it completes
  4. Retrieve workflow output with agc workflow output

Relevant Logs

Expected Behavior

When I run the hello workflow using a context that uses Cromwell I get the following:

$ agc workflow output da599f1c-2343-4fa8-91e1-656f19ee4ffa
2021-11-12T07:16:19Z 𝒊  Obtaining final outputs for workflow runId 'da599f1c-2343-4fa8-91e1-656f19ee4ffa'
OUTPUT	id	da599f1c-2343-4fa8-91e1-656f19ee4ffa
OUTPUT	outputs.hello_agc.hello.out	Hello Amazon Genomics CLI!

The WES GetRunLog response for this workflow is:

$ awscurl https://tns6dzlxa4.execute-api.us-east-2.amazonaws.com/prod/ga4gh/wes/v1/runs/da599f1c-2343-4fa8-91e1-656f19ee4ffa
{
  "outputs": {
    "id": "da599f1c-2343-4fa8-91e1-656f19ee4ffa",
    "outputs": {
      "hello_agc.hello.out": "Hello Amazon Genomics CLI!"
    }
  },
  "request": {
    "workflow_params": {},
    "workflow_type": "WDL",
    "workflow_type_version": "1.0"
  },
  "run_id": "da599f1c-2343-4fa8-91e1-656f19ee4ffa",
  "state": "COMPLETE",
  "task_logs": [
    {
      "cmd": [
        "echo \"Hello Amazon Genomics CLI!\""
      ],
      "end_time": "2021-11-12T06:53:00.198Z",
      "exit_code": 0,
      "name": "hello_agc.hello|bdd91c98-8f64-41eb-b52f-594051e6fa8c",
      "start_time": "2021-11-12T06:50:20.362Z",
      "stderr": "s3://agc-111122223333-us-east-2/project/Demo/userid/userJKP3z/context/myContext/cromwell-execution/hello_agc/da599f1c-2343-4fa8-91e1-656f19ee4ffa/call-hello/hello-stderr.log",
      "stdout": "s3://agc-111122223333-us-east-2/project/Demo/userid/userJKP3z/context/myContext/cromwell-execution/hello_agc/da599f1c-2343-4fa8-91e1-656f19ee4ffa/call-hello/hello-stdout.log"
    }
  ]
}

Actual Behavior

$ agc workflow output 0638355c-aab0-4adb-9599-730f4df9df89
2021-11-12T07:16:50Z 𝒊  Obtaining final outputs for workflow runId '0638355c-aab0-4adb-9599-730f4df9df89'
OUTPUT	id	0638355c-aab0-4adb-9599-730f4df9df89

The WES GetRunLog response for this workflow is:

$ awscurl https://5u1j2xwf29.execute-api.us-east-2.amazonaws.com/prod/ga4gh/wes/v1/runs/0638355c-aab0-4adb-9599-730f4df9df89
{
  "outputs": {
    "id": "0638355c-aab0-4adb-9599-730f4df9df89"
  },
  "run_id": "0638355c-aab0-4adb-9599-730f4df9df89",
  "run_log": {
    "cmd": [
      "s3://agc-111122223333-us-east-2/project/Demo/userid/userJKP3z/context/miniContext/workflow/hello/workflow.zip"
    ],
    "end_time": "2021-11-12T06:29:08.782000",
    "exit_code": 0,
    "name": "agc-run-workflow|0638355c-aab0-4adb-9599-730f4df9df89",
    "start_time": "2021-11-12T06:25:58.517000",
    "stdout": "MiniWdlEngineMiniwdlHea-d75dc54fb817c66/default/9c9c6b54f7bf4b64ba00f7c26c8400d6"
  },
  "state": "COMPLETE",
  "task_logs": [
    {
      "cmd": [
        "/bin/bash",
        "-c",
        "cd /mnt/efs/0638355c-aab0-4adb-9599-730f4df9df89/1/call-hello/work && bash ../command >> ../stdout.txt 2> >(tee -a ../stderr.txt >&2) && sync"
      ],
      "end_time": "2021-11-12T06:28:41.552000",
      "exit_code": 0,
      "name": "hello-scrqmpng|ea75c2a7-4a12-40a8-a7dc-2355cd8c8ce2",
      "start_time": "2021-11-12T06:28:41.376000",
      "stdout": "hello-scrqmpng/default/7c507faef625404ca3464797230cc6f0"
    }
  ]
}
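The difference between the two responses is in the `outputs` object: the Cromwell response nests a populated `outputs` map under it, while the miniwdl response carries only the run `id`. A minimal sketch of flattening that object into the key/value rows `agc workflow output` prints (the flattening logic here is an assumption for illustration, not AGC's actual Go implementation):

```python
# Flatten the "outputs" object of a WES GetRunLog-style response into
# (key, value) rows, illustrating why the miniwdl run prints no outputs.

def flatten_outputs(run_log: dict, prefix: str = "") -> list[tuple[str, str]]:
    rows = []
    for key, value in run_log.get("outputs", {}).items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # nested map: recurse with a dotted prefix
            rows.extend(flatten_outputs({"outputs": value}, prefix=f"{name}."))
        else:
            rows.append((name, str(value)))
    return rows

# Trimmed versions of the two responses shown above:
cromwell_response = {
    "outputs": {
        "id": "da599f1c-2343-4fa8-91e1-656f19ee4ffa",
        "outputs": {"hello_agc.hello.out": "Hello Amazon Genomics CLI!"},
    }
}
miniwdl_response = {"outputs": {"id": "0638355c-aab0-4adb-9599-730f4df9df89"}}

print(flatten_outputs(cromwell_response))
# The miniwdl response contains only the run id, so no workflow outputs survive:
print(flatten_outputs(miniwdl_response))
```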

Screenshots

Additional Context

Operating System: Amazon Linux 2
AGC Version: 1.1.0 (nominal), 1.0.1-1-gfd7b94a (reported by --version)
Was AGC setup with a custom bucket: no
Was AGC setup with a custom VPC: no

Add support for S3 server-side-encrypted-KMS (SSE-KMS)

Description

AGC cannot be used if server-side encryption with KMS (SSE-KMS) is enforced in the AWS account.

Steps to reproduce:

  • Perform cdk bootstrap -> BucketA (generates the CDK S3 bootstrap bucket)
  • Create your data bucket, e.g. s3://hidden95 -> BucketB
  • Add a bucket policy enforcing KMS encryption on upload of content (the AWS-recommended way to enforce SSE-KMS encryption on upload):
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::hidden95/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::hidden95/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}
  • agc account activate --bucket hidden95
  • agc context deploy --context spotContext

Result: agc fails, since it cannot upload to the S3 bucket without adding the SSE-KMS header to the upload.

Use Case

Large companies often use S3 bucket policies to prevent the upload of unencrypted objects to S3, as recommended in Amazon's SSE-KMS guidance.

This means that AGC in its current state cannot be used by these companies.

Proposed Solution

Suggested by @elliot-smith
To enable SSE-KMS, AGC will need the following modifications:

  • The AGC command that uploads S3 assets will need to optionally supply the required "x-amz-server-side-encryption" header
  • The Batch instances will need the "kms:Decrypt" IAM permission so that objects can be decrypted when downloaded
  • The documentation will need to be updated to specify this, and most likely to call out that enabling this functionality requires additional minimum IAM policies
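The first bullet can be sketched as follows. With the AWS SDKs, the "x-amz-server-side-encryption" header corresponds to the `ServerSideEncryption` upload argument (plus an optional `SSEKMSKeyId`). AGC itself is written in Go; this Python/boto3 sketch is only an illustration of the upload change being proposed, not AGC code:

```python
# Build the S3 upload arguments an SSE-KMS-enforcing bucket policy requires.

def sse_kms_extra_args(kms_key_id=None) -> dict:
    """ExtraArgs that satisfy a DenyIncorrectEncryptionHeader bucket policy."""
    args = {"ServerSideEncryption": "aws:kms"}
    if kms_key_id:
        # Omit the key id to use the account's default aws/s3 managed key.
        args["SSEKMSKeyId"] = kms_key_id
    return args

# Usage (requires AWS credentials; bucket and key names are illustrative):
# import boto3
# s3 = boto3.client("s3")
# s3.upload_file("workflow.zip", "hidden95", "project/workflow.zip",
#                ExtraArgs=sse_kms_extra_args())
```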

Other information

Feature Request: SSO Support

AGC does not recognise credentials obtained by logging in via AWS SSO.

$ aws sso login

$ printenv | grep AWS_
AWS_PROFILE=dev
AWS_DEFAULT_REGION=ap-southeast-2

$ agc context status
Error: operation error CloudFormation: ListStacks, https response error StatusCode: 403, RequestID: 5ebed11d-6a00-474d-bd02-d59763506258, api error ExpiredToken: The security token included in the request is expired

Workaround

In the meantime I can work around this by using yawsso to export the SSO login credentials as environment variables:

$ source <(yawsso -p "${AWS_PROFILE}" -e)

$ printenv | grep AWS_
AWS_PROFILE=dev
AWS_DEFAULT_REGION=ap-southeast-2
AWS_SECRET_ACCESS_KEY=...
AWS_ACCESS_KEY_ID=...
AWS_SESSION_TOKEN=...

$ agc context status
INSTANCE        ctx1            STARTED true 

installation instructions need correction

The doc says:

unzip amazon-genomics-cli.zip -d agc
./agc/install.sh

which results in:
-bash: ./agc/install.sh: No such file or directory

This is because the latest release (v1.1.0) unzips the main folder into an amazon-genomics-cli subfolder under agc (on macOS):

unzip amazon-genomics-cli.zip -d agc
ls agc/
amazon-genomics-cli

so the correct invocation is ./agc/amazon-genomics-cli/install.sh.

Ability to change EBS volume type

Description
Most genomics jobs do not need an SSD volume, yet a General Purpose SSD (gp3) volume is currently attached by default. Using Throughput Optimized HDD (st1) volumes instead could save costs. We need the ability to change the volume type of the disk according to the workflow.

Proposed Solution
When deploying the context, allow the data volume type to be provided.
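A context entry for the proposed setting might look like the sketch below. Note that `dataVolumeType` is a hypothetical key used only to illustrate the request; it does not exist in AGC today:

```yaml
contexts:
  hddContext:
    # hypothetical key: attach Throughput Optimized HDD (st1)
    # data volumes instead of the default gp3 SSD
    dataVolumeType: st1
    engines:
      - type: nextflow
        engine: nextflow
```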

Design for better `--args` experience

Description

The use of --args overlaps with the multi-file workflow spec; however, it was not considered when the spec was designed, leading to inconsistencies and a lack of clarity around how it should be handled with respect to things such as inputs.json, options.json, MANIFEST.json, etc.

Proposed Solution

Produce a design doc and, once approved, create follow-up issues for the implementation of the design.

`The stack named Agc-Context-MyWorkflow--MyContext failed creation`

Describe the Bug
I'm getting The stack named Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context failed creation when I try to deploy my context.

The only concrete error I can see is:

2021-12-07T10:49:15+11:00 ✘ ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

Steps to Reproduce

  • Have this context:

    contexts:
      T2Context:
        instanceTypes:
          - "t2.micro"
          - "t2.small"
          - "t2.medium"
          - "t2.large"
        engines:
          - type: nextflow
            engine: nextflow
    
  • agc context deploy --context T2Context

Relevant Logs
Note: I've replaced my workflow name and project ID with MyWorkflow and WORKFLOW_ID respectively:

2021-12-07T10:41:58+11:00 𝒊  Deploying context(s)
Deploying resources for context 'T2Context'... [7m14s]
2021-12-07T10:49:15+11:00 ✘  [WARNING] monocdk.Arn#parse is deprecated.
2021-12-07T10:49:15+11:00 ✘    use split instead
2021-12-07T10:49:15+11:00 ✘    This API will be removed in the next major release.
[... the same monocdk.Arn#parse deprecation warning repeated a further 7 times ...]
2021-12-07T10:49:15+11:00 ✘  Sending build context to Docker daemon  3.584kB
2021-12-07T10:49:15+11:00 ✘  Step 1/9 : ARG IMAGE=public.ecr.aws/sam/build-python3.7
2021-12-07T10:49:15+11:00 ✘  Step 2/9 : FROM $IMAGE
2021-12-07T10:49:15+11:00 ✘   ---> fa475cc11d6b
2021-12-07T10:49:15+11:00 ✘  Step 3/9 : RUN yum -q list installed rsync &>/dev/null || yum install -y rsync
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> c0e7d0a0944f
2021-12-07T10:49:15+11:00 ✘  Step 4/9 : RUN pip install --upgrade pip
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 9c65231c00fa
2021-12-07T10:49:15+11:00 ✘  Step 5/9 : RUN pip install pipenv poetry
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> cdb7eca2cbd6
2021-12-07T10:49:15+11:00 ✘  Step 6/9 : WORKDIR /var/dependencies
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> d865968c4a9c
2021-12-07T10:49:15+11:00 ✘  Step 7/9 : COPY Pipfile* pyproject* poetry* requirements.tx[t] ./
2021-12-07T10:49:15+11:00 ✘   ---> 478f78ae1e51
2021-12-07T10:49:15+11:00 ✘  Step 8/9 : RUN [ -f 'Pipfile' ] && pipenv lock -r >requirements.txt;     [ -f 'poetry.lock' ] && poetry export --with-credentials --format requirements.txt --output requirements.txt;     [ -f 'requirements.txt' ] && pip install -r requirements.txt -t .;
2021-12-07T10:49:15+11:00 ✘   ---> Running in a06cdf935597
2021-12-07T10:49:15+11:00 ✘  Collecting boto3
2021-12-07T10:49:15+11:00 ✘    Downloading boto3-1.20.21-py3-none-any.whl (131 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting boto3-stubs[batch,logs,resourcegroupstaggingapi,s3]
2021-12-07T10:49:15+11:00 ✘    Downloading boto3_stubs-1.20.21-py3-none-any.whl (55 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting connexion
2021-12-07T10:49:15+11:00 ✘    Downloading connexion-2.9.0-py2.py3-none-any.whl (84 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting docker
2021-12-07T10:49:15+11:00 ✘    Downloading docker-5.0.3-py2.py3-none-any.whl (146 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting flask
2021-12-07T10:49:15+11:00 ✘    Downloading Flask-2.0.2-py3-none-any.whl (95 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting paste
2021-12-07T10:49:15+11:00 ✘    Downloading Paste-3.5.0-py2.py3-none-any.whl (593 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting pytest
2021-12-07T10:49:15+11:00 ✘    Downloading pytest-6.2.5-py3-none-any.whl (280 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting python_dateutil
2021-12-07T10:49:15+11:00 ✘    Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting requests
2021-12-07T10:49:15+11:00 ✘    Using cached requests-2.26.0-py2.py3-none-any.whl (62 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting serverless-wsgi
2021-12-07T10:49:15+11:00 ✘    Downloading serverless_wsgi-2.0.2-py2.py3-none-any.whl (10 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting setuptools
2021-12-07T10:49:15+11:00 ✘    Downloading setuptools-59.5.0-py3-none-any.whl (952 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting swagger-ui-bundle
2021-12-07T10:49:15+11:00 ✘    Downloading swagger_ui_bundle-0.0.9-py3-none-any.whl (6.2 MB)
2021-12-07T10:49:15+11:00 ✘  Collecting werkzeug
2021-12-07T10:49:15+11:00 ✘    Downloading Werkzeug-2.0.2-py3-none-any.whl (288 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting jmespath<1.0.0,>=0.7.1
2021-12-07T10:49:15+11:00 ✘    Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting s3transfer<0.6.0,>=0.5.0
2021-12-07T10:49:15+11:00 ✘    Downloading s3transfer-0.5.0-py3-none-any.whl (79 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting botocore<1.24.0,>=1.23.21
2021-12-07T10:49:15+11:00 ✘    Downloading botocore-1.23.21-py3-none-any.whl (8.4 MB)
2021-12-07T10:49:15+11:00 ✘  Collecting botocore-stubs
2021-12-07T10:49:15+11:00 ✘    Downloading botocore_stubs-1.23.21-py3-none-any.whl (41 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting mypy-boto3-logs>=1.20.0
2021-12-07T10:49:15+11:00 ✘    Downloading mypy_boto3_logs-1.20.1-py3-none-any.whl (29 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting mypy-boto3-s3>=1.20.0
2021-12-07T10:49:15+11:00 ✘    Downloading mypy_boto3_s3-1.20.17-py3-none-any.whl (84 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting mypy-boto3-resourcegroupstaggingapi>=1.20.0
2021-12-07T10:49:15+11:00 ✘    Downloading mypy_boto3_resourcegroupstaggingapi-1.20.1-py3-none-any.whl (20 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting mypy-boto3-batch>=1.20.0
2021-12-07T10:49:15+11:00 ✘    Downloading mypy_boto3_batch-1.20.10-py3-none-any.whl (29 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting clickclick<21,>=1.2
2021-12-07T10:49:15+11:00 ✘    Downloading clickclick-20.10.2-py2.py3-none-any.whl (7.4 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting jsonschema<4,>=2.5.1
2021-12-07T10:49:15+11:00 ✘    Downloading jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting werkzeug
2021-12-07T10:49:15+11:00 ✘    Downloading Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting PyYAML<6,>=5.1
2021-12-07T10:49:15+11:00 ✘    Downloading PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting inflection<0.6,>=0.3.1
2021-12-07T10:49:15+11:00 ✘    Downloading inflection-0.5.1-py2.py3-none-any.whl (9.5 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting flask
2021-12-07T10:49:15+11:00 ✘    Downloading Flask-1.1.4-py2.py3-none-any.whl (94 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting openapi-spec-validator<0.4,>=0.2.4
2021-12-07T10:49:15+11:00 ✘    Downloading openapi_spec_validator-0.3.1-py3-none-any.whl (32 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting websocket-client>=0.32.0
2021-12-07T10:49:15+11:00 ✘    Downloading websocket_client-1.2.1-py2.py3-none-any.whl (52 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting itsdangerous<2.0,>=0.24
2021-12-07T10:49:15+11:00 ✘    Downloading itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting click<8.0,>=5.1
2021-12-07T10:49:15+11:00 ✘    Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting Jinja2<3.0,>=2.10.1
2021-12-07T10:49:15+11:00 ✘    Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting six>=1.4.0
2021-12-07T10:49:15+11:00 ✘    Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting packaging
2021-12-07T10:49:15+11:00 ✘    Downloading packaging-21.3-py3-none-any.whl (40 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting py>=1.8.2
2021-12-07T10:49:15+11:00 ✘    Downloading py-1.11.0-py2.py3-none-any.whl (98 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting attrs>=19.2.0
2021-12-07T10:49:15+11:00 ✘    Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting toml
2021-12-07T10:49:15+11:00 ✘    Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting iniconfig
2021-12-07T10:49:15+11:00 ✘    Downloading iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting pluggy<2.0,>=0.12
2021-12-07T10:49:15+11:00 ✘    Downloading pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting charset-normalizer~=2.0.0
2021-12-07T10:49:15+11:00 ✘    Downloading charset_normalizer-2.0.9-py3-none-any.whl (39 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting idna<4,>=2.5
2021-12-07T10:49:15+11:00 ✘    Using cached idna-3.3-py3-none-any.whl (61 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting certifi>=2017.4.17
2021-12-07T10:49:15+11:00 ✘    Using cached certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting urllib3<1.27,>=1.21.1
2021-12-07T10:49:15+11:00 ✘    Using cached urllib3-1.26.7-py2.py3-none-any.whl (138 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting serverless-wsgi
2021-12-07T10:49:15+11:00 ✘    Downloading serverless_wsgi-2.0.1-py2.py3-none-any.whl (10 kB)
2021-12-07T10:49:15+11:00 ✘    Downloading serverless_wsgi-2.0.0-py2.py3-none-any.whl (10 kB)
2021-12-07T10:49:15+11:00 ✘    Downloading serverless_wsgi-1.7.8-py2.py3-none-any.whl (10 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting MarkupSafe>=0.23
2021-12-07T10:49:15+11:00 ✘    Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting pyrsistent>=0.14.0
2021-12-07T10:49:15+11:00 ✘    Downloading pyrsistent-0.18.0-cp39-cp39-manylinux1_x86_64.whl (117 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting openapi-schema-validator
2021-12-07T10:49:15+11:00 ✘    Downloading openapi_schema_validator-0.1.5-py3-none-any.whl (7.9 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting pyparsing!=3.0.5,>=2.0.2
2021-12-07T10:49:15+11:00 ✘    Using cached pyparsing-3.0.6-py3-none-any.whl (97 kB)
2021-12-07T10:49:15+11:00 ✘  Collecting isodate
2021-12-07T10:49:15+11:00 ✘    Downloading isodate-0.6.0-py2.py3-none-any.whl (45 kB)
2021-12-07T10:49:15+11:00 ✘  Installing collected packages: six, setuptools, pyrsistent, attrs, urllib3, python-dateutil, MarkupSafe, jsonschema, jmespath, isodate, werkzeug, PyYAML, pyparsing, openapi-schema-validator, Jinja2, itsdangerous, idna, click, charset-normalizer, certifi, botocore-stubs, botocore, websocket-client, toml, s3transfer, requests, py, pluggy, packaging, openapi-spec-validator, mypy-boto3-s3, mypy-boto3-resourcegroupstaggingapi, mypy-boto3-logs, mypy-boto3-batch, iniconfig, inflection, flask, clickclick, boto3-stubs, swagger-ui-bundle, serverless-wsgi, pytest, paste, docker, connexion, boto3
2021-12-07T10:49:15+11:00 ✘  ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
2021-12-07T10:49:15+11:00 ✘  poetry 1.1.12 requires packaging<21.0,>=20.4, but you have packaging 21.3 which is incompatible.
2021-12-07T10:49:15+11:00 ✘  Successfully installed Jinja2-2.11.3 MarkupSafe-2.0.1 PyYAML-5.4.1 attrs-21.2.0 boto3-1.20.21 boto3-stubs-1.20.21 botocore-1.23.21 botocore-stubs-1.23.21 certifi-2021.10.8 charset-normalizer-2.0.9 click-7.1.2 clickclick-20.10.2 connexion-2.9.0 docker-5.0.3 flask-1.1.4 idna-3.3 inflection-0.5.1 iniconfig-1.1.1 isodate-0.6.0 itsdangerous-1.1.0 jmespath-0.10.0 jsonschema-3.2.0 mypy-boto3-batch-1.20.10 mypy-boto3-logs-1.20.1 mypy-boto3-resourcegroupstaggingapi-1.20.1 mypy-boto3-s3-1.20.17 openapi-schema-validator-0.1.5 openapi-spec-validator-0.3.1 packaging-21.3 paste-3.5.0 pluggy-1.0.0 py-1.11.0 pyparsing-3.0.6 pyrsistent-0.18.0 pytest-6.2.5 python-dateutil-2.8.2 requests-2.26.0 s3transfer-0.5.0 serverless-wsgi-1.7.8 setuptools-59.5.0 six-1.16.0 swagger-ui-bundle-0.0.9 toml-0.10.2 urllib3-1.26.7 websocket-client-1.2.1 werkzeug-1.0.1
2021-12-07T10:49:15+11:00 ✘  WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
2021-12-07T10:49:15+11:00 ✘  Removing intermediate container a06cdf935597
2021-12-07T10:49:15+11:00 ✘   ---> ce10a8f8f437
2021-12-07T10:49:15+11:00 ✘  Step 9/9 : CMD [ "python" ]
2021-12-07T10:49:15+11:00 ✘   ---> Running in afa01f30899e
2021-12-07T10:49:15+11:00 ✘  Removing intermediate container afa01f30899e
2021-12-07T10:49:15+11:00 ✘   ---> 74e946a87f23
2021-12-07T10:49:15+11:00 ✘  Successfully built 74e946a87f23
2021-12-07T10:49:15+11:00 ✘  Successfully tagged cdk-05d5ae453ca1ce5ca1517b5308e98ee9d4691ae8ad1ee6d0b76f534ed71072db:latest
2021-12-07T10:49:15+11:00 ✘  Bundling asset Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context/nextflow/NextflowWesAdapterLambda/Code/Stage...
2021-12-07T10:49:15+11:00 ✘  [WARNING] monocdk.Arn#parse is deprecated.
2021-12-07T10:49:15+11:00 ✘    use split instead
2021-12-07T10:49:15+11:00 ✘    This API will be removed in the next major release.
[... the same monocdk.Arn#parse deprecation warning repeated a further 7 times ...]
2021-12-07T10:49:15+11:00 ✘  Sending build context to Docker daemon  3.584kB
2021-12-07T10:49:15+11:00 ✘  Step 1/9 : ARG IMAGE=public.ecr.aws/sam/build-python3.7
2021-12-07T10:49:15+11:00 ✘  Step 2/9 : FROM $IMAGE
2021-12-07T10:49:15+11:00 ✘   ---> fa475cc11d6b
2021-12-07T10:49:15+11:00 ✘  Step 3/9 : RUN yum -q list installed rsync &>/dev/null || yum install -y rsync
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> c0e7d0a0944f
2021-12-07T10:49:15+11:00 ✘  Step 4/9 : RUN pip install --upgrade pip
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 9c65231c00fa
2021-12-07T10:49:15+11:00 ✘  Step 5/9 : RUN pip install pipenv poetry
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> cdb7eca2cbd6
2021-12-07T10:49:15+11:00 ✘  Step 6/9 : WORKDIR /var/dependencies
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> d865968c4a9c
2021-12-07T10:49:15+11:00 ✘  Step 7/9 : COPY Pipfile* pyproject* poetry* requirements.tx[t] ./
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 478f78ae1e51
2021-12-07T10:49:15+11:00 ✘  Step 8/9 : RUN [ -f 'Pipfile' ] && pipenv lock -r >requirements.txt;     [ -f 'poetry.lock' ] && poetry export --with-credentials --format requirements.txt --output requirements.txt;     [ -f 'requirements.txt' ] && pip install -r requirements.txt -t .;
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> ce10a8f8f437
2021-12-07T10:49:15+11:00 ✘  Step 9/9 : CMD [ "python" ]
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 74e946a87f23
2021-12-07T10:49:15+11:00 ✘  Successfully built 74e946a87f23
2021-12-07T10:49:15+11:00 ✘  Successfully tagged cdk-d1deb6dba117c334160b977f9fe1582fc0d53cf22774b7d889dbb5a5cfad2f21:latest
2021-12-07T10:49:15+11:00 ✘  [WARNING] monocdk.Arn#parse is deprecated.
2021-12-07T10:49:15+11:00 ✘    use split instead
2021-12-07T10:49:15+11:00 ✘    This API will be removed in the next major release.
[... the same monocdk.Arn#parse deprecation warning repeated a further 7 times ...]
2021-12-07T10:49:15+11:00 ✘  Sending build context to Docker daemon  3.584kB
2021-12-07T10:49:15+11:00 ✘  Step 1/9 : ARG IMAGE=public.ecr.aws/sam/build-python3.7
2021-12-07T10:49:15+11:00 ✘  Step 2/9 : FROM $IMAGE
2021-12-07T10:49:15+11:00 ✘   ---> fa475cc11d6b
2021-12-07T10:49:15+11:00 ✘  Step 3/9 : RUN yum -q list installed rsync &>/dev/null || yum install -y rsync
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> c0e7d0a0944f
2021-12-07T10:49:15+11:00 ✘  Step 4/9 : RUN pip install --upgrade pip
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 9c65231c00fa
2021-12-07T10:49:15+11:00 ✘  Step 5/9 : RUN pip install pipenv poetry
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> cdb7eca2cbd6
2021-12-07T10:49:15+11:00 ✘  Step 6/9 : WORKDIR /var/dependencies
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> d865968c4a9c
2021-12-07T10:49:15+11:00 ✘  Step 7/9 : COPY Pipfile* pyproject* poetry* requirements.tx[t] ./
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 478f78ae1e51
2021-12-07T10:49:15+11:00 ✘  Step 8/9 : RUN [ -f 'Pipfile' ] && pipenv lock -r >requirements.txt;     [ -f 'poetry.lock' ] && poetry export --with-credentials --format requirements.txt --output requirements.txt;     [ -f 'requirements.txt' ] && pip install -r requirements.txt -t .;
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> ce10a8f8f437
2021-12-07T10:49:15+11:00 ✘  Step 9/9 : CMD [ "python" ]
2021-12-07T10:49:15+11:00 ✘   ---> Using cache
2021-12-07T10:49:15+11:00 ✘   ---> 74e946a87f23
2021-12-07T10:49:15+11:00 ✘  Successfully built 74e946a87f23
2021-12-07T10:49:15+11:00 ✘  Successfully tagged cdk-462a8af19e4855dd01fc4a658332faf675c6e1acbe939921a8dd6ca92cad7d02:latest
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context: deploying...
2021-12-07T10:49:15+11:00 ✘  [0%] start: Publishing 44dc0e30693c9b69437e7bea1bd8e1eaeb85c0f16060547b613fe57ae6d3fce2:current
2021-12-07T10:49:15+11:00 ✘  [25%] success: Published 44dc0e30693c9b69437e7bea1bd8e1eaeb85c0f16060547b613fe57ae6d3fce2:current
2021-12-07T10:49:15+11:00 ✘  [25%] start: Publishing b120b13d9d868c7622e7db1b68bae4c0f82ffd0227b8c15f2cef38e186ff3827:current
2021-12-07T10:49:15+11:00 ✘  [50%] success: Published b120b13d9d868c7622e7db1b68bae4c0f82ffd0227b8c15f2cef38e186ff3827:current
2021-12-07T10:49:15+11:00 ✘  [50%] start: Publishing 9fea3ac0fe4353b8a3748fd578172d38d33c8b04ae6cd7203f400b318fd84aee:current
2021-12-07T10:49:15+11:00 ✘  [75%] success: Published 9fea3ac0fe4353b8a3748fd578172d38d33c8b04ae6cd7203f400b318fd84aee:current
2021-12-07T10:49:15+11:00 ✘  [75%] start: Publishing 685fbaae8ed2b4482614c4988a7b19a673f4f538b9c22baff73c36a5dfad914e:current
2021-12-07T10:49:15+11:00 ✘  [100%] success: Published 685fbaae8ed2b4482614c4988a7b19a673f4f538b9c22baff73c36a5dfad914e:current
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context: creating CloudFormation changeset...
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:44:43 am | REVIEW_IN_PROGRESS   | AWS::CloudFormation::Stack | Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context User Initiated
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:44:52 am | CREATE_IN_PROGRESS   | AWS::CloudFormation::Stack | Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context User Initiated
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:45:22 am | CREATE_IN_PROGRESS   | AWS::CDK::Metadata         | Batch/CDKMetadata/Default (CDKMetadata) 
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:45:22 am | CREATE_IN_PROGRESS   | AWS::CloudFormation::Stack | Batch.NestedStack/Batch.NestedStackResource (BatchNestedStackBatchNestedStackResourceAE129AA6) 
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:45:23 am | CREATE_IN_PROGRESS   | AWS::CloudFormation::Stack | Batch.NestedStack/Batch.NestedStackResource (BatchNestedStackBatchNestedStackResourceAE129AA6) Resource creation Initiated
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:45:24 am | CREATE_IN_PROGRESS   | AWS::CDK::Metadata         | Batch/CDKMetadata/Default (CDKMetadata) Resource creation Initiated
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 1/4 | 10:45:24 am | CREATE_COMPLETE      | AWS::CDK::Metadata         | Batch/CDKMetadata/Default (CDKMetadata) 
2021-12-07T10:49:15+11:00 ✘  1/4 Currently in progress: Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context, BatchNestedStackBatchNestedStackResourceAE129AA6
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 1/4 | 10:48:27 am | CREATE_FAILED        | AWS::CloudFormation::Stack | Batch.NestedStack/Batch.NestedStackResource (BatchNestedStackBatchNestedStackResourceAE129AA6) Embedded stack arn:aws:cloudformation:us-east-1:ACCOUNT_ID:stack/Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Contex-BatchNestedStackBatchNestedStackResourceAE-1JU3NSIXH19J7/945beba0-56ee-11ec-9317-0a73f83d45f9 was not successfully created: The following resource(s) failed to create: [TaskBatchComputeEnvironmentC3913A23]. 
2021-12-07T10:49:15+11:00 ✘     new NestedStack (/home/migwell/.agc/cdk/node_modules/monocdk/core/lib/nested-stack.ts:77:21)
2021-12-07T10:49:15+11:00 ✘     \_ new BatchStack (/home/migwell/.agc/cdk/lib/stacks/nested/batch-stack.ts:35:5)
2021-12-07T10:49:15+11:00 ✘     \_ ContextStack.renderBatchStack (/home/migwell/.agc/cdk/lib/stacks/context-stack.ts:118:12)
2021-12-07T10:49:15+11:00 ✘     \_ ContextStack.renderNextflowStack (/home/migwell/.agc/cdk/lib/stacks/context-stack.ts:63:29)
2021-12-07T10:49:15+11:00 ✘     \_ new ContextStack (/home/migwell/.agc/cdk/lib/stacks/context-stack.ts:33:14)
2021-12-07T10:49:15+11:00 ✘     \_ Object.<anonymous> (/home/migwell/.agc/cdk/apps/context/app.ts:14:1)
2021-12-07T10:49:15+11:00 ✘     \_ Module._compile (node:internal/modules/cjs/loader:1101:14)
2021-12-07T10:49:15+11:00 ✘     \_ Module.m._compile (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1371:23)
2021-12-07T10:49:15+11:00 ✘     \_ Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
2021-12-07T10:49:15+11:00 ✘     \_ Object.require.extensions.<computed> [as .ts] (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1374:12)
2021-12-07T10:49:15+11:00 ✘     \_ Module.load (node:internal/modules/cjs/loader:981:32)
2021-12-07T10:49:15+11:00 ✘     \_ Function.Module._load (node:internal/modules/cjs/loader:822:12)
2021-12-07T10:49:15+11:00 ✘     \_ Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
2021-12-07T10:49:15+11:00 ✘     \_ main (/home/migwell/.agc/cdk/node_modules/ts-node/src/bin.ts:331:12)
2021-12-07T10:49:15+11:00 ✘     \_ Object.<anonymous> (/home/migwell/.agc/cdk/node_modules/ts-node/src/bin.ts:482:3)
2021-12-07T10:49:15+11:00 ✘     \_ Module._compile (node:internal/modules/cjs/loader:1101:14)
2021-12-07T10:49:15+11:00 ✘     \_ Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
2021-12-07T10:49:15+11:00 ✘     \_ Module.load (node:internal/modules/cjs/loader:981:32)
2021-12-07T10:49:15+11:00 ✘     \_ Function.Module._load (node:internal/modules/cjs/loader:822:12)
2021-12-07T10:49:15+11:00 ✘     \_ Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
2021-12-07T10:49:15+11:00 ✘     \_ node:internal/main/run_main_module:17:47
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 1/4 | 10:48:28 am | ROLLBACK_IN_PROGRESS | AWS::CloudFormation::Stack | Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context The following resource(s) failed to create: [BatchNestedStackBatchNestedStackResourceAE129AA6]. Rollback requested by user.
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 1/4 | 10:48:56 am | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | Batch.NestedStack/Batch.NestedStackResource (BatchNestedStackBatchNestedStackResourceAE129AA6) 
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 1/4 | 10:48:56 am | DELETE_IN_PROGRESS   | AWS::CDK::Metadata         | Batch/CDKMetadata/Default (CDKMetadata) 
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 0/4 | 10:48:58 am | DELETE_COMPLETE      | AWS::CDK::Metadata         | Batch/CDKMetadata/Default (CDKMetadata) 
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 1/4 | 10:49:07 am | DELETE_COMPLETE      | AWS::CloudFormation::Stack | Batch.NestedStack/Batch.NestedStackResource (BatchNestedStackBatchNestedStackResourceAE129AA6) 
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 2/4 | 10:49:08 am | ROLLBACK_COMPLETE    | AWS::CloudFormation::Stack | Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context 
2021-12-07T10:49:15+11:00 ✘  
2021-12-07T10:49:15+11:00 ✘  Failed resources:
2021-12-07T10:49:15+11:00 ✘  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context | 10:48:27 am | CREATE_FAILED        | AWS::CloudFormation::Stack | Batch.NestedStack/Batch.NestedStackResource (BatchNestedStackBatchNestedStackResourceAE129AA6) Embedded stack arn:aws:cloudformation:us-east-1:ACCOUNT_ID:stack/Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Contex-BatchNestedStackBatchNestedStackResourceAE-1JU3NSIXH19J7/945beba0-56ee-11ec-9317-0a73f83d45f9 was not successfully created: The following resource(s) failed to create: [TaskBatchComputeEnvironmentC3913A23]. 
2021-12-07T10:49:15+11:00 ✘     new NestedStack (/home/migwell/.agc/cdk/node_modules/monocdk/core/lib/nested-stack.ts:77:21)
2021-12-07T10:49:15+11:00 ✘     \_ new BatchStack (/home/migwell/.agc/cdk/lib/stacks/nested/batch-stack.ts:35:5)
2021-12-07T10:49:15+11:00 ✘     \_ ContextStack.renderBatchStack (/home/migwell/.agc/cdk/lib/stacks/context-stack.ts:118:12)
2021-12-07T10:49:15+11:00 ✘     \_ ContextStack.renderNextflowStack (/home/migwell/.agc/cdk/lib/stacks/context-stack.ts:63:29)
2021-12-07T10:49:15+11:00 ✘     \_ new ContextStack (/home/migwell/.agc/cdk/lib/stacks/context-stack.ts:33:14)
2021-12-07T10:49:15+11:00 ✘     \_ Object.<anonymous> (/home/migwell/.agc/cdk/apps/context/app.ts:14:1)
2021-12-07T10:49:15+11:00 ✘     \_ Module._compile (node:internal/modules/cjs/loader:1101:14)
2021-12-07T10:49:15+11:00 ✘     \_ Module.m._compile (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1371:23)
2021-12-07T10:49:15+11:00 ✘     \_ Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
2021-12-07T10:49:15+11:00 ✘     \_ Object.require.extensions.<computed> [as .ts] (/home/migwell/.agc/cdk/node_modules/ts-node/src/index.ts:1374:12)
2021-12-07T10:49:15+11:00 ✘     \_ Module.load (node:internal/modules/cjs/loader:981:32)
2021-12-07T10:49:15+11:00 ✘     \_ Function.Module._load (node:internal/modules/cjs/loader:822:12)
2021-12-07T10:49:15+11:00 ✘     \_ Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
2021-12-07T10:49:15+11:00 ✘     \_ main (/home/migwell/.agc/cdk/node_modules/ts-node/src/bin.ts:331:12)
2021-12-07T10:49:15+11:00 ✘     \_ Object.<anonymous> (/home/migwell/.agc/cdk/node_modules/ts-node/src/bin.ts:482:3)
2021-12-07T10:49:15+11:00 ✘     \_ Module._compile (node:internal/modules/cjs/loader:1101:14)
2021-12-07T10:49:15+11:00 ✘     \_ Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
2021-12-07T10:49:15+11:00 ✘     \_ Module.load (node:internal/modules/cjs/loader:981:32)
2021-12-07T10:49:15+11:00 ✘     \_ Function.Module._load (node:internal/modules/cjs/loader:822:12)
2021-12-07T10:49:15+11:00 ✘     \_ Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
2021-12-07T10:49:15+11:00 ✘     \_ node:internal/main/run_main_module:17:47
Deploying resources for context 'T2Context'... 7m14s
2021-12-07T10:49:15+11:00 ✘   ❌  Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context failed: Error: The stack named Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
2021-12-07T10:49:15+11:00 ✘      at Object.waitForStackDeploy (/home/migwell/.agc/cdk/node_modules/aws-cdk/lib/api/util/cloudformation.ts:307:11)
2021-12-07T10:49:15+11:00 ✘      at processTicksAndRejections (node:internal/process/task_queues:96:5)
2021-12-07T10:49:15+11:00 ✘      at prepareAndExecuteChangeSet (/home/migwell/.agc/cdk/node_modules/aws-cdk/lib/api/deploy-stack.ts:351:26)
2021-12-07T10:49:15+11:00 ✘      at CdkToolkit.deploy (/home/migwell/.agc/cdk/node_modules/aws-cdk/lib/cdk-toolkit.ts:194:24)
2021-12-07T10:49:15+11:00 ✘      at initCommandLine (/home/migwell/.agc/cdk/node_modules/aws-cdk/bin/cdk.ts:267:9)
2021-12-07T10:49:15+11:00 ✘  The stack named Agc-Context-MyWorkflow-michaelrmi2UTCAG-T2Context failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
2021-12-07T10:49:15+11:00 ✘  failed to deploy context 'T2Context' error="exit status 1"
2021-12-07T10:49:15+11:00 ✘   error="one or more contexts failed to deploy"
Error: an error occurred invoking 'context deploy'
with variables: {contexts:[T2Context] deployAll:false}
caused by: one or more contexts failed to deploy

Expected Behavior

The context deploys successfully.

Actual Behavior

The nested Batch stack fails to create (CREATE_FAILED on TaskBatchComputeEnvironmentC3913A23) and the context stack rolls back, as shown in the logs above.

Additional Context

Operating System:
AGC Version: 1.1.2
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No

Zip file should nest all files in a single folder

Running the following command

$ unzip amazon-genomics-cli.zip
$ ls -lsrth
total 96M
4.0K -rwxr-xr-x 1 alexiswl alexiswl  323 Oct  2 07:47 uninstall.sh
4.0K -rwxr-xr-x 1 alexiswl alexiswl 1.3K Oct  2 07:47 install.sh
4.0K drwxr-xr-x 6 alexiswl alexiswl 4.0K Oct  2 07:47 examples
  8M -rwxr-xr-x 1 alexiswl alexiswl  28M Oct  2 07:51 agc
26M -rwxr-xr-x 1 alexiswl alexiswl  26M Oct  2 07:52 agc-amd64
22M -rwxr-xr-x 1 alexiswl alexiswl  22M Oct  2 07:52 agc.exe
  20K -rw-r--r-- 1 alexiswl alexiswl  17K Oct  2 07:52 cdk.tgz
160K -rw-r--r-- 1 alexiswl alexiswl 158K Oct  2 07:52 THIRD-PARTY
 21M -rw-r--r-- 1 alexiswl alexiswl  21M Oct  2 12:35 amazon-genomics-cli.zip

Ideally, all of these files would be extracted into a single amazon-genomics-cli folder instead.
See http://www.linfo.org/tarbomb.html for more information.
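As a workaround until the release archive nests its contents, the zip can be extracted into a dedicated directory with unzip -d, so a flat archive cannot scatter files into the working directory. A minimal sketch (a stand-in zip is built on the fly here, since the real ~96 MB release archive is not needed to demonstrate the technique):

```shell
# Build a small stand-in for amazon-genomics-cli.zip (illustrative only;
# in practice you would download the real release archive instead).
python3 - <<'EOF'
import zipfile
with zipfile.ZipFile('amazon-genomics-cli.zip', 'w') as z:
    z.writestr('agc', 'stand-in binary')
    z.writestr('install.sh', '#!/bin/sh\n')
EOF

# The workaround: always extract into a dedicated directory. -d sets the
# extraction root, so the archive's flat contents land inside the folder
# rather than alongside the zip.
mkdir -p amazon-genomics-cli
unzip -q amazon-genomics-cli.zip -d amazon-genomics-cli
```

After extraction, agc and install.sh sit under amazon-genomics-cli/ instead of polluting the current directory.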

Document manifest file

Description

Documentation explaining the manifest file and its contents specifically would be helpful. It's currently mentioned incidentally on a few pages, but has no dedicated section.

Use Case

I want to write a manifest file, but am unsure how to structure it.

Proposed Solution

Description of all the keys in the manifest file and their meaning, in a dedicated page of the documentation.
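Until a dedicated page exists, a minimal example may help. The snippet below reflects the MANIFEST.json files shipped with the AGC example projects (field names taken from those examples; treat this as illustrative rather than an authoritative schema):

```json
{
  "mainWorkflowURL": "main.wdl",
  "inputFileURLs": [
    "inputs.json"
  ],
  "optionFileURL": "options.json"
}
```

Here mainWorkflowURL points at the workflow entrypoint relative to the project, inputFileURLs lists input parameter files, and optionFileURL supplies engine options.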

maxVCpus property doesn't change the default cluster size.

Describe the Bug

Possible regression. maxVCpus property doesn't change the default cluster size.

  spotCtx:
    requestSpotInstances: true
    maxVCpus: 400
    engines:
      - type: wdl
        engine: miniwdl

This should create a compute environment with a maximum of 400 vCPUs, but the deployed environment is capped at the default of 256.

Steps to Reproduce

Using version 1.1.1

  1. Deploy a context configured as above.
  2. Examine the compute environment's maximum vCPUs in the AWS Batch console
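Step 2 can also be done from the command line with the AWS CLI; the compute environment name is generated by AGC and will differ per deployment, so the query below lists all environments with their vCPU ceilings (a verification sketch, not part of AGC itself):

```
aws batch describe-compute-environments \
  --query 'computeEnvironments[].[computeEnvironmentName,computeResources.maxvCpus]' \
  --output table
```

With the bug present, the affected environment reports maxvCpus of 256 rather than 400.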

Relevant Logs

Expected Behavior

A compute environment with a maximum of 400 vCPUs.

Actual Behavior

A compute environment with the default maximum of 256 vCPUs.

Screenshots

Additional Context

Operating System: macOS 11.6.1
AGC Version: 1.1.1
Was AGC setup with a custom bucket: No
Was AGC setup with a custom VPC: No
