chip-seq-pipeline2's Introduction

ENCODE Transcription Factor and Histone ChIP-Seq processing pipeline

Introduction

This ChIP-seq pipeline is based on the ENCODE (phase-3) transcription factor and histone ChIP-seq pipeline specifications (by Anshul Kundaje) in this Google Doc.

Features

  • Portability: The pipeline can be run on different cloud platforms (Google Cloud, AWS and DNAnexus) as well as on cluster engines (SLURM, SGE and PBS).
  • User-friendly HTML report: In addition to the standard outputs, the pipeline generates an HTML report with a tabular representation of quality metrics, including alignment/peak statistics and FRiP, along with many useful plots (IDR/cross-correlation measures). See an example HTML report and the JSON file used to generate it.
  • Supported genomes: The pipeline needs genome-specific data such as aligner indices, a chromosome-sizes file and a blacklist. We provide a genome database downloader/builder for hg38, hg19, mm10 and mm9. You can also use this builder to build a genome database from a FASTA file for your custom genome (a hypothetical invocation is sketched below).
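
For a custom genome, the builder is driven by a FASTA file. The invocation sketched below is hypothetical; the script name and arguments are assumptions based on the repository's scripts/ directory, so check the genome database documentation in the repo for the exact interface.

    # hypothetical sketch; verify the script name/arguments against the repo's docs
    $ bash scripts/build_genome_data.sh [GENOME_NAME] [DESTINATION_DIR]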

Installation

  1. Install Caper (Python Wrapper/CLI for Cromwell).

    $ pip install caper
  2. IMPORTANT: Read Caper's README carefully to choose a backend for your system, then follow the instructions in the configuration file.

    # backend: local or your HPC type (e.g. slurm, sge, pbs, lsf). read Caper's README carefully.
    $ caper init [YOUR_BACKEND]
    
    # IMPORTANT: edit the conf file and follow commented instructions in there
    $ vi ~/.caper/default.conf
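
    For reference, a SLURM-flavored ~/.caper/default.conf might look roughly like the sketch below. The exact keys are written by caper init for your backend; the partition/account values are placeholders, so keep whatever your generated file contains.

    # minimal sketch of an init-generated SLURM conf (keys can differ by Caper version)
    backend=slurm
    slurm-partition=YOUR_PARTITION
    slurm-account=YOUR_ACCOUNT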
  3. Git clone this pipeline.

    $ cd
    $ git clone https://github.com/ENCODE-DCC/chip-seq-pipeline2
  4. Define test input JSON.

    INPUT_JSON="https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.json"
  5. If you have Docker and want to run the pipeline locally on your laptop: --max-concurrent-tasks 1 limits the number of concurrent tasks so that you can test-run the pipeline on a laptop. Remove it when running on a workstation/HPC.

    # check if Docker works on your machine
    $ docker run ubuntu:latest echo hello
    
    # --max-concurrent-tasks 1 is for computers with limited resources
    $ caper run chip.wdl -i "${INPUT_JSON}" --docker --max-concurrent-tasks 1
  6. Otherwise, install Singularity on your system. Please follow these instructions to install Singularity on a Debian-based OS, or ask your system administrator to install Singularity on your HPC.

    # check if Singularity works on your machine
    $ singularity exec docker://ubuntu:latest echo hello
    
    # on your local machine (--max-concurrent-tasks 1 is for computers with limited resources)
    $ caper run chip.wdl -i "${INPUT_JSON}" --singularity --max-concurrent-tasks 1
    
    # on HPC, make sure that Caper's conf ~/.caper/default.conf is correctly configured to work with your HPC
    # the following command will submit Caper as a leader job to SLURM with Singularity
    $ caper hpc submit chip.wdl -i "${INPUT_JSON}" --singularity --leader-job-name ANY_GOOD_LEADER_JOB_NAME
    
    # check job ID and status of your leader jobs
    $ caper hpc list
    
    # cancel the leader job to terminate all of its child jobs
    # if you directly use a cluster command like scancel or qdel,
    # then child jobs will not be terminated
    $ caper hpc abort [JOB_ID]
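
    If you prefer the scheduler's own view (assuming SLURM here), the leader job and its child jobs are ordinary cluster jobs:

    # the leader job carries the name given via --leader-job-name;
    # the pipeline's tasks show up as separate child jobs
    $ squeue -u $USER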
  7. (Optional Conda method) WE DO NOT HELP USERS FIX CONDA DEPENDENCY ISSUES. IF CONDA METHOD FAILS THEN PLEASE USE SINGULARITY METHOD INSTEAD. DO NOT USE A SHARED CONDA. INSTALL YOUR OWN MINICONDA3 AND USE IT.

    # check that you are not using a shared conda; if you are, remove it from your PATH
    $ which conda
    
    # uninstall pipeline's old environments
    $ bash scripts/uninstall_conda_env.sh
    
    # install new environments; you need to re-run this for every pipeline version update.
    # it may be killed if you run it on an HPC login node.
    # it's recommended to start an interactive node with enough resources and run it there.
    $ bash scripts/install_conda_env.sh
    
    # if installation fails please use Singularity method instead.
    
    # on your local machine (--max-concurrent-tasks 1 is for computers with limited resources)
    $ caper run chip.wdl -i "${INPUT_JSON}" --conda --max-concurrent-tasks 1
    
    # on HPC, make sure that Caper's conf ~/.caper/default.conf is correctly configured to work with your HPC
    # the following command will submit Caper as a leader job to SLURM with Conda
    $ caper hpc submit chip.wdl -i "${INPUT_JSON}" --conda --leader-job-name ANY_GOOD_LEADER_JOB_NAME
    
    # check job ID and status of your leader jobs
    $ caper hpc list
    
    # cancel the leader job to terminate all of its child jobs
    # if you directly use a cluster command like scancel or qdel,
    # then child jobs will not be terminated
    $ caper hpc abort [JOB_ID]

Input JSON file

IMPORTANT: DO NOT BLINDLY USE A TEMPLATE/EXAMPLE INPUT JSON. READ THROUGH THE FOLLOWING GUIDE TO MAKE A CORRECT INPUT JSON FILE.

An input JSON file specifies all the input parameters and files that are necessary for successfully running this pipeline. This includes the paths to the genome reference files and the raw-data FASTQ files. Please make sure to specify absolute paths rather than relative paths in your input JSON files. (A rough example is sketched after the specification links below.)

  1. Input JSON file specification (short)
  2. Input JSON file specification (long)
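
As a rough illustration only (field names follow the pipeline's chip.* input naming, e.g. chip.pipeline_type and chip.genome_tsv; verify every field against the specifications above, and treat all paths below as placeholders):

$ cat > my_input.json << 'EOF'
{
    "chip.title": "MY_EXPERIMENT",
    "chip.pipeline_type": "tf",
    "chip.genome_tsv": "/absolute/path/to/genome/hg38.tsv",
    "chip.paired_end": true,
    "chip.ctl_paired_end": true,
    "chip.fastqs_rep1_R1": ["/absolute/path/to/rep1_R1.fastq.gz"],
    "chip.fastqs_rep1_R2": ["/absolute/path/to/rep1_R2.fastq.gz"],
    "chip.ctl_fastqs_rep1_R1": ["/absolute/path/to/ctl1_R1.fastq.gz"],
    "chip.ctl_fastqs_rep1_R2": ["/absolute/path/to/ctl1_R2.fastq.gz"]
}
EOF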

Running on Terra/Anvil (using Dockstore)

Visit our pipeline repo on Dockstore. Click on Terra or Anvil. Follow Terra's instructions to create a workspace on Terra and add Terra's billing bot to your Google Cloud account.

Download this test input JSON for Terra, upload it via Terra's UI, and then run the analysis.

If you want to use your own input JSON file, make sure that all files referenced in it are on a Google Cloud Storage bucket (gs://). URLs will not work.
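
For example, you could stage local FASTQs with gsutil and then reference the resulting gs:// paths in the JSON (the bucket name below is a placeholder):

$ gsutil cp rep1_R1.fastq.gz gs://YOUR_BUCKET/chip/rep1_R1.fastq.gz
# then use "gs://YOUR_BUCKET/chip/rep1_R1.fastq.gz" in the input JSON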

Running on DNAnexus (using Dockstore)

Sign up for a new account on DNAnexus and create a new project on either AWS or Azure. Visit our pipeline repo on Dockstore. Click on DNAnexus. Choose a destination directory in your DNAnexus project. Click on Submit and visit DNAnexus. This submits a conversion job whose status you can check under Monitor in the DNAnexus UI.

Once the conversion is done, download the input JSON file that matches your chosen platform (AWS or Azure) for your DNAnexus project.

You cannot use these input JSON files directly. Go to the destination directory on DNAnexus and click on the converted workflow chip. You will see input file boxes on the left-hand side of the task graph. Expand them and define FASTQs (fastq_repX_R1 and fastq_repX_R2) and genome_tsv as in the downloaded input JSON file. Click on the common task box and define the other non-file pipeline parameters, e.g. pipeline_type, paired_end and ctl_paired_end.

We have a separate project on DNAnexus that provides example FASTQs and genome_tsv for hg38 and mm10 (plus chr19-only versions of both; use the chr19-only versions for testing). We recommend making copies of these directories into your own project:

  • genome_tsv
  • Example FASTQs

Running on DNAnexus (using our pre-built workflows)

See this for details.

Running and sharing on Truwl

You can run this pipeline on truwl.com. This provides a web interface that allows you to define inputs and parameters, run the job on GCP, and monitor progress. To run it, you will need to create an account on the platform and then request early access by emailing [email protected] to get the right permissions. You can see example cases from this repo at https://truwl.com/workflows/instance/WF_dd6938.8f.340f/command and https://truwl.com/workflows/instance/WF_dd6938.8f.8aa3/command. These example jobs (or other jobs) can be forked to pre-populate the inputs for your own job.

If you do not run the pipeline on Truwl, you can still share your use-case/job on the platform by getting in touch at [email protected] and providing your inputs.json file.

How to organize outputs

Install Croo. You can skip this installation if you have installed the pipeline's Conda environment and activated it. Make sure that you have Python 3 (> 3.4.1) installed on your system. Find the metadata.json in Caper's output directory.

$ pip install croo
$ croo [METADATA_JSON_FILE]
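
For example, assuming Caper's default local output directory (the path below is an assumption; check your Caper configuration for the actual output location):

$ find ~/caper_out -name metadata.json    # locate the metadata.json of your run
$ croo ~/caper_out/chip/[WORKFLOW_ID]/metadata.json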

How to make a spreadsheet of QC metrics

Install qc2tsv. Make sure that you have Python 3 (> 3.4.1) installed on your system.

Once you have organized the output with Croo, you will be able to find the pipeline's final output file qc/qc.json, which contains all QC metrics. Simply feed multiple qc.json files to qc2tsv. It accepts various URIs: local paths, gs:// and s3://.

$ pip install qc2tsv
$ qc2tsv /sample1/qc.json gs://sample2/qc.json s3://sample3/qc.json ... > spreadsheet.tsv

QC metrics for each experiment (qc.json) will be split into multiple rows (one for the overall experiment plus one for each biological replicate) in the spreadsheet.
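
For example, to collect every qc.json under the current directory in one pass:

$ find . -name qc.json | xargs qc2tsv > spreadsheet.tsv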

Troubleshooting

See this document for troubleshooting. I will keep updating it with errors reported by users.

chip-seq-pipeline2's People

Contributors

annashcherbina, ksebby, leepc12, mauriziopaul, meenakshikagda, strattan


chip-seq-pipeline2's Issues

resumer.py fails on metadata.json generated for a histone ChIP-seq run

@leepc12 The pipeline was run on Sherlock.

  1. The original JSON input file is:

/oak/stanford/groups/akundaje/projects/GECCO/scacheri_46_h3k27ac_chipseq/output_hg19/589N/589N.json

The resulting metadata file is:
/oak/stanford/groups/akundaje/projects/GECCO/scacheri_46_h3k27ac_chipseq/output_hg19/589N/metadata.json

The output log file is:
/oak/stanford/groups/akundaje/projects/GECCO/scacheri_46_h3k27ac_chipseq/logs_hg19/589N.o

The error log file is:
/oak/stanford/groups/akundaje/projects/GECCO/scacheri_46_h3k27ac_chipseq/logs_hg19/589N.e

(Basically, the job was cancelled because it hit the 24-hour time limit; I want to restart it and allocate more time.)

  2. I am running:
 python ~/chip-seq-pipeline2/utils/resumer/resumer.py /oak/stanford/groups/akundaje/projects/GECCO/scacheri_46_h3k27ac_chipseq/output_hg19/589N/metadata.json

The code fails with the following stack trace:

Traceback (most recent call last):
  File "/home/users/annashch/chip-seq-pipeline2/utils/resumer/resumer.py", line 100, in <module>
    main()
  File "/home/users/annashch/chip-seq-pipeline2/utils/resumer/resumer.py", line 86, in main
    workflow_id, org_input_json, calls = parse_cromwell_metadata_json_file(args.metadata_json_file)
  File "/home/users/annashch/chip-seq-pipeline2/utils/resumer/resumer.py", line 44, in parse_cromwell_metadata_json_file
    org_input_json = json.loads(metadata_json['submittedFiles']['inputs'], object_pairs_hook=OrderedDict)
  File "/scratch/users/annashch/miniconda3/lib/python3.6/json/__init__.py", line 367, in loads
    return cls(**kw).decode(s)
  File "/scratch/users/annashch/miniconda3/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/scratch/users/annashch/miniconda3/lib/python3.6/json/decoder.py", line 355, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 9 column 1 (char 1168)

dcc spp callpeak error

I used conda to install dependencies and tried running the pipeline with cromwell-38. I got the following error message:

[2019-06-13 12:11:16,40] [info] BackgroundConfigAsyncJobExecutionActor [60e7fe00 chip.spp_pr2:0:1]: job id: 20957
[2019-06-13 12:11:16,40] [info] BackgroundConfigAsyncJobExecutionActor [60e7fe00 chip.spp_pr2:0:1]: Status change from - to Done
[2019-06-13 12:12:29,96] [info] BackgroundConfigAsyncJobExecutionActor [60e7fe00 chip.macs2:1:1]: Status change from WaitingForReturnCode to Done
[2019-06-13 12:12:30,64] [error] WorkflowManagerActor Workflow 60e7fe00-99a7-4c88-a4e4-e41bac251a21 failed (during ExecutingWorkflowState): Job chip.spp:0:1 exited with return code 2 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
Check the content of stderr for potential additional information: /gpfs/fs1/data/shenlab/lw/chip-seq-pipeline2/cromwell-executions/chip/60e7fe00-99a7-4c88-a4e4-e41bac251a21/call-spp/shard-0/execution/stderr.
usage: ENCODE DCC spp callpeak [-h] [--chrsz CHRSZ] --fraglen FRAGLEN
[--cap-num-peak CAP_NUM_PEAK] --blacklist
BLACKLIST [--keep-irregular-chr] [--nth NTH]
[--out-dir OUT_DIR]
[--log-level {NOTSET,DEBUG,INFO,WARNING,CRITICAL,ERROR,CRITICAL}]
tas tas
ENCODE DCC spp callpeak: error: too few arguments

If this is a conda dependency error I can switch to singularity.

SyntaxError in early stages

On JSON files that previously ran without issues, I am now getting an error: a command appears to be submitted that triggers a Python syntax error. Running with conda 4.6.14 and Cromwell 34, locally on qlogin. Here's the specific error:

[2019-08-14 16:43:05,68] [error] WorkflowManagerActor Workflow 3848d0ac-e20b-4990-9767-e7be125d9cdb failed (during ExecutingWorkflowState): Job chip.merge_fastq_ctl:0:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
Check the content of stderr for potential additional information: /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq_ctl/shard-0/execution/stderr.
   File "/net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq_ctl/shard-0/execution/write_tsv_23c0aaf69298859c0524049ae9d633b4.tmp", line 1
    /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq_ctl/shard-0/inputs/-2129304561/190422Boy_D19-3814_NA_sequence.fastq.gz
    ^
SyntaxError: invalid syntax

Job chip.merge_fastq_ctl:1:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
Check the content of stderr for potential additional information: /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq_ctl/shard-1/execution/stderr.
   File "/net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq_ctl/shard-1/execution/write_tsv_e811db57824b4b92a7589ec48a15739e.tmp", line 1
    /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq_ctl/shard-1/inputs/-1395404184/190711Boy_D19-7698_NA_sequence.fastq.gz
    ^
SyntaxError: invalid syntax

Job chip.merge_fastq:0:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
Check the content of stderr for potential additional information: /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq/shard-0/execution/stderr.
   File "/net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq/shard-0/execution/write_tsv_56f71273653d6e64b060b2f1ee90945c.tmp", line 1
    /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/3848d0ac-e20b-4990-9767-e7be125d9cdb/call-merge_fastq/shard-0/inputs/-650451612/190711Boy_D19-7694_NA_sequence.fastq.gz
    ^
SyntaxError: invalid syntax

Error tarball attached.
debug_84.tar.gz

How to build genomes other than human and mouse

Describe the bug
A clear and concise description of what the problem is.

OS/Platform and dependencies

  • OS or Platform: [e.g. Ubuntu 16.04, Google Cloud, Stanford Sherlock/SCG cluster, ...]
  • Cromwell/dxWDL version: [e.g. cromwell-30, dxWDL-60.2]
  • Conda version: If you have used Conda.

Attach logs
For Cromwell users only.

  1. Move to your working directory where you ran a pipeline. You should be able to find a directory named cromwell-executions/ which includes all outputs and logs for debugging.

  2. Run the following command to collect all logs. For the developers' convenience, please add [ISSUE_ID] to the name of the tarball file. This command will generate a tarball including all debugging information.

$ find . -type f -name 'stdout' -or -name 'stderr' -or -name 'script' -or \
-name '*.qc' -or -name '*.txt' -or -name '*.log' -or -name '*.png' -or -name '*.pdf' \
| xargs tar -zcvf debug_[ISSUE_ID].tar.gz
  3. Post an issue with the tarball (.tar.gz) attached.

WomLong error for slurm execution

Describe the bug
Running with slurm_singularity backend doesn't work for me. I have narrowed it down to this bug in cromwell:

broadinstitute/cromwell#4659

Applying the suggested workaround to the memory parameter makes it work. Just letting you know in case you stumble upon something similar.

OS/Platform and dependencies

  • OS or Platform: Ubuntu 16.04
  • Cromwell/dxWDL version: cromwell 38-unknown-SNAP (cromwell --version)
  • Conda version: 4.5.12
  • singularity version: 2.6.1-dist

Attach error logs
Same/similar as reported in the cromwell bug above

  1. (OPTIONAL) Run the following command to collect all logs. For the developers' convenience, please add [ISSUE_ID] to the name of the tarball file. This command will generate a tarball including all debugging information. Post an issue with the tarball (.tar.gz) attached.
$ find . -type f -name 'stdout' -or -name 'stderr' -or -name 'script' -or \
-name '*.qc' -or -name '*.txt' -or -name '*.log' -or -name '*.png' -or -name '*.pdf' \
| xargs tar -zcvf debug_[ISSUE_ID].tar.gz

call-reproducibility_overlap step is failing

Hi,

I am running into an issue in one of the last steps of the pipeline. It seems that some files are missing:

chip/1f86b29f-f80b-4bb5-abf4-a5201fb58c05/call-reproducibility_overlap/execution

stderr

ln: accessing 'optimal_peak.gz': No such file or directory
ln: accessing 'conservative_peak.gz': No such file or directory

The following files are in the directory:

[screenshot of directory listing omitted]

Any idea what is wrong here?

Thanks a lot!

Best,
Florian

Pipeline issue with call-bwa step

Describe the bug
Whenever I run the pipeline using raw fastq files as the input I get this error:
Exception: bwa index does not exists. Prefix = ~/cromwell-executions/chip/621c36a5-68e0-4ad4-b201-b5c477fb5cfc/call-bwa/shard-0/inputs/1499526/null
ln: failed to access '*.bam': No such file or directory
ln: failed to access '*.bai': No such file or directory
ln: failed to access '*.flagstat.qc': No such file or directory

OS/Platform and dependencies

  • CentOS 7.5, running SLURM scheduler
  • cromwell-34
  • Conda 4.5.11

Attach logs
debug_33.tar.gz

results are not consistent from xcor analysis

Hi,
Thanks for making this great pipeline!

I ran the pipeline three times with the same data but got very different results from the xcor analysis, especially for the value of RSC.

I have two replicates for the ChIP samples. The first time I used the FASTQ files as input and got RSC values of 0.58 and 0.69, respectively, which would indicate low library quality, but I don't believe that at all since the other QC metrics look good. The second time I used the deduplicated BAMs (the nodup_bam files from the first run) as input, reran the xcor analysis in your pipeline, and got RSC values of 1.08 and 1.07, which is what I expected. Then I ran the xcor analysis again and got RSC around 1.08 for both samples. I know your pipeline integrates Phantompeakqualtools for the xcor analysis, which uses 15M randomly selected reads as input, so variation from the read selection can affect the results. But I didn't expect the results to be this inconsistent! It is possible that people would make the wrong decision about library quality. So what do you think about this issue? Is there any way to avoid it? Increase the number of reads used as input? Many thanks.

BTW, I attached the figures generated from xcor analysis in case you would like to take a look.

1st_sample2.pdf
1st_sample1.pdf
2nd_sample1.pdf
2nd_sample2.pdf

Bad output 'macs2.bfilt_npeak': Failed to find index Success(WomInteger(0)) on array

I have run this pipeline with a deduplicated BAM as input and that worked. However, when I ran with fastq.gz as input, I encountered the following error:

[2018-10-10 15:34:55,23] [info] Running with database db.url = jdbc:hsqldb:mem:ecd30a73-9553-4e46-8424-5f4227afb55b;shutdown=false;hsqldb.tx=mvcc
[2018-10-10 15:35:05,14] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2018-10-10 15:35:05,16] [info] [RenameWorkflowOptionsInMetadata] 100%
[2018-10-10 15:35:05,26] [info] Running with database db.url = jdbc:hsqldb:mem:1170c126-8f24-448b-99cb-8c3254c2f7dc;shutdown=false;hsqldb.tx=mvcc
[2018-10-10 15:35:05,66] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2018-10-10 15:35:05,66] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2018-10-10 15:35:05,67] [info] Using noop to send events.
[2018-10-10 15:35:05,97] [info] Slf4jLogger started
[2018-10-10 15:35:06,27] [info] Workflow heartbeat configuration:
{
"cromwellId" : "cromid-bac7c2f",
"heartbeatInterval" : "2 minutes",
"ttl" : "10 minutes",
"writeBatchSize" : 10000,
"writeThreshold" : 10000
}
[2018-10-10 15:35:06,42] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2018-10-10 15:35:06,45] [info] Metadata summary refreshing every 2 seconds.
[2018-10-10 15:35:06,47] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2018-10-10 15:35:06,52] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2018-10-10 15:35:07,62] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2018-10-10 15:35:07,66] [info] SingleWorkflowRunnerActor: Version 34
[2018-10-10 15:35:07,68] [info] SingleWorkflowRunnerActor: Submitting workflow
[2018-10-10 15:35:07,72] [info] PAPIQueryManager Running with 3 workers
[2018-10-10 15:35:07,72] [info] JES batch polling interval is 33333 milliseconds
[2018-10-10 15:35:07,72] [info] JES batch polling interval is 33333 milliseconds
[2018-10-10 15:35:07,73] [info] JES batch polling interval is 33333 milliseconds
[2018-10-10 15:35:07,82] [info] Unspecified type (Unspecified version) workflow f99b0390-cc7a-43a0-a7fa-294cf7d512af submitted
[2018-10-10 15:35:07,88] [info] SingleWorkflowRunnerActor: Workflow submitted f99b0390-cc7a-43a0-a7fa-294cf7d512af
[2018-10-10 15:35:07,88] [info] 1 new workflows fetched
[2018-10-10 15:35:07,88] [info] WorkflowManagerActor Starting workflow f99b0390-cc7a-43a0-a7fa-294cf7d512af
[2018-10-10 15:35:07,90] [info] WorkflowManagerActor Successfully started WorkflowActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af
[2018-10-10 15:35:07,90] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2018-10-10 15:35:07,91] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2018-10-10 15:35:07,92] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2018-10-10 15:35:08,03] [info] MaterializeWorkflowDescriptorActor [f99b0390]: Parsing workflow as WDL draft-2
[2018-10-10 15:35:39,69] [info] MaterializeWorkflowDescriptorActor [f99b0390]: Call-to-Backend assignments: chip.overlap_ppr -> slurm, chip.bam2ta -> slurm, chip.qc_report -> slurm, chip.filter_ctl -> slurm, chip.idr_ppr -> slurm, chip.macs2_ppr2 -> slurm, chip.choose_ctl -> slurm, chip.fingerprint -> slurm, chip.xcor -> slurm, chip.spp_pooled -> slurm, chip.overlap_pr -> slurm, chip.spp_ppr1 -> slurm, chip.bam2ta_no_filt -> slurm, chip.macs2 -> slurm, chip.pool_ta_ctl -> slurm, chip.spp -> slurm, chip.merge_fastq_ctl -> slurm, chip.fraglen_mean -> slurm, chip.idr -> slurm, chip.pool_ta -> slurm, chip.overlap -> slurm, chip.read_genome_tsv -> slurm, chip.reproducibility_overlap -> slurm, chip.filter -> slurm, chip.macs2_pr1 -> slurm, chip.merge_fastq -> slurm, chip.idr_pr -> slurm, chip.spr -> slurm, chip.bam2ta_ctl -> slurm, chip.macs2_pr2 -> slurm, chip.macs2_ppr1 -> slurm, chip.macs2_pooled -> slurm, chip.pool_ta_pr2 -> slurm, chip.spp_pr1 -> slurm, chip.bwa -> slurm, chip.bam2ta_no_filt_R1 -> slurm, chip.spp_ppr2 -> slurm, chip.bwa_R1 -> slurm, chip.trim_fastq -> slurm, chip.bwa_ctl -> slurm, chip.pool_ta_pr1 -> slurm, chip.spp_pr2 -> slurm, chip.reproducibility_idr -> slurm
[2018-10-10 15:35:39,84] [warn] slurm [f99b0390]: Key/s [disks] is/are not supported by backend. Unsupported attributes will not be part of job executions. (repeated 34 times)
[2018-10-10 15:35:39,85] [warn] slurm [f99b0390]: Key/s [preemptible, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions. (repeated 9 times)
[2018-10-10 15:35:42,12] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.read_genome_tsv
[2018-10-10 15:35:42,13] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition met: '!align_only && !true_rep_only'. Running conditional section
[2018-10-10 15:35:42,13] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition met: '!true_rep_only'. Running conditional section
[2018-10-10 15:35:42,74] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.read_genome_tsv:NA:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 15:35:43,20] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'enable_idr'. Bypassing conditional section
[2018-10-10 15:35:43,22] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'peak_caller_ == "spp"'. Bypassing conditional section
[2018-10-10 15:35:43,22] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition met: 'peak_caller_ == "macs2"'. Running conditional section
[2018-10-10 15:35:43,22] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!align_only && !true_rep_only && enable_idr'. Bypassing conditional section
[2018-10-10 15:35:43,22] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'enable_idr'. Bypassing conditional section
[2018-10-10 15:35:43,22] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'peak_caller_ == "spp"'. Bypassing conditional section
[2018-10-10 15:35:43,41] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.read_genome_tsv:NA:1]: cat /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-read_genome_tsv/inputs/58336912/hg19.tsv
[2018-10-10 15:35:43,49] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.read_genome_tsv:NA:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_read_genome_tsv
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-read_genome_tsv
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-read_genome_tsv/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-read_genome_tsv/execution/stderr
-t 60
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=4000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-read_genome_tsv/execution/script"
[2018-10-10 15:35:46,30] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.merge_fastq_ctl, chip.merge_fastq
[2018-10-10 15:35:46,47] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.read_genome_tsv:NA:1]: job id: 28445729
[2018-10-10 15:35:46,47] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.read_genome_tsv:NA:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 15:35:46,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 15:35:46,65] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq_ctl:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 15:35:46,72] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0/inputs/661839578/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.fastq.gz: Invalid cross-device link
[2018-10-10 15:35:46,72] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0/inputs/661839578/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.fastq.gz: Invalid cross-device link
[2018-10-10 15:35:46,74] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0/inputs/661839578/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R2.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R2.fastq.gz: Invalid cross-device link
[2018-10-10 15:35:46,74] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0/inputs/661839578/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R2.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R2.fastq.gz: Invalid cross-device link
[2018-10-10 15:35:46,79] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq:0:1]: python $(which encode_merge_fastq.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0/execution/write_tsv_82e931bc7d1a525bafe0ce1331b4b615.tmp
--paired-end
--nth 2
[2018-10-10 15:35:46,80] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq_ctl:0:1]: python $(which encode_merge_fastq.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0/execution/write_tsv_5318eadf4f1e11ea3b195ca44b819fac.tmp
--paired-end
--nth 2
[2018-10-10 15:35:46,81] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_merge_fastq
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=12000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq/shard-0/execution/script"
[2018-10-10 15:35:46,83] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq_ctl:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_merge_fastq_ctl
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=12000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-merge_fastq_ctl/shard-0/execution/script"
[2018-10-10 15:35:51,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq:0:1]: job id: 28445734
[2018-10-10 15:35:51,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq_ctl:0:1]: job id: 28445732
[2018-10-10 15:35:51,47] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 15:35:51,47] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 15:37:11,58] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 15:37:17,02] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.trim_fastq
[2018-10-10 15:37:17,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.trim_fastq:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 15:37:17,69] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-trim_fastq/shard-0/inputs/-567840873/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.fastq.gz: Invalid cross-device link
[2018-10-10 15:37:17,70] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.trim_fastq:0:1]: python $(which encode_trim_fastq.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-trim_fastq/shard-0/inputs/-567840873/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.fastq.gz
--trim-bp 50
[2018-10-10 15:37:17,74] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.trim_fastq:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_trim_fastq
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-trim_fastq/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-trim_fastq/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-trim_fastq/shard-0/execution/stderr
-t 60
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=8000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-trim_fastq/shard-0/execution/script"
[2018-10-10 15:37:21,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.trim_fastq:0:1]: job id: 28445811
[2018-10-10 15:37:21,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.trim_fastq:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 15:37:21,71] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.merge_fastq_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 15:37:42,59] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.read_genome_tsv:NA:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 15:37:45,57] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.bwa, chip.bwa_ctl
[2018-10-10 15:37:45,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa:0:1]: Unrecognized runtime attribute keys: preemptible, disks
[2018-10-10 15:37:45,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_ctl:0:1]: Unrecognized runtime attribute keys: preemptible, disks
[2018-10-10 15:37:45,72] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/inputs/44672371/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.fastq.gz: Invalid cross-device link
[2018-10-10 15:37:45,73] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/inputs/-567840873/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.fastq.gz: Invalid cross-device link
[2018-10-10 15:37:45,74] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/inputs/44672371/merge_fastqs_R2_SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R2.merged.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R2.fastq.gz: Invalid cross-device link
[2018-10-10 15:37:45,74] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_ctl:0:1]: python $(which encode_bwa.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/inputs/-1093316416/male.hg19.fa.tar
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/inputs/44672371/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.fastq.gz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/inputs/44672371/merge_fastqs_R2_SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R2.merged.fastq.gz
--paired-end
--nth 4
[2018-10-10 15:37:45,75] [warn] Localization via hard link has failed: /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/inputs/-567840873/merge_fastqs_R2_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R2.merged.fastq.gz -> /scratch/users/mkmwong/RbKO_WT_ChIP/allFastq/original/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R2.fastq.gz: Invalid cross-device link
[2018-10-10 15:37:45,76] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa:0:1]: python $(which encode_bwa.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/inputs/-1093316416/male.hg19.fa.tar
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/inputs/-567840873/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.fastq.gz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/inputs/-567840873/merge_fastqs_R2_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R2.merged.fastq.gz
--paired-end
--nth 4
[2018-10-10 15:37:45,78] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_ctl:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bwa_ctl
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/execution/stderr
-t 2880
-n 1
--ntasks-per-node=1
--cpus-per-task=4
--mem=20000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_ctl/shard-0/execution/script"
[2018-10-10 15:37:45,79] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bwa
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/execution/stderr
-t 2880
-n 1
--ntasks-per-node=1
--cpus-per-task=4
--mem=20000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa/shard-0/execution/script"
[2018-10-10 15:37:51,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa:0:1]: job id: 28445834
[2018-10-10 15:37:51,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_ctl:0:1]: job id: 28445836
[2018-10-10 15:37:51,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 15:37:51,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 15:49:55,18] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.trim_fastq:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 15:50:00,71] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.bwa_R1
[2018-10-10 15:50:01,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_R1:0:1]: Unrecognized runtime attribute keys: preemptible, disks
[2018-10-10 15:50:01,74] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_R1:0:1]: python $(which encode_bwa.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_R1/shard-0/inputs/-1093316416/male.hg19.fa.tar
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_R1/shard-0/inputs/913187384/merge_fastqs_R1_SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.trim_50bp.fastq.gz

--nth 4
[2018-10-10 15:50:01,77] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_R1:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bwa_R1
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_R1/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_R1/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_R1/shard-0/execution/stderr
-t 2880
-n 1
--ntasks-per-node=1
--cpus-per-task=4
--mem=20000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bwa_R1/shard-0/execution/script"
[2018-10-10 15:50:06,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_R1:0:1]: job id: 28447299
[2018-10-10 15:50:06,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_R1:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 16:34:37,99] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa_R1:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 16:34:40,06] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.bam2ta_no_filt_R1
[2018-10-10 16:34:40,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt_R1:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 16:34:40,71] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt_R1:0:1]: python $(which encode_bam2ta.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt_R1/shard-0/inputs/-55055628/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.trim_50bp.bam

--disable-tn5-shift
--regex-grep-v-ta 'chrM'
--subsample 0
[2018-10-10 16:34:40,75] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt_R1:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bam2ta_no_filt_R1
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt_R1/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt_R1/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt_R1/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=10000
--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt_R1/shard-0/execution/script"
[2018-10-10 16:34:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt_R1:0:1]: job id: 28451098
[2018-10-10 16:34:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt_R1:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 16:40:22,98] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt_R1:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 20:41:30,97] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bwa:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 20:41:36,17] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.filter, chip.bam2ta_no_filt
[2018-10-10 20:41:36,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.bam2ta_no_filt:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 20:41:36,69] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.filter:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 20:41:36,75] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390 chip.filter:0:1]: python $(which encode_filter.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter/shard-0/inputs/1119499152/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.bam
--paired-end

--dup-marker picard
--mapq-thresh 30
\

# ugly part to deal with optional outputs with Google JES backend

touch null
[2018-10-10 20:41:36,76] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_no_filt:0:1]: python $(which encode_bam2ta.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt/shard-0/inputs/1119499152/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.bam
--paired-end
--disable-tn5-shift
--regex-grep-v-ta 'chrM'
--subsample 0 \
[2018-10-10 20:41:36,79] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_no_filt:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bam2ta_no_filt
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=10000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_no_filt/shard-0/execution/script"
[2018-10-10 20:41:36,81] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_filter
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter/shard-0/execution/stderr
-t 1440
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=20000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter/shard-0/execution/script"
[2018-10-10 20:41:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_no_filt:0:1]: job id: 28469190
[2018-10-10 20:41:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter:0:1]: job id: 28469189
[2018-10-10 20:41:41,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 20:41:41,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_no_filt:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 21:26:40,43] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_no_filt:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 22:34:18,23] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 22:34:23,63] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.bam2ta
[2018-10-10 22:34:24,62] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 22:34:24,70] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta:0:1]: python $(which encode_bam2ta.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta/shard-0/inputs/-1132106790/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.nodup.bam
--paired-end
--disable-tn5-shift
--regex-grep-v-ta 'chrM'
--subsample 0 \
[2018-10-10 22:34:24,73] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bam2ta
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=10000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta/shard-0/execution/script"
[2018-10-10 22:34:26,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta:0:1]: job id: 28473079
[2018-10-10 22:34:26,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 23:00:31,64] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 23:00:34,78] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!true_rep_only && length(tas_) > 1 && peak_caller_ == "macs2"'. Bypassing conditional section
[2018-10-10 23:00:34,78] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!true_rep_only && length(tas_) > 1'. Bypassing conditional section
[2018-10-10 23:00:34,78] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'length(tas_) > 1 && peak_caller_ == "spp"'. Bypassing conditional section
[2018-10-10 23:00:34,78] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'length(tas_) > 1'. Bypassing conditional section
[2018-10-10 23:00:34,78] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!true_rep_only && length(tas_) > 1 && peak_caller_ == "spp"'. Bypassing conditional section
[2018-10-10 23:00:34,78] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!true_rep_only && length(tas_) > 1 && peak_caller_ == "spp"'. Bypassing conditional section
[2018-10-10 23:00:34,79] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!true_rep_only && length(tas_) > 1 && peak_caller_ == "macs2"'. Bypassing conditional section
[2018-10-10 23:00:34,79] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: 'length(tas_) > 1'. Bypassing conditional section
[2018-10-10 23:00:36,84] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.xcor, chip.spr
[2018-10-10 23:00:37,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.spr:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 23:00:37,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.xcor:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 23:00:37,73] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.spr:0:1]: python $(which encode_spr.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-spr/shard-0/inputs/-922975244/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.nodup.tagAlign.gz
--paired-end
[2018-10-10 23:00:37,73] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.xcor:0:1]: python $(which encode_xcor.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-xcor/shard-0/inputs/-155087896/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.trim_50bp.tagAlign.gz

--subsample 15000000
--nth 2
[2018-10-10 23:00:37,76] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.spr:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_spr
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-spr/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-spr/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-spr/shard-0/execution/stderr
-t 60
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=16000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-spr/shard-0/execution/script"
[2018-10-10 23:00:37,77] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.xcor:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_xcor
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-xcor/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-xcor/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-xcor/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=16000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-xcor/shard-0/execution/script"
[2018-10-10 23:00:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.xcor:0:1]: job id: 28475398
[2018-10-10 23:00:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.spr:0:1]: job id: 28475397
[2018-10-10 23:00:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.xcor:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 23:00:41,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.spr:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 23:06:01,11] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.spr:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 23:20:11,72] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.xcor:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-10 23:20:15,40] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.fraglen_mean
[2018-10-10 23:20:15,63] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.fraglen_mean:NA:1]: Unrecognized runtime attribute keys: disks
[2018-10-10 23:20:15,66] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.fraglen_mean:NA:1]: python <<CODE
arr = [-5]
if len(arr):
    sum_ = sum(arr)
    mean_ = sum(arr)/float(len(arr))
    print(int(round(mean_)))
else:
    print(0)
CODE
--export=ALL
-J cromwell_f99b0390_fraglen_mean
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-fraglen_mean
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-fraglen_mean/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-fraglen_mean/execution/stderr
-t 60
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=4000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-fraglen_mean/execution/script"
[2018-10-10 23:20:21,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.fraglen_mean:NA:1]: job id: 28476124
[2018-10-10 23:20:21,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.fraglen_mean:NA:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-10 23:22:22,12] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.fraglen_mean:NA:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 00:22:07,64] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bwa_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 00:22:13,31] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.filter_ctl
[2018-10-11 00:22:13,63] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter_ctl:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 00:22:13,71] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter_ctl:0:1]: python $(which encode_filter.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter_ctl/shard-0/inputs/42388076/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.bam
--paired-end

--dup-marker picard
--mapq-thresh 30
\

# ugly part to deal with optional outputs with Google JES backend

touch null
[2018-10-11 00:22:13,75] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter_ctl:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_filter_ctl
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter_ctl/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter_ctl/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter_ctl/shard-0/execution/stderr
-t 1440
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=20000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-filter_ctl/shard-0/execution/script"
[2018-10-11 00:22:16,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter_ctl:0:1]: job id: 28479287
[2018-10-11 00:22:16,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:16:26,86] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.filter_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:16:30,91] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition NOT met: '!no_jsd && length(nodup_bams_) > 0 && length(ctl_nodup_bams_) > 0 && basename(blacklist) != "null"'. Bypassing conditional section
[2018-10-11 03:16:32,95] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.bam2ta_ctl
[2018-10-11 03:16:33,62] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_ctl:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 03:16:33,70] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_ctl:0:1]: python $(which encode_bam2ta.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_ctl/shard-0/inputs/-1207382602/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.nodup.bam
--paired-end
--disable-tn5-shift
--regex-grep-v-ta 'chrM'
--subsample 0 \
[2018-10-11 03:16:33,73] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_ctl:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_bam2ta_ctl
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_ctl/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_ctl/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_ctl/shard-0/execution/stderr
-t 360
-n 1
--ntasks-per-node=1
--cpus-per-task=2
--mem=10000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-bam2ta_ctl/shard-0/execution/script"
[2018-10-11 03:16:36,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_ctl:0:1]: job id: 28483144
[2018-10-11 03:16:36,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:47:39,16] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.bam2ta_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:47:42,63] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition met: 'length(tas_) > 0 && length(ctl_tas_) > 0'. Running conditional section
[2018-10-11 03:47:42,63] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Condition met: 'length(ctl_tas_) > 0'. Running conditional section
[2018-10-11 03:47:44,66] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.pool_ta_ctl
[2018-10-11 03:47:45,62] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.pool_ta_ctl:NA:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 03:47:45,68] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.pool_ta_ctl:NA:1]: python $(which encode_pool_ta.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-pool_ta_ctl/inputs/1500631376/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.nodup.tagAlign.gz
[2018-10-11 03:47:45,71] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.pool_ta_ctl:NA:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_pool_ta_ctl
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-pool_ta_ctl
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-pool_ta_ctl/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-pool_ta_ctl/execution/stderr
-t 60
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=4000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-pool_ta_ctl/execution/script"
[2018-10-11 03:47:46,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.pool_ta_ctl:NA:1]: job id: 28485165
[2018-10-11 03:47:46,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.pool_ta_ctl:NA:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:50:16,49] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.pool_ta_ctl:NA:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:50:18,63] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.choose_ctl
[2018-10-11 03:50:19,62] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.choose_ctl:NA:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 03:50:19,74] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.choose_ctl:NA:1]: python $(which encode_choose_ctl.py)
--tas /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl/inputs/-922975244/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.nodup.tagAlign.gz
--ctl-tas /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl/inputs/1500631376/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.nodup.tagAlign.gz

--ctl-ta-pooled /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl/inputs/311943167/SCGPM_RbWTK9K27_HTYFT_L7_CTTGTA_R1.merged.nodup.tagAlign.gz
--always-use-pooled-ctl
--ctl-depth-ratio 1.2
[2018-10-11 03:50:19,78] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.choose_ctl:NA:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_choose_ctl
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl/execution/stderr
-t 60
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=8000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-choose_ctl/execution/script"
[2018-10-11 03:50:21,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.choose_ctl:NA:1]: job id: 28485274
[2018-10-11 03:50:21,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.choose_ctl:NA:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:52:47,47] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.choose_ctl:NA:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:52:51,57] [info] WorkflowExecutionActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af [f99b0390]: Starting chip.macs2, chip.macs2_pr2, chip.macs2_pr1
[2018-10-11 03:52:51,63] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 03:52:51,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr2:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 03:52:51,64] [warn] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr1:0:1]: Unrecognized runtime attribute keys: disks
[2018-10-11 03:52:52,44] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2:0:1]: python $(which encode_macs2_chip.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/inputs/-922975244/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.nodup.tagAlign.gz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/inputs/-1900987009/ctl_for_rep1.tagAlign.gz
--gensz hs
--chrsz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/inputs/58336912/hg19.chrom.sizes
--fraglen -5
--cap-num-peak 500000
--pval-thresh 0.01
--make-signal
--blacklist /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/inputs/58336912/wgEncodeDacMapabilityConsensusExcludable.bed.gz

touch null # ugly part to deal with optional outputs
[2018-10-11 03:52:52,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr2:0:1]: python $(which encode_macs2_chip.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/inputs/-303265875/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.nodup.pr2.tagAlign.gz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/inputs/-1900987009/ctl_for_rep1.tagAlign.gz
--gensz hs
--chrsz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/inputs/58336912/hg19.chrom.sizes
--fraglen -5
--cap-num-peak 500000
--pval-thresh 0.01

--blacklist /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/inputs/58336912/wgEncodeDacMapabilityConsensusExcludable.bed.gz

touch null.pval.signal.bigwig null.fc.signal.bigwig
touch null # ugly part to deal with optional outputs
[2018-10-11 03:52:52,47] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr1:0:1]: python $(which encode_macs2_chip.py)
/home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/inputs/1660355428/SCGPM_RbWTK9K27_HTYFT_L7_TAGCTT_R1.merged.nodup.pr1.tagAlign.gz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/inputs/-1900987009/ctl_for_rep1.tagAlign.gz
--gensz hs
--chrsz /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/inputs/58336912/hg19.chrom.sizes
--fraglen -5
--cap-num-peak 500000
--pval-thresh 0.01

--blacklist /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/inputs/58336912/wgEncodeDacMapabilityConsensusExcludable.bed.gz

touch null.pval.signal.bigwig null.fc.signal.bigwig
touch null # ugly part to deal with optional outputs
[2018-10-11 03:52:52,48] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_macs2
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/execution/stderr
-t 1440
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=16000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2/shard-0/execution/script"
[2018-10-11 03:52:52,50] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr2:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_macs2_pr2
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/execution/stderr
-t 1440
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=16000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr2/shard-0/execution/script"
[2018-10-11 03:52:52,52] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr1:0:1]: executing: sbatch
--export=ALL
-J cromwell_f99b0390_macs2_pr1
-D /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0
-o /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/execution/stdout
-e /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/execution/stderr
-t 1440
-n 1
--ntasks-per-node=1
--cpus-per-task=1
--mem=16000




--wrap "/bin/bash /home/groups/ashbym/mandy/chip-seq-pipeline2/cromwell-executions/chip/f99b0390-cc7a-43a0-a7fa-294cf7d512af/call-macs2_pr1/shard-0/execution/script"
[2018-10-11 03:52:56,45] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr1:0:1]: job id: 28485366
[2018-10-11 03:52:56,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr2:0:1]: job id: 28485368
[2018-10-11 03:52:56,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2:0:1]: job id: 28485367
[2018-10-11 03:52:56,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr1:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:52:56,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr2:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:52:56,46] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-10-11 03:54:05,40] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr1:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:54:19,71] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:54:30,01] [info] DispatchedConfigAsyncJobExecutionActor [f99b0390chip.macs2_pr2:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-10-11 03:54:30,47] [error] WorkflowManagerActor Workflow f99b0390-cc7a-43a0-a7fa-294cf7d512af failed (during ExecutingWorkflowState): cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'macs2_pr1.npeak': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2_pr1.bfilt_npeak': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2_pr1.bfilt_npeak_bb': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2_pr1.frip_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
    at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:839)
    at scala.util.Success.$anonfun$map$1(Try.scala:251)
    at scala.util.Success.map(Try.scala:209)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'macs2.npeak': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2.bfilt_npeak': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2.bfilt_npeak_bb': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2.sig_pval': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2.sig_fc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2.frip_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
    at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:839)
    at scala.util.Success.$anonfun$map$1(Try.scala:251)
    at scala.util.Success.map(Try.scala:209)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'macs2_pr2.npeak': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2_pr2.bfilt_npeak': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2_pr2.bfilt_npeak_bb': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'macs2_pr2.frip_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
    at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:839)
    at scala.util.Success.$anonfun$map$1(Try.scala:251)
    at scala.util.Success.map(Try.scala:209)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[2018-10-11 03:54:30,48] [info] WorkflowManagerActor WorkflowActor-f99b0390-cc7a-43a0-a7fa-294cf7d512af is in a terminal state: WorkflowFailedState
[2018-10-11 03:54:32,98] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
[2018-10-11 03:54:36,49] [info] Workflow polling stopped
[2018-10-11 03:54:36,50] [info] Shutting down WorkflowStoreActor - Timeout = 5 seconds
[2018-10-11 03:54:36,51] [info] Shutting down WorkflowLogCopyRouter - Timeout = 5 seconds
[2018-10-11 03:54:36,51] [info] Shutting down JobExecutionTokenDispenser - Timeout = 5 seconds
[2018-10-11 03:54:36,51] [info] JobExecutionTokenDispenser stopped
[2018-10-11 03:54:36,51] [info] Aborting all running workflows.
[2018-10-11 03:54:36,51] [info] WorkflowStoreActor stopped
[2018-10-11 03:54:36,52] [info] Shutting down WorkflowManagerActor - Timeout = 3600 seconds
[2018-10-11 03:54:36,52] [info] WorkflowLogCopyRouter stopped
[2018-10-11 03:54:36,52] [info] Connection pools shut down
[2018-10-11 03:54:36,52] [info] Shutting down SubWorkflowStoreActor - Timeout = 1800 seconds
[2018-10-11 03:54:36,52] [info] Shutting down JobStoreActor - Timeout = 1800 seconds
[2018-10-11 03:54:36,52] [info] Shutting down CallCacheWriteActor - Timeout = 1800 seconds
[2018-10-11 03:54:36,52] [info] Shutting down ServiceRegistryActor - Timeout = 1800 seconds
[2018-10-11 03:54:36,52] [info] Shutting down DockerHashActor - Timeout = 1800 seconds
[2018-10-11 03:54:36,52] [info] Shutting down IoProxy - Timeout = 1800 seconds
[2018-10-11 03:54:36,52] [info] WorkflowManagerActor stopped
[2018-10-11 03:54:36,52] [info] DockerHashActor stopped
[2018-10-11 03:54:36,52] [info] IoProxy stopped
[2018-10-11 03:54:36,52] [info] CallCacheWriteActor Shutting down: 0 queued messages to process
[2018-10-11 03:54:36,52] [info] SubWorkflowStoreActor stopped
[2018-10-11 03:54:36,52] [info] CallCacheWriteActor stopped
[2018-10-11 03:54:36,52] [info] WorkflowManagerActor All workflows finished
[2018-10-11 03:54:36,52] [info] JobStoreActor stopped
[2018-10-11 03:54:36,53] [info] WriteMetadataActor Shutting down: 0 queued messages to process
[2018-10-11 03:54:36,53] [info] KvWriteActor Shutting down: 0 queued messages to process
[2018-10-11 03:54:36,54] [info] ServiceRegistryActor stopped
[2018-10-11 03:54:36,54] [info] Database closed
[2018-10-11 03:54:36,54] [info] Stream materializer shut down
Workflow f99b0390-cc7a-43a0-a7fa-294cf7d512af transitioned to state Failed
[2018-10-11 03:54:36,57] [info] Automatic shutdown of the async connection
[2018-10-11 03:54:36,57] [info] Gracefully shutdown sentry threads.
[2018-10-11 03:54:36,57] [info] Shutdown finished.

And here's my PATH after activating the conda environment:
/home/groups/ashbym/mandy/anaconda3/envs/encode-chip-seq-pipeline/bin:/home/users/mkmwong/perl5/bin:/share/software/user/open/perl/5.26.0/bin:/home/groups/ashbym/mandy/anaconda3/bin:/share/software/user/srcc/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/iSM/bin:/home/groups/ashbym/mandy/.bds:/home/groups/ashbym/mandy/anaconda3/bin/samtools:/home/groups/ashbym/mandy/bowtie2-2.3.4.3-linux-x86_64:/home/groups/ashbym/mandy/bedtools2/bin:/home/groups/ashbym/mandy/sratoolkit.2.9.2-centos_linux64/bin:/home/users/mkmwong/bin:/home/users/mkmwong/localperl/bin

Thank you!

Bad output 'qc_report.qc_json_match'

Describe the bug

This is a bug that pops up at the end of the pipeline run:

[2018-07-18 14:31:17,39] [info] PipelinesApiAsyncBackendJobExecutionActor [eca1fbc4chip.qc_report:NA:1]: Status change from Running to Success
[2018-07-18 14:31:19,33] [error] WorkflowManagerActor Workflow eca1fbc4-8be6-4401-90c4-8af43c017326 failed (during ExecutingWorkflowState): cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'qc_report.qc_json_match': java.nio.file.NoSuchFileException: gs://workflow-challenge/vanessasaurus/chip/eca1fbc4-8be6-4401-90c4-8af43c017326/call-qc_report/qc_json_match.txt
    at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:824)
    at scala.util.Success.$anonfun$map$1(Try.scala:251)
    at scala.util.Success.map(Try.scala:209)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

OS/Platform and dependencies
Google Cloud, run via a Docker container, per the instructions here:

https://vsoch.github.io/wdl-pipelines/pipeline-chip-seq

Attach logs
The run, by way of a container, exited after finishing, so there aren't any logs. Users should probably be advised to bind-mount the log directory into the container so the logs persist when it exits.

BWA mem instead of aln

I am currently running the pipeline on 2 x 151 bp ChIP-seq datasets, and I find that BWA's 'mem' algorithm works better than 'aln' for my data. In particular, the mapped ratio improved from 59% to 95%, and the properly paired ratio improved from 53% to 95%. The BWA documentation seems to recommend 'mem' for reads longer than 70 bp, and the ENCODE ChIP-seq guidelines also encourage longer read lengths (more than 50 bp). Would it be possible to add a 'bwa mem' option to the workflow?
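
For reference, the two invocations differ roughly as follows (a minimal sketch with illustrative file names and thread counts, not the pipeline's exact commands):

$ # bwa aln: two-step alignment, shown here for single-end reads
$ bwa aln -t 4 hg19.fa reads.fastq.gz > reads.sai
$ bwa samse hg19.fa reads.sai reads.fastq.gz | samtools sort -o reads.bam
$ # bwa mem: single step, generally recommended for reads longer than ~70 bp
$ bwa mem -t 4 hg19.fa reads_R1.fastq.gz reads_R2.fastq.gz | samtools sort -o reads.bam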

Thanks for your consideration!

chip-seq-pipeline2-1.1.6 slurm singularity tutorial questions

Describe the bug
A clear and concise description of what the problem is.

OS/Platform and dependencies

  • OS or Platform: [e.g. Ubuntu 16.04, Google Cloud, Stanford Sherlock/SCG cluster, ...]
  • Cromwell/dxWDL version: [e.g. cromwell-34.jar, dxWDL-78.jar]
  • Conda version: If you have used Conda ($ conda --version).
  • singularity version: If you have used singularity ($ singularity --version).

Attach error logs
For Cromwell users only.

  1. Move to your working directory where you ran a pipeline. You should be able to find a directory named cromwell-executions/ which includes all outputs and logs for debugging.

  2. Run the following command line to print all non-empty STDERR outputs. This will be greatly helpful for developers to figure out the problem. Copy-paste its output to the issue page.

$ find -name stderr -not -empty | xargs tail -n +1
  3. (OPTIONAL) Run the following command to collect all logs. For developers' convenience, please add [ISSUE_ID] to the name of the tar ball file. This command will generate a tar ball including all debugging information. Post an issue with the tar ball (.tar.gz) attached.
$ find . -type f -name 'stdout' -or -name 'stderr' -or -name 'script' -or \
-name '*.qc' -or -name '*.txt' -or -name '*.log' -or -name '*.png' -or -name '*.pdf' \
| xargs tar -zcvf debug_[ISSUE_ID].tar.gz

Where to find rep1/rep2 signal tracks?

As the title says: is it in call_macs-ppr1/2 or in call_macs-pr1/2? Inside all those folders, I can only find null.fc.signal.bigwig, which has a size of 0.
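
In the meantime, a quick way to check whether any non-empty signal track exists anywhere under the output tree (a generic shell one-liner, not pipeline-specific):

$ find cromwell-executions -name '*.signal.bigwig' -size +0c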

JSD-related error on conda pipeline for histone ChIP-seq

I'm having a persistent JSD-related error. I'm running the pipeline on a SLURM cluster with Conda version 4.6.14 and cromwell-38. I have no control data, so it seems I can't create a JSD plot. I disabled JSD, but the error persists (see the sketch below). This error also began to occur after I disabled all instances where IDR was called, in an attempt to fix issue #78.
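
For reference, disabling JSD in the input JSON should look something like this (an assumption based on the !no_jsd condition visible in the SLURM log earlier on this page; the exact key name may differ across pipeline versions):

$ cat input.json
{
    "chip.no_jsd": true
}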

Error Logs

cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1$$anon$1: Call input and runtime attributes evaluation failed for qc_report:
Failed to evaluate input 'jsd_qcs' (reason 1 of 1): Failed to lookup input value for required input jsd_qcs
    at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:70)
    at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:66)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:34)
    at akka.actor.FSM.processEvent(FSM.scala:684)
    at akka.actor.FSM.processEvent$(FSM.scala:681)
    at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.processEvent(JobPreparationActor.scala:42)
    at akka.actor.FSM.akka$actor$FSM$$processMsg(FSM.scala:678)
    at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:672)
    at akka.actor.Actor.aroundReceive(Actor.scala:517)
    at akka.actor.Actor.aroundReceive$(Actor.scala:515)
    at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.aroundReceive(JobPreparationActor.scala:42)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
    at akka.actor.ActorCell.invoke(ActorCell.scala:557)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

This didn't show up in stderr, but the workflow failed when I got this error.

debug_79.tar.gz

Leveraging Cluster to Run Parts of Pipeline in Parallel

I am running the ChIP-seq pipeline on a SLURM cluster, and I would like to use the cluster to run parts of the pipeline in parallel (e.g., mapping the reads from the different replicates in parallel). Am I correct in thinking that I need to use #SBATCH -n 2 (or > 2) to do this? If so, it would be great if you could clarify this somewhere. The current example script says:
"do not touch these settings
number of tasks and nodes are fixed at 1"
which suggests that the pipeline will fail if -n is not set to 1.
Thanks so much!
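
For context, the SLURM logs earlier on this page show Cromwell submitting each call as its own sbatch job, so replicate-level parallelism appears to come from Cromwell rather than from the leader script's -n. A minimal leader-job sketch under that assumption (job name, time limit, and input path are placeholders):

#!/bin/bash
#SBATCH -J chip-leader
#SBATCH -n 1                 # the leader only coordinates; each task gets its own sbatch job
#SBATCH -t 48:00:00
caper run chip.wdl -i input.json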

cannot get the testing file to work

I git cloned the repo and cd'd into the folder. I don't have Docker or Singularity, so I have to use Conda:
bash conda/uninstall_dependencies.sh
bash conda/install_dependencies.sh
conda activate encode-chip-seq-pipeline
caper run chip.wdl -i examples/caper/ENCSR936XTK_subsampled_chr19_only.json --deepcopy

Thanks very much

metadata.txt

Failed execution with test data

debug_70.tar.gz
I've just installed the chip-seq-pipeline2. When I run the pipeline with the test data, I get an error message and the pipeline exits.

WorkflowManagerActor Workflow f7557e2f-ce80-45ca-86e4-062f8d562b37 failed (during ExecutingWorkflowState): Job chip.filter:0:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.

OS/Platform and dependencies

  • macOS High Sierra Version 10.13.6
  • cromwell-34.jar
  • Docker 2.0.0.0-mac81

Attach error logs
$ find -name stderr -not -empty | xargs tail -n +1

This command does not work. I get the following message.

find: illegal option -- n
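
(macOS ships BSD find, which requires an explicit starting path before any expression, so adding . should fix it:)

$ find . -name stderr -not -empty | xargs tail -n +1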

problem preventing xcor from running

I'm running the pipeline with one replicate and one control, and I noticed that even when I include the "disable.xcor" parameter in the .json file, the pipeline still tries to run xcor and the run fails at that step. I don't mind doing the rest of the QC manually; I was just hoping to finish the pipeline so I won't have to do each step by hand (or write my own pipeline/script). I also have the "true_reps_only" parameter set to true; do the two cancel each other out?

OS/Platform and dependencies

  • CentOS 7
  • Cromwell-34
  • conda version 4.5.11

Attach logs
debug_36.tar.gz

encode_bwa.py error

Describe the bug
I failed to follow the tutorial. It looks like the out-dir is missing for encode_bwa.py. Thanks a lot!

OS/Platform and dependencies

  • OS or Platform: CentOS 6
  • Conda version: 4.6.14
  • Caper version: 0.3.15

Attach error logs
Traceback (most recent call last):
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in <module>
    main()
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 214, in main
    bwa_index_prefix, args.nth, args.out_dir)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 98, in bwa_se
    run_shell_cmd(cmd)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=64122, PGID=64122, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
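
For what it's worth, that [bam_sort] usage message is what samtools 1.x prints when sort is invoked with the legacy positional-output syntax, so the pipeline may be picking up a different samtools than the one its environment expects. A quick way to check which binary is resolved (generic shell, nothing pipeline-specific):

$ which -a samtools        # list every samtools on PATH, in resolution order
$ samtools --version | head -n 1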

$ caper troubleshoot [WORKFLOW_ID_OR_METADATA_JSON_FILE]

[Caper] troubleshooting 3349340c-1f95-497c-8b50-9cd699c7a2ec ...
Found failures:
[
  {
    "causedBy": [
      {
        "causedBy": [],
        "message": "Job chip.bwa_R1:0:2 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details."
      },
      {
        "causedBy": [],
        "message": "Job chip.bwa_R1:1:2 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details."
      }
    ],
    "message": "Workflow failed"
  }
]

chip.bwa_R1 RetryableFailure. SHARD_IDX=0, RC=1, JOB_ID=70989, RUN_START=2019-07-29T15:08:22.205Z, RUN_END=2019-07-29T15:09:20.111Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-0/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-0/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in <module>
    main()
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 214, in main
    bwa_index_prefix, args.nth, args.out_dir)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 98, in bwa_se
    run_shell_cmd(cmd)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=71241, PGID=71241, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
[bwa_read_seq] 0.0% bases are trimmed.
[bwa_aln_core] convert to sequence coordinate... 0.28 sec
[bwa_aln_core] refine gapped alignments... 0.12 sec
[bwa_aln_core] print alignments...
STDOUT=

chip.bwa_R1 Failed. SHARD_IDX=0, RC=1, JOB_ID=71425, RUN_START=2019-07-29T15:09:22.201Z, RUN_END=2019-07-29T15:10:07.018Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-0/attempt-2/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-0/attempt-2/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in <module>
    main()
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 214, in main
    bwa_index_prefix, args.nth, args.out_dir)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 98, in bwa_se
    run_shell_cmd(cmd)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=71629, PGID=71629, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
[bwa_read_seq] 0.0% bases are trimmed.
[bwa_aln_core] convert to sequence coordinate... 0.27 sec
[bwa_aln_core] refine gapped alignments... 0.09 sec
[bwa_aln_core] print alignments...
STDOUT=

chip.bwa_R1 RetryableFailure. SHARD_IDX=1, RC=1, JOB_ID=71020, RUN_START=2019-07-29T15:08:24.207Z, RUN_END=2019-07-29T15:09:30.106Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-1/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-1/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in <module>
    main()
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 214, in main
    bwa_index_prefix, args.nth, args.out_dir)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 98, in bwa_se
    run_shell_cmd(cmd)
  File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=71285, PGID=71285, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
[bwa_read_seq] 0.0% bases are trimmed.
[bwa_aln_core] convert to sequence coordinate... 0.28 sec
[bwa_aln_core] refine gapped alignments... 0.11 sec
[bwa_aln_core] print alignments...
STDOUT=

chip.bwa_R1 Failed. SHARD_IDX=1, RC=1, JOB_ID=71496, RUN_START=2019-07-29T15:09:32.202Z, RUN_END=2019-07-29T15:10:24.828Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-1/attempt-2/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_R1/shard-1/attempt-2/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in
main()
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 214, in main
bwa_index_prefix, args.nth, args.out_dir)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 98, in bwa_se
run_shell_cmd(cmd)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
raise Exception(err_str)
Exception: PID=71866, PGID=71866, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
[bwa_read_seq] 0.0% bases are trimmed.
[bwa_aln_core] convert to sequence coordinate... 0.24 sec
[bwa_aln_core] refine gapped alignments... 0.09 sec
[bwa_aln_core] print alignments...
STDOUT=

chip.bwa RetryableFailure. SHARD_IDX=0, RC=1, JOB_ID=70956, RUN_START=2019-07-29T15:08:18.210Z, RUN_END=2019-07-29T15:12:10.107Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa/shard-0/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa/shard-0/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in
main()
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 211, in main
bwa_index_prefix, args.nth, args.use_bwa_mem_for_pe, args.out_dir)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 162, in bwa_pe
run_shell_cmd(cmd3)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
raise Exception(err_str)
Exception: PID=72252, PGID=72252, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
STDOUT=

chip.bwa RetryableFailure. SHARD_IDX=1, RC=1, JOB_ID=70923, RUN_START=2019-07-29T15:08:14.208Z, RUN_END=2019-07-29T15:12:25.104Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa/shard-1/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa/shard-1/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in
main()
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 211, in main
bwa_index_prefix, args.nth, args.use_bwa_mem_for_pe, args.out_dir)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 162, in bwa_pe
run_shell_cmd(cmd3)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
raise Exception(err_str)
Exception: PID=72499, PGID=72499, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
STDOUT=

chip.bwa_ctl RetryableFailure. SHARD_IDX=0, RC=1, JOB_ID=70893, RUN_START=2019-07-29T15:08:10.209Z, RUN_END=2019-07-29T15:12:10.107Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_ctl/shard-0/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_ctl/shard-0/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in
main()
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 211, in main
bwa_index_prefix, args.nth, args.use_bwa_mem_for_pe, args.out_dir)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 162, in bwa_pe
run_shell_cmd(cmd3)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
raise Exception(err_str)
Exception: PID=72162, PGID=72162, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
STDOUT=

chip.bwa_ctl RetryableFailure. SHARD_IDX=1, RC=1, JOB_ID=70872, RUN_START=2019-07-29T15:08:08.212Z, RUN_END=2019-07-29T15:11:25.106Z, STDOUT=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_ctl/shard-1/execution/stdout, STDERR=/data/kun/TF_pipeline/test/chip/3349340c-1f95-497c-8b50-9cd699c7a2ec/call-bwa_ctl/shard-1/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 246, in
main()
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 211, in main
bwa_index_prefix, args.nth, args.use_bwa_mem_for_pe, args.out_dir)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_bwa.py", line 162, in bwa_pe
run_shell_cmd(cmd3)
File "/data/kun/TF_pipeline/chip-seq-pipeline2/src/encode_common.py", line 252, in run_shell_cmd
raise Exception(err_str)
Exception: PID=72180, PGID=72180, RC=1
STDERR=[bam_sort] Use -T PREFIX / -o FILE to specify temporary and final output files
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
STDOUT=

Questions re. 1)IDR and 2)broad peaks

Hi guys,
It's more of a question - not an issue, hope that's ok:
I'm using your pipeline to look at some of the published ChIP results, and most are unreplicated. From what I noticed and understand is that the pipeline will in most cases still run the self-consistency IDR with pseudoreplicates and so on, but it doesn't always do that and sometimes returns just overlap (?) between replicate and pseudoreplicate without attempting IDR? Would it be possible for you to explain when one vs the other happens?
And second thing - what would you recommend in case of ChIPs that give 'broad' signals, like h3k27me3 seems to, or mixed like Pol2 apparently does? Is there a way to adjust the peak calling parameters etc?

Failure to find peak file

Now, running the pipeline on my actual data, one run stopped when it came to the IDR steps, with this specific error:

[2019-02-07 03:01:30,91] [๏ฟฝ[38;5;1merror๏ฟฝ[0m] WorkflowManagerActor Workflow 73108c41-3432-4ab2-9f3d-92b89dd0b538 failed (during ExecutingWorkflowState): Job chip.idr:0:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
Check the content of stderr for potential additional information: /net/bmc-pub9/data/boyerlab/users/kdemuren/chip-seq-pipeline2/cromwell-executions/chip/73108c41-3432-4ab2-9f3d-92b89dd0b538/call-idr/shard-0/execution/stderr.
 Traceback (most recent call last):
  File "/home/kdemuren/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_idr.py", line 169, in <module>
    main()
  File "/home/kdemuren/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_idr.py", line 144, in main
    assert_file_not_empty(bfilt_idr_peak)
  File "/net/bmc-pub9/data/boyerlab/users/kdemuren/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_common.py", line 176, in assert_file_not_empty
    raise Exception('File is empty ({}). Help: {}'.format(f,help))
Exception: File is empty (rep1-rep2.idr0.05.bfilt.narrowPeak.gz). Help: 

With the same JSON file (but different ChIP data), I've had two successful runs, again using qlogin with memory/time allocation.

Log attached:
log_CM_Rad21.txt

Error tarball:
debug_54.tar.gz

call-qc-report fails with unequal number of samples and inputs

Hello,

The pipeline runs well with 3 replicates of sample and input files, but fails at the call-qc report step in runs with 2 replicates of sample and 3 of input.

OS/Platform and dependencies

  • CentOS release 6.8
  • Conda version 4.7.10
  • Caper version 0.3.15

Error logs

Found failures:
[
    {
        "message": "Workflow failed",
        "causedBy": [
            {
                "message": "Job chip.qc_report:NA:2 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.",
                "causedBy": []
            }
        ]
    }
]

chip.qc_report RetryableFailure. SHARD_IDX=-1, RC=1, JOB_ID=96011, RUN_START=2019-08-05T17:44:38.559Z, RUN_END=2019-08-05T17:44:53.501Z, STDOUT=/gpfs2/well/mccarthy/production/chip-seq/data/Islet_diff_2016/merged/trimmed/BLC/chip/5ece5f92-6320-459b-90b6-d522fc61871f/call-qc_report/execution/stdout, STDERR=/gpfs2/well/mccarthy/production/chip-seq/data/Islet_diff_2016/merged/trimmed/BLC/chip/5ece5f92-6320-459b-90b6-d522fc61871f/call-qc_report/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
  File "/well/mccarthy/production/chip-seq/dependencies/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_qc_report.py", line 633, in <module>
    main()
  File "/well/mccarthy/production/chip-seq/dependencies/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_qc_report.py", line 301, in main
    row_header.extend(get_ctl_labels(json_objs, args.ctl_paired_ends))
  File "/well/mccarthy/production/chip-seq/dependencies/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_qc_report.py", line 244, in get_ctl_labels
    if paired_ends[i]:
IndexError: list index out of range


chip.qc_report Failed. SHARD_IDX=-1, RC=1, JOB_ID=96048, RUN_START=2019-08-05T17:44:54.555Z, RUN_END=2019-08-05T17:45:07.690Z, STDOUT=/gpfs2/well/mccarthy/production/chip-seq/data/Islet_diff_2016/merged/trimmed/BLC/chip/5ece5f92-6320-459b-90b6-d522fc61871f/call-qc_report/attempt-2/execution/stdout, STDERR=/gpfs2/well/mccarthy/production/chip-seq/data/Islet_diff_2016/merged/trimmed/BLC/chip/5ece5f92-6320-459b-90b6-d522fc61871f/call-qc_report/attempt-2/execution/stderr
STDERR_CONTENTS=
Traceback (most recent call last):
  File "/well/mccarthy/production/chip-seq/dependencies/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_qc_report.py", line 633, in <module>
    main()
  File "/well/mccarthy/production/chip-seq/dependencies/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_qc_report.py", line 301, in main
    row_header.extend(get_ctl_labels(json_objs, args.ctl_paired_ends))
  File "/well/mccarthy/production/chip-seq/dependencies/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_qc_report.py", line 244, in get_ctl_labels
    if paired_ends[i]:
IndexError: list index out of range

Thank you for your work and your help.
Attached are the log tar files (the GOOD one is an example of successful run, while the other two threw the error described).
debug_ISSUE_83_GOOD.tar.gz
debug_ISSUE_83_B.tar.gz
debug_ISSUE_83_A.tar.gz

Job chip.macs2_ppr2:NA:1 failed

I encountered the following error during the call-macs2-ppr2 step.

==> ./call-macs2_ppr2/execution/stderr <==
Traceback (most recent call last):
File "/rbc/yixiang/anaconda2/envs/encode-chip-seq-pipeline/bin/encode_macs2_chip.py", line 223, in
main()
File "/rbc/yixiang/anaconda2/envs/encode-chip-seq-pipeline/bin/encode_macs2_chip.py", line 215, in main
args.chrsz, args.fraglen, args.out_dir)
File "/rbc/yixiang/anaconda2/envs/encode-chip-seq-pipeline/bin/encode_frip.py", line 84, in frip_shifted
write_txt(frip_qc, str(float(val1)/float(val2)))
ValueError: could not convert string to float: Differing number of BED fields encountered at line: 40330138. Exiting...
5740102

==> ./call-macs2_pr2/shard-1/execution/stderr <==
Traceback (most recent call last):
File "/rbc/yixiang/anaconda2/envs/encode-chip-seq-pipeline/bin/encode_macs2_chip.py", line 223, in
main()
File "/rbc/yixiang/anaconda2/envs/encode-chip-seq-pipeline/bin/encode_macs2_chip.py", line 215, in main
args.chrsz, args.fraglen, args.out_dir)
File "/rbc/yixiang/anaconda2/envs/encode-chip-seq-pipeline/bin/encode_frip.py", line 84, in frip_shifted
write_txt(frip_qc, str(float(val1)/float(val2)))
ValueError: could not convert string to float: Differing number of BED fields encountered at line: 17163938. Exiting...
2522395

OS/Platform and dependencies

  • Cromwell/dxWDL version: cromwell-34.jar
  • Conda version: 4.3.30

debug_49.tar.gz

Error with Chip-Seq Singularity Pipeline at Macs2 Peak Calling

During MACS2 processing with Singularity Chip-Seq pipeline, the pipeline throws up an error. I Tried but couldn't identify the origin of the error.

Can you help me? What am I missing?

I appreciate a lot thanks!

singularity --version
2.6.1-dist

Input.json:

{
    "chip.genome_tsv" : "mm10.tsv",
    "chip.paired_end" : false,

    "chip.fastqs_rep1_R1" : [ "ChIP_R1.fastq.gz"],
    "chip.ctl_fastqs_rep1_R1" : [ "ChIP_Input_R1.fastq.gz" ],

    "chip.title" : "ChIP",
    "chip.description" : "ChIP Project",

    "chip.pipeline_type" : "tf",
    "chip.peak_caller" : "spp",

    "chip.align_only" : false,
    "chip.true_rep_only" : false,

    "chip.disable_fingerprint" : false,
    "chip.enable_count_signal_track" : false,

    "chip.xcor_pe_trim_bp" : 50,

    "chip.dup_marker" : "picard",
    "chip.mapq_thresh" : 30,
    "chip.no_dup_removal" : false,

    "chip.mito_chr_name" : "chrM",
    "chip.regex_filter_reads" : "chrM",
    "chip.subsample_reads" : 0,
    "chip.ctl_subsample_reads" : 0,
    "chip.xcor_subsample_reads" : 15000000,

    "chip.keep_irregular_chr_in_bfilt_peak" : false,
    
    "chip.always_use_pooled_ctl" : true,
    "chip.ctl_depth_ratio" : 1.2,

    "chip.macs2_cap_num_peak" : 500000,
    "chip.pval_thresh" : 0.01,
    "chip.idr_thresh" : 0.05,
    "chip.spp_cap_num_peak" : 300000,

    "chip.bwa_cpu" : 12,
    "chip.bwa_mem_mb" : 200000,
    "chip.bwa_time_hr" : 480,
    "chip.bwa_disks" : "local-disk 100 HDD",

    "chip.filter_cpu" : 12,
    "chip.filter_mem_mb" : 20000,
    "chip.filter_time_hr" : 24,
    "chip.filter_disks" : "local-disk 100 HDD",

    "chip.bam2ta_cpu" : 12,
    "chip.bam2ta_mem_mb" : 100000,
    "chip.bam2ta_time_hr" : 6,
    "chip.bam2ta_disks" : "local-disk 100 HDD",

    "chip.spr_mem_mb" : 16000,

    "chip.fingerprint_cpu" : 12,
    "chip.fingerprint_mem_mb" : 12000,
    "chip.fingerprint_time_hr" : 6,
    "chip.fingerprint_disks" : "local-disk 100 HDD",

    "chip.xcor_cpu" : 12,
    "chip.xcor_mem_mb" : 16000,
    "chip.xcor_time_hr" : 24,
    "chip.xcor_disks" : "local-disk 100 HDD",

    "chip.macs2_mem_mb" : 16000,
    "chip.macs2_time_hr" : 24,
    "chip.macs2_disks" : "local-disk 100 HDD",

    "chip.spp_cpu" : 12,
    "chip.spp_mem_mb" : 16000,
    "chip.spp_time_hr" : 72,
    "chip.spp_disks" : "local-disk 100 HDD"
}
find -name stderr -not -empty | xargs tail -n +1
Traceback (most recent call last):
  File "/software/chip-seq-pipeline/src/encode_macs2_chip.py", line 225, in <module>
    main()
  File "/software/chip-seq-pipeline/src/encode_macs2_chip.py", line 200, in main
    args.out_dir)
  File "/software/chip-seq-pipeline/src/encode_macs2_chip.py", line 88, in macs2
    run_shell_cmd(cmd0)
  File "/software/chip-seq-pipeline/src/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=28849, PGID=28849, RC=1
STDERR=ERROR:root:--extsize must >= 1!
STDOUT=

Mix single-end and paired-end

Can this pipeline handle a mixture of single-end and paired-end replicates? If so, what value should I put for "chip.paired_end" : true/false in the .json?

Error in Test Due to Failure to Find Genome Data File

Describe the bug
I tried running the test and it failed. I looked through the output from slurm and found that a localization via a hard link had failed. It could not find the following file: chip-seq-pipeline2/test_genome_database/hg38_chr19_chrM/hg38.chrom.sizes. I went into the chip-seq-pipeline2/test_genome_database/hg38_chr19_chrM directory, and the file did not exist. I did see a file with this name: hg38_chr19_chrM.chrom.sizes.

OS/Platform and dependencies

  • OS or Platform: Linux cluster with Ubuntu and Slurm
  • Cromwell/dxWDL version: cromwell-34.jar
  • Conda version: 4.5.11

Attach error logs
For Cromwell users only.
Run the following command line to print all non-empty STDERR outputs. This will be greatly helpful for developers to figure out the problem. Copy-paste its output to the issue page.

$ find -name stderr -not -empty | xargs tail -n +1

This did not return anything.

Failed to access โ€˜*.pr1.tagAlign.gzโ€™

I received the following error message on two separate runs of the TF pipeline (first with MACS2 peak calling on 3 WT replicate PE FASTQs, second with spp peak calling on 3 WT and 3 control PE FASTQs):

[2018-11-12 16:41:57,70] [error] WorkflowManagerActor Workflow 3466d7a1-80d8-4d4a-b8ac-41f3fa027979 failed (during ExecutingWorkflowState): Job chip.spr:1:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.

which arises from:

ln: failed to access โ€˜*.pr1.tagAlign.gzโ€™: No such file or directory
ln: failed to access โ€˜*.pr2.tagAlign.gzโ€™: No such file or directory

Any help would be appreciated.

Details:
I installed AQUAS on a SGE cluster.
Cromwell: cromwell 34-unknown-SNAP
Conda: conda 4.3.30

Logs: debug_37.tar.gz

Failed to access โ€˜*.jsd.qcโ€™ (ENCSR936XTK test sample)

When attempting to run AQUAS on the test sample datasets (ENCSR936XTK FASTQs), I receive:

[2018-09-13 15:32:39,95] [error] WorkflowManagerActor Workflow 0846c6f1-989d-4622-8512-b1aebfb5ea97 failed (during ExecutingWorkflowState): Job chip.fingerprint:NA:1 exited with return code 1 which has not been declared as a valid return code.

which seems to arise from

ln: failed to access โ€˜*.jsd.qcโ€™: No such file or directory.

Any suggestions would be greatly appreciated. Thank you!

Details:
I installed AQUAS on a SGE cluster.
Cromwell: cromwell-34
Conda: conda 4.3.30

debug_22.tar.gz

Problems running on filesystems that do not allow hard-links

I'm running on a filesystem (beegfs) that does not allow hardlinks between different directories, and encountered the same problem as #63

Seems this is related to encode_merge_fastq.py using a hardlink when there is only one file to merge. I have successfully run the pipeline off a non-beeGFS filesystem

When I edit encode_merge_fastq.py to avoid using hard links even when 'merging' a single file, I am able to generate the expected fastq.gz files (/call-merge_fastq/shard-0/execution/R1/rep1-R1.subsampled.67.merged.fastq.gz), whereas I wasn't able to before. However, the pipeline still stalls with the following error

[error] WorkflowManagerActor Workflow f3c071d7-2d66-4047-b961-e6e9cfdaeec2 failed (during ExecutingWorkflowState): cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'merge_fastq_ctl.merged_fastq_R1': Failed to find index Success(WomInteger(0)) on array:
Success([])
0

It seems the merged fastq.gz is being created, but for some reason not found?

output.txt

bwa mem compatible?

This is not an issue but a inquire. Is this version officially compatible with "bwa mem"? Or is it still experimental? How should I specify the parameters to turn on the "bwa mem" in the pipeline? Thanks.

Unmatched control

Is there a way to use controls without specifying the pairing? For example, if I have three biological "chip.fastqs" replicates and only one control "chip.ctl_fastqs" replicate, can I avoid arbitrarily matching the control or must I assign it to one of "chip.ctl_fastqs_rep1", "chip.ctl_fastqs_rep2", or "chip.ctl_fastqs_rep3"? I assume that by assigning it, there will be differential peak calling (in the MACS2 context) for the specified pair only?

Failed to find index Success(WomInteger(0))

2018-07-08 17:12:05,62] [info] Running with database db.url = jdbc:hsqldb:mem:b928854f-1b26-4f74-8a45-e72f73b30968;shutdown=false;hsqldb.tx=mvcc
[2018-07-08 17:12:11,01] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2018-07-08 17:12:11,02] [info] [RenameWorkflowOptionsInMetadata] 100%
[2018-07-08 17:12:11,10] [info] Running with database db.url = jdbc:hsqldb:mem:b55a6427-b864-4e72-9858-1211b6533178;shutdown=false;hsqldb.tx=mvcc
[2018-07-08 17:12:11,42] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines .v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2018-07-08 17:12:11,42] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2018-07-08 17:12:11,43] [info] Using noop to send events.
[2018-07-08 17:12:11,69] [info] Slf4jLogger started
[2018-07-08 17:12:11,87] [info] Workflow heartbeat configuration:
{
"cromwellId" : "cromid-df3d320",
"heartbeatInterval" : "2 minutes",
"ttl" : "10 minutes",
"writeBatchSize" : 10000,
"writeThreshold" : 10000
}
[2018-07-08 17:12:11,91] [info] Metadata summary refreshing every 2 seconds.
[2018-07-08 17:12:11,95] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2018-07-08 17:12:11,95] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2018-07-08 17:12:11,95] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2018-07-08 17:12:12,71] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2018-07-08 17:12:12,73] [info] JES batch polling interval is 33333 milliseconds
[2018-07-08 17:12:12,73] [info] JES batch polling interval is 33333 milliseconds
[2018-07-08 17:12:12,73] [info] JES batch polling interval is 33333 milliseconds
[2018-07-08 17:12:12,73] [info] PAPIQueryManager Running with 3 workers
[2018-07-08 17:12:12,74] [info] SingleWorkflowRunnerActor: Submitting workflow
[2018-07-08 17:12:12,78] [info] Unspecified type (Unspecified version) workflow 068f45d8-f29a-4335-8c0e-a711391af811 submitted
[2018-07-08 17:12:12,83] [info] SingleWorkflowRunnerActor: Workflow submitted 068f45d8-f29a-4335-8c0e-a711391af811
[2018-07-08 17:12:12,83] [info] 1 new workflows fetched
[2018-07-08 17:12:12,83] [info] WorkflowManagerActor Starting workflow 068f45d8-f29a-4335-8c0e-a711391af811
[2018-07-08 17:12:12,84] [info] WorkflowManagerActor Successfully started WorkflowActor-068f45d8-f29a-4335-8c0e-a711391af811
[2018-07-08 17:12:12,84] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2018-07-08 17:12:12,84] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2018-07-08 17:12:12,85] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2018-07-08 17:12:12,89] [info] MaterializeWorkflowDescriptorActor [068f45d8]: Parsing workflow as WDL draft-2
[2018-07-08 17:13:26,22] [info] MaterializeWorkflowDescriptorActor [068f45d8]: Call-to-Backend assignments: chip.macs2 -> Local, chip.bam2ta_ctl -> Local, chip.spp_ppr2 -> Local, chip.bwa -> Local, chip .qc_report -> Local, chip.bwa_ctl -> Local, chip.filter -> Local, chip.overlap -> Local, chip.pool_ta -> Local, chip.idr_ppr -> Local, chip.filter_ctl -> Local, chip.macs2_pr2 -> Local, chip.spp_pr1 -> Local, chip.read_genome_tsv -> Local, chip.merge_fastq_ctl -> Local, chip.macs2_ppr1 -> Local, chip.pool_ta_pr2 -> Local, chip.trim_fastq -> Local, chip.overlap_ppr -> Local, chip.bam2ta -> Local, chip. pool_ta_ctl -> Local, chip.reproducibility_idr -> Local, chip.spp_pooled -> Local, chip.fraglen_mean -> Local, chip.spr -> Local, chip.bam2ta_no_filt -> Local, chip.macs2_pr1 -> Local, chip.fingerprint -> Local, chip.xcor -> Local, chip.spp_pr2 -> Local, chip.spp -> Local, chip.merge_fastq -> Local, chip.bam2ta_no_filt_R1 -> Local, chip.reproducibility_overlap -> Local, chip.overlap_pr -> Local, chip. idr -> Local, chip.macs2_ppr2 -> Local, chip.bwa_R1 -> Local, chip.spp_ppr1 -> Local, chip.macs2_pooled -> Local, chip.idr_pr -> Local, chip.choose_ctl -> Local, chip.pool_ta_pr1 -> Local
[2018-07-08 17:13:26,34] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:28,66] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.read_genome_tsv
[2018-07-08 17:13:28,67] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: '!align_only && !true_rep_only'. Running conditional section
[2018-07-08 17:13:28,68] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: '!true_rep_only'. Running conditional section
[2018-07-08 17:13:28,87] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 17:13:29,34] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: echo "Reading genome_tsv /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068 f45d8-f29a-4335-8c0e-a711391af811/call-read_genome_tsv/inputs/1532045310/mm10.tsv ..."
[2018-07-08 17:13:29,46] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d 8-f29a-4335-8c0e-a711391af811/call-read_genome_tsv/execution/script
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition NOT met: 'peak_caller_ == "macs2"'. Bypassing conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'enable_idr'. Running conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'enable_idr'. Running conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'peak_caller_ == "spp"'. Running conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'peak_caller_ == "spp"'. Running conditional section
[2018-07-08 17:13:29,71] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: '!align_only && !true_rep_only && enable_idr'. Running conditional section
[2018-07-08 17:13:31,99] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: job id: 18903
[2018-07-08 17:13:32,00] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: Status change from - to Done
[2018-07-08 17:13:32,80] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.merge_fastq_ctl, chip.merge_fastq
[2018-07-08 17:13:33,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 17:13:33,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: python $(which encode_merge_fastq.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-merge_fastq_ctl/shard-0/execution/write_tsv_609523603b8830d2bf2d45a4a71d8dd7.tmp

--nth 2
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: python $(which encode_merge_fastq.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-merge_fastq/shard-0/execution/write_tsv_577d047f7bb8a8ea8c6ee89ee3d97c7b.tmp

--nth 2
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29 a-4335-8c0e-a711391af811/call-merge_fastq/shard-0/execution/script
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8 -f29a-4335-8c0e-a711391af811/call-merge_fastq_ctl/shard-0/execution/script
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: job id: 18944
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: job id: 18946
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: Status change from - to Done
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: Status change from - to Done
[2018-07-08 17:13:38,92] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.bwa, chip.bwa_ctl
[2018-07-08 17:13:39,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: Unrecognized runtime attribute keys: preemptible, disks, cpu, time, memory
[2018-07-08 17:13:39,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: Unrecognized runtime attribute keys: preemptible, disks, cpu, time, memory
[2018-07-08 17:13:39,76] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: python $(which encode_bwa.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa/shard-0/inputs/1424220334/mm10_no_alt_analysis_set_ENCODE.fasta.tar
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa/shard-0/inputs/-1993312639/merge_fastqs_R1_RYBP.merged.fastq.gz

--nth 4
[2018-07-08 17:13:39,76] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: python $(which encode_bwa.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa_ctl/shard-0/inputs/1424220334/mm10_no_alt_analysis_set_ENCODE.fasta.tar
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa_ctl/shard-0/inputs/1654728541/merge_fastqs_R1_IgG.merged.fastq.gz

--nth 4
[2018-07-08 17:13:39,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-43 35-8c0e-a711391af811/call-bwa_ctl/shard-0/execution/script
[2018-07-08 17:13:39,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8 c0e-a711391af811/call-bwa/shard-0/execution/script
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: job id: 19040
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: job id: 19041
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:03:30,01] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:03:35,69] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.filter_ctl
[2018-07-08 18:03:35,73] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 18:03:35,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: python $(which encode_filter.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter_ctl/shard-0/inputs/1613485654/IgG.merged.bam


--dup-marker picard
--mapq-thresh 30
\

ugly part to deal with optional outputs with Google JES backend

touch null
[2018-07-08 18:03:35,88] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a -4335-8c0e-a711391af811/call-filter_ctl/shard-0/execution/script
[2018-07-08 18:03:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: job id: 1170
[2018-07-08 18:03:36,99] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:07:57,26] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:08:02,93] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.bam2ta_no_filt, chip.filter
[2018-07-08 18:08:03,73] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 18:08:03,73] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 18:08:03,74] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: python $(which encode_bam2ta.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bam2ta_no_filt/shard-0/inputs/600163450/RYBP.merged.bam

--disable-tn5-shift
--regex-grep-v-ta 'chrM'
--subsample 0
[2018-07-08 18:08:03,74] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8- f29a-4335-8c0e-a711391af811/call-bam2ta_no_filt/shard-0/execution/script
[2018-07-08 18:08:03,76] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: python $(which encode_filter.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter/shard-0/inputs/600163450/RYBP.merged.bam


--dup-marker picard
--mapq-thresh 30
\

ugly part to deal with optional outputs with Google JES backend

touch null
[2018-07-08 18:08:03,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-433 5-8c0e-a711391af811/call-filter/shard-0/execution/script
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: job id: 2502
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: job id: 2516
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:10:26,92] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:11:32,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0: 1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:16:25,24] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:16:26,14] [error] WorkflowManagerActor Workflow 068f45d8-f29a-4335-8c0e-a711391af81 1 failed (during ExecutingWorkflowState): Failed to evaluate job outputs:
Bad output 'filter_ctl.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'filter_ctl.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1 (StandardAsyncExecutionActor.scala:786)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfig urator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Failed to evaluate job outputs:
Bad output 'filter.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'filter.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1 (StandardAsyncExecutionActor.scala:786)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfig urator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[2018-07-08 18:16:26,14] [info] WorkflowManagerActor WorkflowActor-068f45d8-f29a-4335-8c0e-a711391 af811 is in a terminal state: WorkflowFailedState
[2018-07-08 18:17:28,68] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
[2018-07-08 18:17:31,99] [info] Workflow polling stopped
[2018-07-08 18:17:32,12] [info] Shutting down WorkflowStoreActor - Timeout = 5 seconds
[2018-07-08 18:17:32,12] [info] Shutting down WorkflowLogCopyRouter - Timeout = 5 seconds
[2018-07-08 18:17:32,15] [info] Shutting down JobExecutionTokenDispenser - Timeout = 5 seconds
[2018-07-08 18:17:32,17] [info] Aborting all running workflows.
[2018-07-08 18:17:32,19] [info] JobExecutionTokenDispenser stopped
[2018-07-08 18:17:32,19] [info] WorkflowStoreActor stopped
[2018-07-08 18:17:32,24] [info] WorkflowLogCopyRouter stopped
[2018-07-08 18:17:32,24] [info] Shutting down WorkflowManagerActor - Timeout = 3600 seconds
[2018-07-08 18:17:32,24] [info] WorkflowManagerActor stopped
[2018-07-08 18:17:32,24] [info] WorkflowManagerActor All workflows finished
[2018-07-08 18:17:32,24] [info] Connection pools shut down
[2018-07-08 18:17:32,24] [info] Shutting down SubWorkflowStoreActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down JobStoreActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down CallCacheWriteActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] SubWorkflowStoreActor stopped
[2018-07-08 18:17:32,24] [info] Shutting down ServiceRegistryActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down DockerHashActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down IoProxy - Timeout = 1800 seconds
[2018-07-08 18:17:32,27] [info] DockerHashActor stopped
[2018-07-08 18:17:32,27] [info] IoProxy stopped
[2018-07-08 18:17:32,30] [info] KvWriteActor Shutting down: 0 queued messages to process
[2018-07-08 18:17:32,30] [info] WriteMetadataActor Shutting down: 0 queued messages to process
[2018-07-08 18:17:32,30] [info] CallCacheWriteActor Shutting down: 0 queued messages to process
[2018-07-08 18:17:32,30] [info] CallCacheWriteActor stopped
[2018-07-08 18:17:32,33] [info] JobStoreActor stopped
[2018-07-08 18:17:32,34] [info] ServiceRegistryActor stopped
[2018-07-08 18:17:32,36] [info] Database closed
[2018-07-08 18:17:32,36] [info] Stream materializer shut down
Workflow 068f45d8-f29a-4335-8c0e-a711391af811 transitioned to state Failed
[2018-07-08 18:17:32,46] [info] Automatic shutdown of the async connection
[2018-07-08 18:17:32,46] [info] Gracefully shutdown sentry threads.
[2018-07-08 18:17:32,46] [info] Shutdown finished.

Error during running the test data

OS or Platform: CentOS Linux release 7.4.1708
Cromwell/dxWDL version: cromwell-34.jar
Conda version: conda 4.3.30

I am trying to run the chip-seq pipeline on the test data (ENCSR936XTK) as per the SGE installation guidelines. I get the same error for

[2018-09-20 14:56:32,34] [error] WorkflowManagerActor Workflow 6e1aee94-5c4d-451d-99c3-5ae5990a7548 failed (during ExecutingWorkflowState): Job chip.xcor:0:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.

which is related to this

Traceback (most recent call last): File "~/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_common.py", line 224, in run_shell_cmd p.returncode, cmd) subprocess.CalledProcessError: Command 'Rscript --max-ppsize=500000 $(which run_spp.R) -rf -c=rep2-R1.subsampled.67.merged.trim_50bp.no_chrM.15.0M.tagAlign.gz -p=2 -filtchr=chrM -savp=rep2-R1.subsampled.67.merged.trim_50bp.no_chrM.15.0M.cc.plot.pdf -out=rep2-R1.subsampled.67.merged.trim_50bp.no_chrM.15.0M.cc.qc ' returned non-zero exit status 1

I have attached the debug files here.
debug_22.tar.gz

I am working on SGE and ran the command using cromwell-34.jar.

Originally posted by @shanmukhasampath in #22 (comment)

split into subworkflows

As you try to make it work with any input time possible the chip.wdl is a big mess and hard to deal with, I suggest that you make chip.wdl as a wrapper that deals with different types of intputs and then redirects execution to proper subworkflows depending on input types.

Pipeline crashes because fragment length is negative

Hi,
Thank you for great pipeline! The version I am working with is 1.2.2. Some of my pipelines crash on the macs2 step (with error --extsize must >= 1!). I have investigated this and it turned out that that the fragment length estimated by the xcor is negative (-420):

...
Control data: NA 
strandshift(min): -500 
strandshift(step): 5 
strandshift(max) 1500 
user-defined peak shift NA 
exclusion(min): -500 
exclusion(max): 100 
num parallel nodes: 4 
FDR threshold: 0.01 
NumPeaks Threshold: NA 
Output Directory: . 
narrowPeak output file name: NA 
regionPeak output file name: NA 
Rdata filename: NA 
...
Top 3 cross-correlation values 0.0117837272740937,0.0117782782163765,0.011773314474586 
Top 3 estimates for fragment length -420,1450,1310
Window half size 1475 

I noticed that this was previously reported and it was addressed in version 1.2.0:

exclusion range for cross-correlation analysis
adding -x=min:max to run_spp.R in xcor
this will prevent xcor from estimating a wrong (negative) fragment length

So I modified my input file and added "chip.xcor_exclusion_range_min": 10. However, the result was the same.

So my question is: how can I prevent these negative values coming from xcor? I might not have understood the purpose of the parameter, so I could have used it wrongly. In #71 there is a possible solution, but I do not want to hard-code the fragment length for all my pipeline runs, because I would prefer it to be calculated. As far as I understand, if I provide the fraglen myself, xcor is skipped. It would be nice if, when xcor fails, the pipeline fell back to a user-provided fraglen. Otherwise I need to launch the pipeline a second time, which is a waste of time and resources.
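
For reference, both knobs discussed here live in the input JSON. A minimal sketch, assuming your pipeline version exposes these keys (the values are illustrative; chip.fraglen takes one entry per replicate and, when set, overrides the xcor-based estimate):

{
    "chip.xcor_exclusion_range_min": 10,
    "chip.xcor_exclusion_range_max": 600,
    "chip.fraglen": [200, 200]
}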

Also, macs2 as a peak caller is not of interest to me, and the pipeline could continue even without it.

Thanks

no encode_merge_fastq.py in PATH

There appears to be something missing in v1.2.2. When we run "caper run chip.wdl -i examples/local/ENCSR936XTK_subsampled_chr19_only.json" from the git clone directory, it fails with the above error. Is this code that should be in the Singularity image?

If I try to fix this by adding the "src" directory (which contains that script as well as others) to PATH, then I get a different error.

"ENCODE DCC reproducibility QC.: error: argument --peaks-pr: expected at least one argument"

This is using the Singularity/SGE method, which worked fine for v1.1.6.
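
For reference, the PATH workaround attempted above amounts to roughly the following, run from the git clone directory (a sketch of the reported attempt, not a supported fix):

# expose the pipeline's Python scripts (including encode_merge_fastq.py) to tasks
$ export PATH="$(pwd)/src:$PATH"
$ caper run chip.wdl -i examples/local/ENCSR936XTK_subsampled_chr19_only.json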

[error] Error parsing generated wdl:

I ran into issues when running the test samples.

(encode-chip-seq-pipeline) [ye.liu@submit chip-seq-pipeline2]$ java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run chip.wdl -i ${INPUT} -o workflow_opts/singularity.json -m ${PIPELINE_METADATA}
[2019-04-16 16:17:55,29] [info] Running with database db.url = jdbc:hsqldb:mem:56b66269-54d2-4a0e-af42-3e525d21846c;shutdown=false;hsqldb.tx=mvcc
[2019-04-16 16:18:04,19] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-16 16:18:04,20] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-16 16:18:04,29] [info] Running with database db.url = jdbc:hsqldb:mem:079abec0-ecbf-460b-972f-53d84fcb434c;shutdown=false;hsqldb.tx=mvcc
[2019-04-16 16:18:04,66] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2019-04-16 16:18:04,69] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-16 16:18:04,69] [info] Using noop to send events.
[2019-04-16 16:18:05,00] [info] Slf4jLogger started
[2019-04-16 16:18:05,25] [info] Workflow heartbeat configuration:
{
"cromwellId" : "cromid-4749fc3",
"heartbeatInterval" : "2 minutes",
"ttl" : "10 minutes",
"writeBatchSize" : 10000,
"writeThreshold" : 10000
}
[2019-04-16 16:18:05,29] [info] Metadata summary refreshing every 2 seconds.
[2019-04-16 16:18:05,36] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-16 16:18:05,36] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-16 16:18:05,36] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-16 16:18:06,17] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-16 16:18:06,19] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-16 16:18:06,19] [info] JES batch polling interval is 33333 milliseconds
[2019-04-16 16:18:06,19] [info] JES batch polling interval is 33333 milliseconds
[2019-04-16 16:18:06,19] [info] JES batch polling interval is 33333 milliseconds
[2019-04-16 16:18:06,19] [info] PAPIQueryManager Running with 3 workers
[2019-04-16 16:18:06,20] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-16 16:18:06,26] [info] Unspecified type (Unspecified version) workflow 7f0b27fc-a2c9-42c2-b9ea-2cd6517162c3 submitted
[2019-04-16 16:18:06,31] [info] SingleWorkflowRunnerActor: Workflow submitted 7f0b27fc-a2c9-42c2-b9ea-2cd6517162c3
[2019-04-16 16:18:06,32] [info] 1 new workflows fetched
[2019-04-16 16:18:06,32] [info] WorkflowManagerActor Starting workflow 7f0b27fc-a2c9-42c2-b9ea-2cd6517162c3
[2019-04-16 16:18:06,33] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-16 16:18:06,33] [info] WorkflowManagerActor Successfully started WorkflowActor-7f0b27fc-a2c9-42c2-b9ea-2cd6517162c3
[2019-04-16 16:18:06,33] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-16 16:18:06,34] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-16 16:18:06,41] [info] MaterializeWorkflowDescriptorActor [7f0b27fc]: Parsing workflow as WDL draft-2
[2019-04-16 16:19:25,60] [info] MaterializeWorkflowDescriptorActor [7f0b27fc]: Call-to-Backend assignments: chip.idr -> local, chip.bam2ta_no_filt -> local, chip.pool_ta_pr2 -> local, chip.spp_ppr2 -> local, chip.overlap -> local, chip.idr_pr -> local, chip.macs2_ppr2 -> local, chip.bam2ta_no_filt_R1 -> local, chip.overlap_ppr -> local, chip.fingerprint -> local, chip.macs2_pr2 -> local, chip.choose_ctl -> local, chip.bam2ta -> local, chip.bwa -> local, chip.merge_fastq_ctl -> local, chip.macs2 -> local, chip.idr_ppr -> local, chip.pool_ta -> local, chip.macs2_pooled -> local, chip.filter_ctl -> local, chip.read_genome_tsv -> local, chip.bwa_R1 -> local, chip.spp_pr2 -> local, chip.spp -> local, chip.spp_pooled -> local, chip.filter -> local, chip.xcor -> local, chip.macs2_ppr1 -> local, chip.bwa_ctl -> local, chip.macs2_pr1 -> local, chip.spr -> local, chip.qc_report -> local, chip.spp_pr1 -> local, chip.reproducibility_idr -> local, chip.spp_ppr1 -> local, chip.fraglen_mean -> local, chip.overlap_pr -> local, chip.bam2ta_ctl -> local, chip.reproducibility_overlap -> local, chip.trim_fastq -> local, chip.merge_fastq -> local, chip.pool_ta_ctl -> local, chip.pool_ta_pr1 -> local
[2019-04-16 16:19:25,66] [error] Error parsing generated wdl:

java.lang.RuntimeException: Error parsing generated wdl:

at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:55)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace$lzycompute(ConfigInitializationActor.scala:39)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace(ConfigInitializationActor.scala:39)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations$lzycompute(ConfigInitializationActor.scala:42)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations(ConfigInitializationActor.scala:41)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder$lzycompute(ConfigInitializationActor.scala:53)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder(ConfigInitializationActor.scala:52)
at cromwell.backend.standard.StandardInitializationActor.coerceDefaultRuntimeAttributes(StandardInitializationActor.scala:82)
at cromwell.backend.BackendWorkflowInitializationActor.initSequence(BackendWorkflowInitializationActor.scala:154)
at cromwell.backend.BackendWorkflowInitializationActor.initSequence$(BackendWorkflowInitializationActor.scala:152)
at cromwell.backend.standard.StandardInitializationActor.initSequence(StandardInitializationActor.scala:44)
at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.$anonfun$applyOrElse$1(BackendWorkflowInitializationActor.scala:145)
at cromwell.backend.BackendLifecycleActor.performActionThenRespond(BackendLifecycleActor.scala:44)
at cromwell.backend.BackendLifecycleActor.performActionThenRespond$(BackendLifecycleActor.scala:40)
at cromwell.backend.standard.StandardInitializationActor.performActionThenRespond(StandardInitializationActor.scala:44)
at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.applyOrElse(BackendWorkflowInitializationActor.scala:145)
at akka.actor.Actor.aroundReceive(Actor.scala:517)
at akka.actor.Actor.aroundReceive$(Actor.scala:515)
at cromwell.backend.standard.StandardInitializationActor.aroundReceive(StandardInitializationActor.scala:44)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
at akka.actor.ActorCell.invoke(ActorCell.scala:557)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Caused by: java.lang.NullPointerException: null
at wdl.draft2.model.WdlNamespace$.apply(WdlNamespace.scala:196)
at wdl.draft2.model.WdlNamespace$.$anonfun$load$1(WdlNamespace.scala:160)
at scala.util.Try$.apply(Try.scala:209)
at wdl.draft2.model.WdlNamespace$.load(WdlNamespace.scala:160)
at wdl.draft2.model.WdlNamespace$.loadUsingSource(WdlNamespace.scala:156)
at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:53)
... 27 common frames omitted

Error Using Singularity

Even with SINGULARITY_BINDPATH set correctly and the necessary folders added, the pipeline breaks during xcor. The error message claims that it can't find mm10.blacklist.bed.gz, although all other files were located. I tried adding an extra path, but it didn't work.

singularity --version
2.6.1-dist

Traceback (most recent call last):
  File "/software/chip-seq-pipeline/src/encode_fingerprint.py", line 109, in <module>
    main()
  File "/software/chip-seq-pipeline/src/encode_fingerprint.py", line 101, in main
    args.bams, args.ctl_bam, args.blacklist, args.nth, args.out_dir)
  File "/software/chip-seq-pipeline/src/encode_fingerprint.py", line 76, in fingerprint
    run_shell_cmd(cmd)
  File "/software/chip-seq-pipeline/src/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=25419, PGID=25419, RC=1
STDERR=/usr/local/lib/python2.7/dist-packages/matplotlib-1.5.1-py2.7-linux-x86_64.egg/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Traceback (most recent call last):
  File "/usr/local/bin/plotFingerprint", line 11, in <module>
    main(args)
  File "/usr/local/lib/python2.7/dist-packages/deeptools/plotFingerprint.py", line 376, in main
    num_reads_per_bin = cr.run()
  File "/usr/local/lib/python2.7/dist-packages/deeptools/countReadsPerBin.py", line 318, in run
    transcript_id_designator=transcript_id_designator)
  File "/usr/local/lib/python2.7/dist-packages/deeptools/mapReduce.py", line 88, in mapReduce
    blackList = GTF(blackListFileName)
  File "/usr/local/lib/python2.7/dist-packages/deeptoolsintervals/parse.py", line 584, in __init__
    fp = openPossiblyCompressed(fname)
  File "/usr/local/lib/python2.7/dist-packages/deeptoolsintervals/parse.py", line 102, in openPossiblyCompressed
    with open(fname, "rb") as f:
IOError: [Errno 2] No such file or directory: '/gamma_data1/trezende/Mariy/FirstBatch/raw/ChIP/cromwell-executions/chip/c9a3a637-0769-48f4-b338-d5f5dc7776ea/call-fingerprint/inputs/961135001/mm10.blacklist.bed.gz'
STDOUT=

Thanks for your help,

Tiago
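
For reference, bind paths are usually handed to the container through an environment variable before launching the run. A sketch, where /path/to/genome_data is a hypothetical placeholder for whichever directory actually holds mm10.blacklist.bed.gz:

# bind both the working area and the genome data directory into the container
$ export SINGULARITY_BINDPATH=/gamma_data1,/path/to/genome_data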

Pipeline stalls at peak-calling steps

I've been trying to run the pipeline on the test data, and it seems to stall when it gets to peak-calling every time. I've been using qlogin on our SGE server and putting in the command manually, after activating the proper conda environment.

command:
java -jar -Dconfig.file=backends/backend.conf cromwell-36.jar run chip.wdl -i ${INPUT}

I've attached the log.

workflow.ed8d0870-e114-4af9-8a75-3b85625caa50.log

Error at run_spp.R

Describe the bug
I followed the instructions at https://github.com/ENCODE-DCC/chip-seq-pipeline2 and successfully installed the pipeline. I have single-end ChIP-seq and control FASTQ files. The steps from FASTQ to tagAlign went fine, but I got an error when running run_spp.R.

OS/Platform and dependencies

  • OS or Platform:
    LSB Version: :core-4.1-amd64:core-4.1-noarch
    Distributor ID: Fedora
    Description: Fedora release 26 (Twenty Six)
    Release: 26
    Codename: TwentySix
    Linux lri-107577 4.15.6-200.fc26.x86_64 #1 SMP Mon Feb 26 18:51:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Conda version:
    conda 4.7.10

  • Caper version
    0.4.1

Attach error logs
Caper users:
Caper automatically runs the troubleshooter for failed workflows. If it doesn't, and you ran a Caper server, get the workflow_id of your failed workflow with caper list, or directly use a metadata.json file from Cromwell's output directory:

$ caper troubleshoot [WORKFLOW_ID_OR_METADATA_JSON_FILE]

The output log file can be found at 

https://drive.google.com/open?id=1iDKjGAlW1duK85MoPACkIeH9opiAKznr

In summary,
/bin/bash: line 1: 125412 Segmentation fault      (core dumped) Rscript --max-ppsize=500000 $(which run_spp.R) -rf -c=16_05T9_00S9CCF_HCT116-AHT_TotalRNAPol2_hs_i95.merged.trim_50bp.filt.no_chrM.15M.tagAlign.gz -p=2 -filtchr="chrM" -savp=16_05T9_00S9CCF_HCT116-AHT_TotalRNAPol2_hs_i95.merged.trim_50bp.filt.no_chrM.15M.cc.plot.pdf -out=16_05T9_00S9CCF_HCT116-AHT_TotalRNAPol2_hs_i95.merged.trim_50bp.filt.no_chrM.15M.cc.qc -x=-500:100

"Failed to evaluate 'chip.macs2.tas'" when running pipeline from BAM files

I'm running the pipeline on a SLURM cluster with Conda version 4.6.14 and cromwell-38. I'm not entirely sure what the error is caused by, and the error is too long to post in full. I've attached the output file containing the error, as well as the debug tarball. The error does not show up in stderr.

Error Snippet
[2019-07-23 15:52:48,29] [error] WorkflowManagerActor Workflow 669812ad-7194-4160-ab58-5bfa9b7c8520 failed (during ExecutingWorkflowState): java.lang.RuntimeException: Failed to evaluate 'chip.macs2.tas' (reason 1 of 1): Evaluating flatten([[ta_[i]], chosen_ctl_tas[i]]) failed: Failed to find index Success(WomInteger(23)) on array:

Success([[], [], [], [], [], [], [], [], [], []])

Output
slurm.out.11839572.2.txt

debug_81.tar.gz

unrecognized arguments: --mito-chr-name chrM

Describe the bug
I've installed the pipeline using Conda on Red Hat Enterprise Linux Server. I get the unrecognized-argument error at the filter step (the error from stderr is attached below).

OS/Platform and dependencies

  • OS or Platform: Red Hat Enterprise Linux Server release 7.6 (Maipo)
  • Cromwell/dxWDL version: cromwell-34.jar
  • Conda version: conda v.4.3.21.

Attach error logs

[2019-04-04 00:52:03,01] [error] WorkflowManagerActor Workflow 6a90302c-a328-4d5e-a7a8-2db0423feeb5 failed (during ExecutingWorkflowState): Job chip.filter_ctl:0:1 exited with return code 2 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
Check the content of stderr for potential additional information: ~/chip-seq-pipeline2/cromwell-executions/chip/6a90302c-a328-4d5e-a7a8-2db0423feeb5/call-filter_ctl/shard-0/execution/stderr.
 usage: ENCODE DCC filter. [-h] [--dup-marker {picard,sambamba}]
                          [--mapq-thresh MAPQ_THRESH] [--no-dup-removal]
                          [--paired-end] [--multimapping MULTIMAPPING]
                          [--nth NTH] [--out-dir OUT_DIR]
                          [--log-level {NOTSET,DEBUG,INFO,WARNING,CRITICAL,ERROR,CRITICAL}]
                          bam
ENCODE DCC filter.: error: unrecognized arguments: --mito-chr-name chrM

Thanks in advance!
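
One plausible cause (an assumption, not confirmed in this thread) is a version mismatch: the checked-out WDL passes --mito-chr-name, but the Conda environment contains an older encode_filter.py that predates that flag. On a recent checkout, rebuilding the environment looks roughly like this:

# script names assume a recent checkout of the pipeline repository
$ bash scripts/uninstall_conda_env.sh
$ bash scripts/install_conda_env.sh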

call-spp_ppr1 wall time exceeds 48h

Hi,

I'm running the pipeline on the test data on SLURM. When the call-spp_ppr1 script is submitted, the requested wall time is 72 hours. Can this be lowered somehow? I looked at the input JSON, and it doesn't look like the wall time for this particular task can be changed.
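
If your release exposes per-task resource keys in the input JSON (an assumption; check the input documentation for your version), the spp wall time could be lowered with something like:

{
    "chip.spp_time_hr": 24
}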

duplicate key in qc_report call input

The input key macs2_cap_num_peak appears twice in the qc_report call:

chip-seq-pipeline2/chip.wdl

Lines 929 to 938 in a5cfe39

call qc_report { input :
pipeline_ver = pipeline_ver,
title = title,
description = description,
genome = basename(genome_tsv),
paired_end = paired_end,
pipeline_type = pipeline_type,
peak_caller = peak_caller_,
macs2_cap_num_peak = macs2_cap_num_peak,
macs2_cap_num_peak = spp_cap_num_peak,
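
Presumably the second assignment was meant to use the matching key:

spp_cap_num_peak = spp_cap_num_peak,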

IDR-related issue on histone pipeline

I'm having a persistent issue that seems to be related to IDR. I'm running the pipeline on a SLURM cluster with Conda version 4.6.14 and cromwell-38. I am running the histone ChIP-seq pipeline, so IDR should not run at all. To try to fix this, I disabled IDR in my chip.wdl file, but I still got the error.
I attached the debug.tar.gz file below. Thanks!

Error Log:

Traceback (most recent call last):
  File "/home/amulyag/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_idr.py", line 169, in <module>
    main()
  File "/home/amulyag/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_idr.py", line 137, in main
    args.idr_thresh, args.idr_rank, args.out_dir)
  File "/home/amulyag/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_idr.py", line 93, in idr
    run_shell_cmd(cmd1)
  File "/home/amulyag/miniconda3/envs/encode-chip-seq-pipeline/bin/encode_common.py", line 252, in run_shell_cmd
    raise Exception(err_str)
Exception: PID=73703, PGID=73703, RC=127
STDERR=/bin/bash: line 1: idr: command not found
STDOUT=

debug.tar.gz
