sneuensc / mapache
mapping pipeline for ancient DNA
License: GNU General Public License v3.0
"finshied" should be "finished"
How to make changes to the code on the GitHub web interface; getting the branches wrong.
Hi, thank you for developing this pipeline.
I'm trying to run the Dry Run but I get this error every time:
TypeError in line 751 of /home/bromero/genotyping/mapache/workflow/scripts/utils.py:
stat: path should be string, bytes, os.PathLike or integer, not dict
File "/home/bromero/genotyping/mapache/workflow/Snakefile", line 74, in
File "/home/bromero/genotyping/mapache/workflow/scripts/utils.py", line 751, in set_chromosome_names
File "/home/bromero/anaconda3/envs/mapache/lib/python3.9/genericpath.py", line 30, in isfile
If I go to the ./workflow/Snakefile, line 74 says:
for genome in GENOMES:
set_chromosome_names(genome)
which may be related to the error message:
"stat: path should be string, bytes, os.PathLike or integer, not dict"
I honestly have no idea but any help would be appreciated since I'm very interested in running this!
Thank you in advance,
Bruno
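For reference, a minimal sketch of why this TypeError occurs; the config structure here is hypothetical, assuming `set_chromosome_names` is handed the whole genome mapping rather than the path string inside it:

```python
import os.path

# Hypothetical shape: the genome entry parsed from config.yaml is a nested
# mapping rather than a plain path string.
genome = {"fasta": "results/00_reference/hg19/hg19.fasta"}

try:
    os.path.isfile(genome)  # a dict, as the traceback suggests
except TypeError as e:
    print(e)  # stat: path should be string, bytes, os.PathLike or integer, not dict

# Extracting the path string first avoids the error:
print(os.path.isfile(genome["fasta"]))
```

If the config for this genome is a nested mapping, passing the inner path (or flattening the config) would be the direction to look in.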
As explained here (https://stackoverflow.com/questions/17386880/does-anaconda-create-a-separate-pythonpath-variable-for-each-new-environment), if PYTHONPATH is set, one might run into trouble trying to load libraries despite having the library installed in the mapache environment.
In my case, this was solved with unset PYTHONPATH.
Attaching error message below.
ImportError in line 1 of /work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/workflow/rules/common.smk:
Unable to import required dependencies:
numpy:IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed. We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
- The Python version is: Python3.9 from "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/bin/python3.9"
- The NumPy version is: "1.20.3"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help. Original error was: No module named 'numpy.core._multiarray_umath'
File "/work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/workflow/Snakefile", line 14, in
File "/work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/workflow/rules/common.smk", line 1, in
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/site-packages/pandas/__init__.py", line 16, in
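The equivalent of `unset PYTHONPATH` can also be done programmatically, e.g. to launch a child interpreter without the variable. A sketch (the child command is illustrative):

```python
import os
import subprocess
import sys

# Copy the environment without PYTHONPATH -- the Python analogue of
# running `unset PYTHONPATH` before launching snakemake.
env = {k: v for k, v in os.environ.items() if k != "PYTHONPATH"}

# Any command could be run with this cleaned environment; here we just
# confirm the child no longer sees the variable.
out = subprocess.run(
    [sys.executable, "-c", "import os; print('PYTHONPATH' in os.environ)"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # False
```

This keeps the fix scoped to one process instead of altering the shell session.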
Hello Sam,
I was trying to run the current version of mapache with the test data, but I am having some issues, and I am not sure what's going on. The first one had to do with the basename in the AdapterRemoval rule, but I adapted the rule to something that works for me (I just committed the change to the branch dev).
The next error seems to be linked to scripts/picard_indexing.py:
[Thu Jun 17 10:23:13 2021]
Job 6: --- PICARD CreateSequenceDictionary results/00_reference/hg19/hg19.fasta
Traceback (most recent call last):
File "/work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/.snakemake/scripts/tmpdywu9dgq.picard_indexing.py", line 11, in <module>
filename, file_extension = os.path.splitext(fasta)
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/posixpath.py", line 118, in splitext
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not dict
[Thu Jun 17 10:23:14 2021]
Error in rule genome_index_picard:
jobid: 6
output: results/00_reference/hg19/hg19.dict
log: results/logs/index/00_reference/hg19/picard_hg19.log (check log file(s) for error message)
RuleException:
CalledProcessError in line 105 of /work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/workflow/rules/Snakefile_index.smk:
Command 'set -euo pipefail; /work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/bin/python3.9 /work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/.snakemake/scripts/tmpdywu9dgq.picard_indexing.py' returned non-zero exit status 1.
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 2357, in run_wrapper
File "/work/FAC/FBM/DBC/amalaspi/americas/dcruzdav/mapache/workflow/rules/Snakefile_index.smk", line 105, in __rule_genome_index_picard
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 574, in _callback
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/concurrent/futures/thread.py", line 52, in run
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 560, in cached_or_run
File "/work/FAC/FBM/DBC/amalaspi/popgen/dcruzdav/conda/envs/mapache/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 2390, in run_wrapper
Please let me know if you need more details of my environment (mapache) to debug this issue.
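The traceback points at the same class of bug as the dry-run TypeError reported earlier: os.path.splitext receives a mapping instead of the FASTA path string. A sketch of the failure and the fix, with the structure of `fasta` assumed:

```python
import os.path

# Hypothetical: the script is handed a dict-like input object instead of
# the path string itself.
fasta = {"fasta": "results/00_reference/hg19/hg19.fasta"}

# os.path.splitext(fasta) raises:
#   TypeError: expected str, bytes or os.PathLike object, not dict
filename, file_extension = os.path.splitext(fasta["fasta"])
print(filename)        # results/00_reference/hg19/hg19
print(file_extension)  # .fasta
```

So the likely fix in picard_indexing.py is to pull the path string out of whatever object the rule passes in before calling splitext.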
Going through the wiki to try and run the new software from scratch!
I have been testing imputation but came across an error when trying to impute more chromosomes than one:
snakemake --cores 12
ERROR: In config[imputation][hg19][chromosomes], the following chromosome names are not recognized: ['x', 'range(1,23)]+["X","Y"]', 'in', 'for', '[str(x)']!
This is the line in config.yaml:
chromosomes: '[str(x) for x in range(1,23)]+["X","Y"]'
Snakemake is version 6.4.1
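The error lists the whitespace-separated tokens of that string ('[str(x)', 'for', ...), which suggests the config value is being split on whitespace rather than evaluated as Python. One workaround, assuming the pipeline accepts an explicit list, is to expand the expression up front:

```python
# Expand the expression to an explicit list of chromosome names; writing
# these out in config.yaml (e.g. as a YAML list) avoids relying on the
# pipeline to evaluate a Python expression embedded in a string.
chromosomes = [str(x) for x in range(1, 23)] + ["X", "Y"]
print(chromosomes)  # ['1', '2', ..., '22', 'X', 'Y']
```

Whether mapache evaluates such strings or expects a plain list is the thing to confirm in the wiki.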
Following the installation steps:
The command above does not work on my end. I wonder if I was meant to do Ctrl+C on all.
Among the errors output: "conda-build version : not installed"
@lucas-anchieri @cegamorim do you also have this issue?
The current wiki documentation worked for me!
The only issue is that I have no idea what I was doing.
Hence, some questions to discuss when we have a meeting (I can assign this to myself as well):
Should the installation be characterized? For instance: "running it on a SLURM cluster with rule-based conda environments and modules"; is this what I did?
Where is the test data, and how does the software know which FASTQ files to use?
What does the report include?
What is included in the acyclic graph?
What are we meant to do with the graph (is it meant to be piped somewhere and opened with an editor)?
picard.sam.markduplicates.MarkDuplicates done. Elapsed time: 20.70 minutes.
Runtime.totalMemory()=1959788544
To get help, see http://broadinstitute.github.io/picard/index.html#GettingHelp
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:3181)
at java.util.ArrayList.grow(ArrayList.java:267)
at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:241)
at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:233)
at java.util.ArrayList.add(ArrayList.java:464)
at htsjdk.samtools.SAMUtils.getAlignmentBlocks(SAMUtils.java:743)
at htsjdk.samtools.SAMRecord.getAlignmentBlocks(SAMRecord.java:1919)
at htsjdk.samtools.SAMRecord.validateCigar(SAMRecord.java:1937)
at htsjdk.samtools.BAMRecord.getCigar(BAMRecord.java:284)
at htsjdk.samtools.SAMRecord.isValid(SAMRecord.java:2233)
at htsjdk.samtools.BAMFileReader$BAMFileIterator.advance(BAMFileReader.java:848)
at htsjdk.samtools.BAMFileReader$BAMFileIterator.next(BAMFileReader.java:834)
at htsjdk.samtools.BAMFileReader$BAMFileIterator.next(BAMFileReader.java:802)
at htsjdk.samtools.SamReader$AssertingIterator.next(SamReader.java:591)
at htsjdk.samtools.SamReader$AssertingIterator.next(SamReader.java:570)
at picard.sam.markduplicates.MarkDuplicates.buildSortedReadEndLists(MarkDuplicates.java:524)
at picard.sam.markduplicates.MarkDuplicates.doWork(MarkDuplicates.java:257)
at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:308)
at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:103)
at picard.cmdline.PicardCommandLine.main(PicardCommandLine.java:113)
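"GC overhead limit exceeded" usually means the JVM heap is too small for the BAM being processed, and the common remedy is raising -Xmx. A sketch of such an invocation; the file names, the 8 GB figure, and the picard.jar location are all placeholders to adapt:

```python
import subprocess  # only needed if the command is actually launched

# Hypothetical MarkDuplicates call with a larger JVM heap; paths and the
# heap size are placeholders.
cmd = [
    "java", "-Xmx8g", "-jar", "picard.jar", "MarkDuplicates",
    "I=sample.bam",
    "O=sample.dedup.bam",
    "M=sample.metrics.txt",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run it
```

In a Snakemake rule, the same heap setting would typically be threaded through the rule's resources or the java options passed to Picard.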
Under development:
What is new in the GitHub version, but not yet in the release?
TODO: moved to Slack.