
vtam's People

Contributors

aitgon, meglecz, mrmekdad, raphaelhebert


vtam's Issues

Suitability for ONT data?

Hi,

Is VTAM, like DADA2, dependent on the sequencing technology used and thus limited to Illumina sequencing? Or can it also be used for ONT sequencing data? I can't find this in the article, although I saw it was tested on MiSeq data.

sortreads: add option to make anchored/non-anchored search when searching for tags (demultiplexing)

Demultiplexing should be anchored by default, but we should give users the possibility to make it non-anchored. The non-anchored search is slower, but it is necessary if there is a spacer before the tag.
The present version uses a non-anchored search.

The choice of anchored/non-anchored search should be independent for demultiplexing and for trimming primers.

Format of barcodes.fasta for a non-anchored search:

>marker-run-sample-replicate
tcgatcacgatgt;min_overlap=length-of-the-tag...gctgtagatcgaca;min_overlap=length-of-the-tag
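
For comparison, the anchored format would presumably be the same as in the parallel-demultiplexing issue further below:

>marker-run-sample-replicate
^tcgatcacgatgt...gctgtagatcgaca$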

Problem with vtam-0.1.21

After upgrading to vtam-0.1.21 with

python3 -m pip install --upgrade vtam

an error message appears after issuing any vtam command.

Error taxonomy

(vtam) emese@pcf-meglecz:~/vtam_demo/vtam_db$ vtam taxonomy --output taxonomy.tsv

new_taxdump.tar.gz: 119824384it [00:42, 2834732.87it/s]
Traceback (most recent call last):
  File "/home/emese/miniconda3/envs/vtam/bin/vtam", line 8, in <module>
    sys.exit(main())
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/site-packages/vtam/__init__.py", line 273, in main
    VTAM(sys.argv[1:])
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/site-packages/vtam/__init__.py", line 244, in __init__
    taxonomy.main(precomputed=precomputed)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/site-packages/vtam/CommandTaxonomy.py", line 188, in main
    self.create_denovo_from_ncbi()
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/site-packages/vtam/CommandTaxonomy.py", line 81, in create_denovo_from_ncbi
    tar.extractall(path=self.tempdir)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/tarfile.py", line 2000, in extractall
    numeric_owner=numeric_owner)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/tarfile.py", line 2042, in extract
    numeric_owner=numeric_owner)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/tarfile.py", line 2112, in _extract_member
    self.makefile(tarinfo, targetpath)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/tarfile.py", line 2161, in makefile
    copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/tarfile.py", line 247, in copyfileobj
    buf = src.read(bufsize)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/gzip.py", line 287, in read
    return self._buffer.read(size)
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "/home/emese/miniconda3/envs/vtam/lib/python3.7/gzip.py", line 482, in read
    uncompress = self._decompressor.decompress(buf, size)
zlib.error: Error -3 while decompressing data: invalid block type

coi_blast_db error

There is a problem downloading the BLAST database with the coi_blast_db command.
Curiously, the database is downloaded correctly by the example command.

mkdir ~/vtam_benchmark/vtam_db
cd  ~/vtam_benchmark/vtam_db
conda activate vtam

vtam coi_blast_db --blastdbdir coi_db

(vtam) 11:32 meglecz@bombyx ~/vtam_benchmark/vtam_db % vtam coi_blast_db --blastdbdir coi_db
Traceback (most recent call last):
  File "/home/meglecz/miniconda3/envs/vtam_2/bin/vtam", line 8, in <module>
    sys.exit(main())
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/vtam/__init__.py", line 308, in main
    VTAM(sys.argv[1:])
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/vtam/__init__.py", line 276, in __init__
    coi_blast_db.download(blastdbdir=blastdbdir)
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/vtam/CommandBlastCOI.py", line 82, in download
    if not [*os.walk(blastdbdir)][0][2] >= blast_files or pathlib.Path(os.path.join(blastdbdir, "%s.nsq"%self.blastdbname)).stat().st_size < 4000000:
IndexError: list index out of range
zsh: exit 1     vtam coi_blast_db --blastdbdir coi_db

Error make_known_occurrences

The following command returns an error:

vtam make_known_occurrences  --asvtable asvtable_default_mfzr.tsv --sample_types sample_types_fish.tsv --mock_composition mock_composition_fish_mfzr.tsv 


(vtam_2) 10:55 meglecz@bombyx ~/vtam_benchmark/vtam_fish/test % vtam make_known_occurrences  --asvtable asvtable_default_mfzr.tsv --sample_types sample_types_fish.tsv --mock_composition mock_composition_fish_mfzr.tsv
Traceback (most recent call last):
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3361, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'action'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/meglecz/miniconda3/envs/vtam_2/bin/vtam", line 8, in <module>
    sys.exit(main())
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/vtam/__init__.py", line 308, in main
    VTAM(sys.argv[1:])
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/vtam/__init__.py", line 291, in __init__
    CommandMakeKnownOccurrences.main(asvTable=asvTable,sampleTypes=sampleTypes,mockComposition=mockComposition,habitat_proportion=habitat_proportion,known_occurrences=known_occurrences,missing_occurrences=missing_occurrences)
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/vtam/CommandMakeKnownOccurrences.py", line 21, in main
    occurrences = mock[mock['action']!='tolerate'].replace(np.nan, '', regex=True).astype(str)
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/pandas/core/frame.py", line 3458, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/home/meglecz/miniconda3/envs/vtam_2/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
    raise KeyError(key) from err
KeyError: 'action'
zsh: exit 1     vtam make_known_occurrences --asvtable asvtable_default_mfzr.tsv    

The input files are attached (the tsv extension was changed to csv, since tsv files are not accepted as attachments):
asvtable_default_mfzr.csv
mock_composition_fish_mfzr.csv
sample_types_fish.csv

Rename known_occurrences option in filter

The known_occurrences option of filter is used to add columns to the ASV table showing the expected variants in mocks.

It would be better to rename the known_occurrences option to mock_composition, to avoid confusion between a complete known_occurrences.tsv file (with both delete and keep occurrences) and this one, which describes only keep occurrences.

This request is linked to the modification asked for in the issue "Make automatically a complete known_occurrences.tsv in filter".

compressed input/output for 'merge'

Modify merge to accept zipped input files (by default) and produce zipped output.
Add an option to use unzipped I/O.

  • vsearch automatically detects whether its input is compressed; it is enough to put .gz filenames in fastqinfo.
  • Zip the output fasta files: it seems that vsearch reads gz files but produces only unzipped output.
  • pigz could be interesting for multi-threaded zipping, since the files can be very large (several GB for NovaSeq). See the sketch after this list.
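
A minimal sketch of the proposed flow, assuming illustrative file names: vsearch merges the compressed input transparently, then pigz compresses the plain FASTA that vsearch produces, using all cores by default.

import shlex
import subprocess

merged_fasta = "run1-repl1-merged.fasta"  # illustrative name

# vsearch reads .gz input transparently but writes plain FASTA output
subprocess.run(
    shlex.split(
        "vsearch --fastq_mergepairs fw.fastq.gz --reverse rv.fastq.gz "
        f"--fastaout {merged_fasta}"
    ),
    check=True,
)

# pigz compresses merged_fasta to merged_fasta.gz in parallel
subprocess.run(["pigz", merged_fasta], check=True)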

Refine taxassign algo

At present, all sequences in the reference database are used if they are among the best hits, irrespective of the taxonomic resolution of their taxon: some are assigned at species level, others at a higher level.
This can reduce the taxonomic resolution. For example, with two hits at 97% identity, where one reference sequence is identified to species level but the other only to family level, the variant will be assigned at family level.

I suggest that users should be able to set the minimum resolution of the reference sequences for each %identity. It could be something like this:
100% species
97% genus
95% family
90% order
85% class
80% phylum

I have already made a taxonomy file with an additional column that contains a resolution index:
8: species
7: genus
6: family
5: order
4: class
3: phylum
2: kingdom
1: superkingdom
For other levels the index is a non-integer, e.g. 7.5 for subgenus.
This greatly simplifies the selection of the reference sequences (see the sketch below).
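
A minimal sketch of this hit selection, assuming illustrative column names ('pident', 'resolution_index') rather than VTAM's actual schema:

import pandas as pd

# (minimum %identity, required resolution index), highest threshold first
MIN_RESOLUTION = [
    (100, 8),  # species
    (97, 7),   # genus
    (95, 6),   # family
    (90, 5),   # order
    (85, 4),   # class
    (80, 3),   # phylum
]

def required_resolution(pident: float) -> float:
    """Return the minimum resolution index required at this %identity."""
    for threshold, resolution in MIN_RESOLUTION:
        if pident >= threshold:
            return resolution
    return 0  # below 80% identity: no minimum required

def filter_hits(hits: pd.DataFrame) -> pd.DataFrame:
    """Keep only best hits whose reference taxon is resolved well enough."""
    keep = hits.apply(
        lambda row: row["resolution_index"] >= required_resolution(row["pident"]),
        axis=1,
    )
    return hits[keep]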

taxassign qcov_hsp_perc

When trying to modify the qcov_hsp_perc parameter using the --params option of taxassign, vtam does not seem to take the modified value into account.

When taxassign is working, I get exactly the same output with the default value of 80 percent and with the 40 I am trying to use.
When taxassign fails, I can see from the error message that it uses 80, even though I used the params file to change it to 40:

"Bio.Application.ApplicationError: Non-zero return code 2 from 'blastn -out /tmp/tmpu_83znlc/RunnerBlast.py/blast_output.tsv -outfmt "6 qseqid sacc pident evalue qcovhsp staxids" -query /tmp/tmpu_83znlc/RunnerTaxAssign.py/variant.fasta -db nt -evalue 1e-05 -qcov_hsp_perc 80 -num_threads 8 -dust yes'"

Option for sortreads to check both strands

At the moment sortreads checks both strands of the input sequences. By default, it should check only the forward strand: this strongly reduces run time and corresponds to most datasets.

Add an option to check both strands.

sortreads: add option to make anchored/non-anchored search when trimming primers

Trimming primers should be anchored by default, but we should give users the possibility to make it non-anchored. The non-anchored search is slower, but it is necessary if there is a spacer between the tag and the primer.
The present version uses a non-anchored search.

The choice of anchored/non-anchored search should be independent for demultiplexing and for trimming primers.

By default, use an anchored search:

cutadapt --cores=0 -e 0.1 --no-indels --trimmed-only --minimum-length 50 --maximum-length 500 --front "^GGNTGAACNGTNTAYCCNCC...TGRTTYTTYGGNCAYCCHGAAGTWTA$" --output marker-run-sample-replicate.fasta.gz tagtrimmed.marker-run-sample-replicate.fasta.gz

rm marker-run-sample-replicate.fasta.gz

Non-anchored version:

cutadapt --cores=0 -e 0.1 --no-indels --trimmed-only --minimum-length 50 --maximum-length 500 --front "GGNTGAACNGTNTAYCCNCC;min_overlap=length-of-the-primer...TGRTTYTTYGGNCAYCCHGAAGTWTA;min_overlap=length-of-the-primer" --output marker-run-sample-replicate.fasta.gz tagtrimmed.marker-run-sample-replicate.fasta.gz

rm marker-run-sample-replicate.fasta.gz

Duplicated variant sequences

In the SQLite database there are occasionally sequences in upper case and in lower case. Some sequences are identical (apart from the lower/upper case), and the lower-case sequences do not have read counts.

I guess that this comes from using taxassign with sequences that are not yet in the SQLite db: all sequences are added to the db (in lower-case letters), even if they are identical to a variant already in the database (upper-case letters). In this way, the same sequence can have different variant IDs.
I would prefer to eliminate this redundancy and systematically use the same ID for identical sequences (see the sketch below).
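
A minimal sketch of the proposed fix, assuming a hypothetical Variant table with id and sequence columns (not VTAM's actual schema): normalize the case before looking a sequence up, so identical sequences always map to the same variant ID.

import sqlite3

def get_or_create_variant_id(conn: sqlite3.Connection, sequence: str) -> int:
    seq = sequence.upper()  # canonical form: upper case
    row = conn.execute(
        "SELECT id FROM Variant WHERE sequence = ?", (seq,)
    ).fetchone()
    if row is not None:
        return row[0]  # reuse the existing variant ID
    cur = conn.execute("INSERT INTO Variant (sequence) VALUES (?)", (seq,))
    return cur.lastrowid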

sqlalchemy.exc.ArgumentError with latest sqlalchemy version (v2.0.4)

Hello there,

FWIW I ran into the following problem when I was running the snake example.

$ vtam filter --db asper1/db.sqlite --sortedinfo asper1/run1_mfzr/sorted/sortedinfo.tsv --sorteddir $(dirname asper1/run1_mfzr/sorted/sortedinfo.tsv) --asvtable asper1/run1_mfzr/asvtable_default.tsv -v --log asper1/vtam.log
Traceback (most recent call last):
  File "/home/kc/miniconda3/envs/vtam/bin/vtam", line 8, in <module>
    sys.exit(main())
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/vtam/__init__.py", line 308, in main
    VTAM(sys.argv[1:])
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/vtam/__init__.py", line 158, in __init__
    CommandFilterOptimize.main(arg_parser_dic=arg_parser_dic)
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/vtam/CommandFilterOptimize.py", line 37, in main
    sqlalchemy.select([filter_lfn_reference.c.filter_id]).where(
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/_selectable_constructors.py", line 493, in select
    return Select(*entities)
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 5160, in __init__
    self._raw_columns = [
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 5161, in <listcomp>
    coercions.expect(
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/coercions.py", line 413, in expect
    resolved = impl._literal_coercion(
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/coercions.py", line 652, in _literal_coercion
    self._raise_for_expected(element, argname)
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/coercions.py", line 1143, in _raise_for_expected
    return super()._raise_for_expected(
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/coercions.py", line 711, in _raise_for_expected
    super()._raise_for_expected(
  File "/home/kc/miniconda3/envs/vtam/lib/python3.9/site-packages/sqlalchemy/sql/coercions.py", line 536, in _raise_for_expected
    raise exc.ArgumentError(msg, code=code) from err
sqlalchemy.exc.ArgumentError: Column expression, FROM clause, or other columns clause element expected, got [Column('filter_id', Integer(), table=<FilterLFNreference>, primary_key=True, nullable=False)]. Did you mean to say select(Column('filter_id', Integer(), table=<FilterLFNreference>, primary_key=True, nullable=False))?

I switched to a previous version of sqlalchemy (I did not try to find out the latest working version; I assume it is related to the new major 2.x.x release) and it solved the issue.

conda install sqlalchemy=1.4.39

I just thought this could help other people. Anyway, thanks for your work on this.

Make automatically a complete known_occurrences.tsv in filter

When using several different mocks and negatives, plus samples from different habitats, the preparation of known_occurrences.tsv is tedious.

It would be nice to prepare a known_occurrences.tsv file automatically when running filter. This file could be revised manually by the user afterwards, but it would serve as a solid base.

Plan:
Run filter with two options:
--mock_composition: requires a tsv file in the format of known_occurrences.tsv, but listing only keep occurrences, for all mocks
--sample_types: requires a tsv file as in the example below; all sample-run-marker combinations should be listed

Marker Run Sample Sample_type Habitat
MFZR run1 tpos1_run1 mock terrestrial
MFZR run1 tnegtag_run1 negative NA
MFZR run1 14ben01 real freshwater
MFZR run1 14ben02 real freshwater

Based on these files, prepare a known_occurrences.tsv with keep and delete occurrences as follows:

Keep occurrences:
Copy of mock_composition.tsv

Delete occurrences:

  • For all mock samples (from sample_types.tsv) of the run-marker, list all unexpected occurrences after filter
  • For all negative samples (from sample_types.tsv) of the run-marker, list all occurrences after filter
  • Habitat (see the sketch after this list):
      - N_ih: total number of reads of variant i in habitat h in the run-marker; ignore NA
      - N_i': sum of all N_ih in the run-marker; ignoring NA means N_i' = N_i minus the number of reads in negative controls
      - If N_ih/N_i' < 0.5, h is a wrong habitat for variant i (the variant is a contaminant in habitat h)
      - List all occurrences in real samples where variant i is in the wrong habitat (skip mocks, since those occurrences are already listed)
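
A minimal sketch of the habitat rule, assuming a long-format table with one row per variant/sample occurrence and illustrative column names ('variant', 'habitat', 'read_count'); rows with habitat NA (negative controls) are dropped, so N_i' excludes reads in negative controls.

import pandas as pd

def wrong_habitat_occurrences(occ: pd.DataFrame) -> pd.DataFrame:
    occ = occ.dropna(subset=["habitat"])
    n_ih = occ.groupby(["variant", "habitat"])["read_count"].sum()  # N_ih
    n_i = occ.groupby("variant")["read_count"].sum()  # N_i'
    ratio = n_ih.div(n_i, level="variant")  # N_ih / N_i'
    wrong = ratio[ratio < 0.5].reset_index()[["variant", "habitat"]]
    # occurrences of a variant in a habitat where it is deemed a contaminant
    return occ.merge(wrong, on=["variant", "habitat"])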

compressed input for 'filter'

Modify filter to accept compressed fasta input files (by default).

Add an option to use unzipped I/O (a sketch of transparent input detection follows).

Also, when running filter a second time on the same dataset, filling the database should not start over.
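
A minimal sketch of accepting both compressed and plain fasta input, assuming detection by the two gzip magic bytes rather than by the file extension:

import gzip

def open_fasta(path: str):
    """Open a fasta file as text, transparently handling gzip input."""
    with open(path, "rb") as handle:
        magic = handle.read(2)
    if magic == b"\x1f\x8b":  # gzip magic number
        return gzip.open(path, "rt")
    return open(path, "rt")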

Do not use 70% and 75% identity for taxassign

By examining taxassign outcomes, we have concluded that assignments made at 70% and 75% identity are meaningless. Do not compute the LTG if it cannot be done with at least 80% identity.

Getting ValueError when running taxassign with NCBI db

Hi,

I got an error message when running the vtam taxassign command with a freshly downloaded NCBI db; I am working with BLAST v2.13.0. Do you know what the issue is here? Thank you!

(vtam3) genpop@Genpop:~/anaconda3/envs/vtam3/12S_SG_2021$ vtam taxassign --db run1/db.sqlite --asvtable run1/asvtable_default.tsv --output run1/asvtable_default_taxa.tsv --taxonomy taxonomy.tsv --blastdbdir /media/genpop/My_Book/Marion/NCBI_db2 --blastdbname nt -v --log run1/vtam.log

Traceback (most recent call last):
  File "/home/genpop/anaconda3/envs/vtam3/bin/vtam", line 8, in <module>
    sys.exit(main())
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/vtam/__init__.py", line 308, in main
    VTAM(sys.argv[1:])
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/vtam/__init__.py", line 237, in __init__
    blastdbname_str=blastdbname_str, params=params, num_threads=num_threads)
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/vtam/CommandTaxAssign.py", line 180, in main
    params = None)
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/vtam/utils/RunnerTaxAssign.py", line 73, in __init__
    blast_output_df = RunnerBlast.process_blast_result(blast_output_tsv)
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/vtam/utils/RunnerBlast.py", line 96, in process_blast_result
    expand=True)
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/pandas/core/generic.py", line 5516, in __setattr__
    self[name] = value
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/pandas/core/frame.py", line 3602, in __setitem__
    self._set_item_frame_value(key, value)
  File "/home/genpop/anaconda3/envs/vtam3/lib/python3.7/site-packages/pandas/core/frame.py", line 3729, in _set_item_frame_value
    raise ValueError("Columns must be same length as key")
ValueError: Columns must be same length as key

Memory problem in pool

I have analyzed 12 runs for 1 marker; the db.sqlite is 5.3 GB.
When running pool for 6 runs, there is a memory error (see the attached file).

Is it possible to reduce the memory requirements?

Out_of_memory.txt

Random select merged reads

In NovaSeq runs there are often far too many reads. According to my tests, 5-10 million reads for a run-replicate (ca. 96 samples) is enough: above this number of reads, the number of variants and the average number of variants and reads per sample no longer increase after the vtam filtering.
On the other hand, too many reads increase run time and can cause memory issues.
It would be nice to have either a separate command (after merge) or an option in merge to randomly select a user-defined number of reads from each output file of merge (see the sketch below). These reads would then be the input of sortreads.
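
A minimal sketch of such a selection, assuming single-pass reservoir sampling over a merged fasta.gz so the whole file never has to fit in memory (vsearch's --fastx_subsample may also cover this, if acceptable as a dependency):

import gzip
import random

def fasta_records(handle):
    """Yield fasta records as raw text blocks (header plus sequence lines)."""
    record = []
    for line in handle:
        if line.startswith(">") and record:
            yield "".join(record)
            record = []
        record.append(line)
    if record:
        yield "".join(record)

def subsample_fasta(path: str, sample_size: int, seed: int = 1) -> list:
    """Reservoir-sample sample_size records from a gzipped fasta file."""
    rng = random.Random(seed)
    reservoir = []
    with gzip.open(path, "rt") as handle:
        for i, rec in enumerate(fasta_records(handle)):
            if i < sample_size:
                reservoir.append(rec)
            else:
                j = rng.randrange(i + 1)
                if j < sample_size:
                    reservoir[j] = rec
    return reservoir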

vtam incompatible with cutadapt==3.0

It looks like vtam, and more precisely sortreads, is incompatible with the latest cutadapt==3.0 version.
Until this is fixed, it must be ensured that cutadapt 2.10 is used (e.g. by pinning cutadapt==2.10).

Modify optimize_lfn_variant_replicate_specific.tsv and optimize_lfn_variant_specific.tsv

At the moment, optimize_lfn_variant_replicate_specific.tsv and optimize_lfn_variant_specific.tsv contain all variants that have a 'delete' occurrence in known_occurrences.tsv.

  • Do not print out lines if lfn_variant_replicate_cutoff > max_lfn_variant_replicate_cutoff (new variable; 0.05 by default)
  • Do not print out lines if N_i (or N_ik) < min_variant_read_count (new variable; 1000 by default)
  • If possible, flag lines that are eliminated using the best combination of lfn_read_count_cutoff and lfn_variant_cutoff (lfn_variant_replicate_cutoff) suggested by optimize_lfn_read_count_and_lfn_variant.tsv (or optimize_lfn_read_count_and_lfn_variant_replicate.tsv)

A sketch of the first two rules follows this list.
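
A minimal sketch of the first two rules, assuming illustrative column names for the optimize table and the proposed defaults from above:

import pandas as pd

MAX_LFN_VARIANT_REPLICATE_CUTOFF = 0.05  # proposed default
MIN_VARIANT_READ_COUNT = 1000            # proposed default

def filter_optimize_table(df: pd.DataFrame) -> pd.DataFrame:
    """Drop lines with an implausibly high cutoff or a negligible read count."""
    keep = (
        (df["lfn_variant_replicate_cutoff"] <= MAX_LFN_VARIANT_REPLICATE_CUTOFF)
        & (df["N_ik"] >= MIN_VARIANT_READ_COUNT)
    )
    return df[keep]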

Intermittent error when using taxassign with ncbi_nt

I have seen the issue [https://github.com//issues/9], and taxassign now works with a small test file (test1.txt, attached) both using ncbi-nt and a custom database (blastn 2.9.0+).

However, when analysing a large file (large_test.txt), it gets through with the custom database but not with nt. The log is in the attached nohup.txt file. The NCBI nt was downloaded on 2021-04-26, and the taxonomy file was created by 'vtam taxonomy' on the same day.

This issue resembles the one in mantis: [https://sourcesup.renater.fr/plugins/mantis/view_mantis.php?group_id=4876&pluginname=mantis]

nohup.txt
test1.txt
large_test.txt

sortreads: improve demultiplexing by searching for all tag combination in parallel

Use true demultiplexing instead of going through the same file separately for each tag combination. In cutadapt v3, this can be done on multiple threads.

One cutadapt command for each input fasta file:

cutadapt --cores=0 -e 0 --no-indels --trimmed-only -g file:barcodes.fasta -o "tagtrimmed.{name}.fasta.gz" merged_file.fasta.gz

The barcodes.fasta file has the following format by default (anchored search):

>marker-run-sample-replicate
^tcgatcacgatgt...gctgtagatcgaca$

Add pigz to the conda environment and the Singularity recipe file so the output files can be zipped in multi-threaded mode. Otherwise there will be a bottleneck at zipping the output files.

Problem with taxassign and the ncbi nt database

vtam taxassign runs fine with a custom database, but when using a freshly downloaded ncbi-nt (2021-04-26), I get an error message (see below).

This is true when running vtam both in an updated conda environment and from Singularity.

blast versions are the following:
Singularity> blastn -version
blastn: 2.5.0+
Package: blast 2.5.0, build Sep 20 2018 01:34:18

(vtam) emese@pcf-meglecz:~/singularity$ blastn -version
blastn: 2.2.31+
Package: blast 2.2.31, build Jan 7 2016 23:17:17

Error msg:

Traceback (most recent call last):
  File "/opt/conda/envs/vtam/bin/vtam", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/envs/vtam/lib/python3.9/site-packages/vtam/__init__.py", line 273, in main
    VTAM(sys.argv[1:])
  File "/opt/conda/envs/vtam/lib/python3.9/site-packages/vtam/__init__.py", line 215, in __init__
    CommandTaxAssign.main(db=db, mode=mode, asvtable_tsv=asvtable_tsv, output=output,
  File "/opt/conda/envs/vtam/lib/python3.9/site-packages/vtam/CommandTaxAssign.py", line 174, in main
    tax_assign_runner = RunnerTaxAssign(
  File "/opt/conda/envs/vtam/lib/python3.9/site-packages/vtam/utils/RunnerTaxAssign.py", line 71, in __init__
    blast_output_tsv = runner_blast.run_local_blast()
  File "/opt/conda/envs/vtam/lib/python3.9/site-packages/vtam/utils/RunnerBlast.py", line 68, in run_local_blast
    stdout, stderr = blastn_cline()
  File "/opt/conda/envs/vtam/lib/python3.9/site-packages/Bio/Application/__init__.py", line 569, in __call__
    raise ApplicationError(return_code, str(self), stdout_str, stderr_str)
Bio.Application.ApplicationError: Non-zero return code 2 from 'blastn -out /tmp/tmps_xrub66/RunnerBlast.py/blast_output.tsv -outfmt "6 qseqid sacc pident evalue qcovhsp staxids" -query /tmp/tmps_xrub66/RunnerTaxAssign.py/variant.fasta -db nt -evalue 1e-05 -qcov_hsp_perc 80 -num_threads 8 -dust yes', message 'BLAST Database error: Error: Not a valid version 4 database.'
