Comments (4)

Shians commented on June 20, 2024

Thanks for the suggestion. Unfortunately, I think for my personal workflow it would be too messy to have to selectively process some pod5s like this; my current process creates folders like this:

data
└── pod5_links
    ├── block_1  # full of symlinks to pod5s
    ├── block_2  # full of symlinks to pod5s
    └── ...

That way I can just use each block_# folder as the argument to a dorado call. It would add too much complexity for my liking.
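For illustration, the batching looks roughly like this (untested sketch; the paths and the 100-links-per-block figure are placeholders):

BLOCK=1
COUNT=0
mkdir -p "data/pod5_links/block_${BLOCK}"
for F in data/pod5/*.pod5; do
  # link each pod5 into the current block folder (absolute target so links stay valid)
  ln -s "$(readlink -f "$F")" "data/pod5_links/block_${BLOCK}/"
  COUNT=$((COUNT + 1))
  if [ "$COUNT" -ge 100 ]; then
    # start a new block once this one is full
    COUNT=0
    BLOCK=$((BLOCK + 1))
    mkdir -p "data/pod5_links/block_${BLOCK}"
  fi
done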

HalfPhoton commented on June 20, 2024

Hi @Shians, yes absolutely.

If you only need approximately similar sizes (rather than exactly N reads per batch), you can use a more performant workflow.
Depending on your pipeline (assuming it involves basecalling), I'd recommend batches of more than 4_000 records, as dorado has a non-trivial setup time: it needs to load the model, reference, etc.

Tip

If you're just basecalling these records with dorado, then instead of subsetting files and cloning the data (which is very IO-intensive and can be slow), use the -l, --read-ids argument ("A file with a newline-delimited list of reads to basecall"). It searches for the read ids in the whole dataset and distributes the jobs by indexing ids, instead of requiring completely separate inputs.
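For example, a single batch invocation might look like this (the model and paths are placeholders):

# basecall only the reads listed in batch_1_ids.txt, reading from the full dataset
dorado basecaller <model> data/pod5s/ --read-ids batch_1_ids.txt > batch_1.bam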

Approximate sizes suggestion

This will be much quicker, as merging is simple and requires no searching for specific records.
Please adapt it to your specific needs. This is an untested example, but it should be sufficient to show the approach.

pod5 view data/ --include "filename" -H | sort | uniq -c > records_per_file.txt

head records_per_file.txt
100 file1.pod5
1000 file2.pod5
1234 file3.pod5
...

# Writes the filenames for each batch to data/output_X.txt
awk -v N=<VALUE> 'BEGIN { file_count = 0 }
{
    sum += $1
    printf "%s\n", $2 > ("data/output_" file_count ".txt")
    if (sum >= N) {
        sum = 0
        file_count++
    }
}' records_per_file.txt

for OUT in $(find . -iname "output_*.txt"); do
  NEW_POD5="${OUT%.txt}.pod5"
  pod5 merge $(cat "$OUT") -o "$NEW_POD5"
done
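As a quick sanity check (untested), you can count the records in each merged file to confirm the batches came out roughly the size you asked for:

# print the read count of each merged batch
for P in data/output_*.pod5; do
  echo "$P: $(pod5 view "$P" -IH | wc -l) reads"
done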

Exact subsetting into N equally sized batches

# get all read ids quickly (-I: ids only, -H: no header)
pod5 view data/ -IH > all_read_ids.txt
# split the ids into equal parts by line count (BATCHES = desired number of batches)
split -n l/${BATCHES} -a 4 -d --additional-suffix .txt all_read_ids.txt batch.
echo "read_id,dest" > mapping.csv
for BATCH in $(find . -iname "batch.*txt"); do
  NEW_POD5="${BATCH%.txt}.pod5"
  awk -v dest="$NEW_POD5" '{print $1 "," dest}' "$BATCH" >> mapping.csv
done

pod5 subset data/ --table mapping.csv --columns dest
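Before running the subset, it's worth checking (untested sketch) that the mapping covers every read exactly once:

# the mapping (minus its header) should have one line per read id ...
tail -n +2 mapping.csv | wc -l
wc -l < all_read_ids.txt
# ... and no read id should appear twice (this should print nothing)
tail -n +2 mapping.csv | cut -d, -f1 | sort | uniq -d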

Shians commented on June 20, 2024

Thanks for the fast reply! My current workflow batches jobs up by folders: I take all the existing pod5 files and create symbolic links into folders, each containing an equal number of pod5 files, then run dorado on the folders of links so I don't duplicate the data.

Your solutions are helpful, and I think I will adapt them into a strategy where I identify all files >1GB and break only those up into ~1GB pod5s, and perhaps also aggregate files that are too small (<100MB) with merge.
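A minimal sketch of that triage (untested; assumes the pod5s live under data/pod5/):

# files over 1GB are candidates for subsetting into ~1GB pieces
find data/pod5 -name "*.pod5" -size +1G > too_large.txt
# files under 100MB are candidates for aggregation
find data/pod5 -name "*.pod5" -size -100M > too_small.txt
pod5 merge $(cat too_small.txt) -o data/merged_small.pod5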

HalfPhoton commented on June 20, 2024

@Shians,
Please try using the -l, --read-ids dorado argument.
You can pass a symbolic link to the same pod5 file but instruct dorado to basecall only the first half of read ids and then have another worker basecall the other half. This way you don't need to duplicate any input data.

Something like this, but please make sure you're not missing or duplicating read_ids when splitting the list:

# get all read_ids from the pod5 quickly (ids only, no header)
pod5 view my.pod5 -IH > all_read_ids.txt
# count the total number of read_ids
NUM_READS=$(wc -l < all_read_ids.txt)
# split into two parts, making sure no id is lost or duplicated
head -n $((NUM_READS / 2)) all_read_ids.txt > first_half.txt
tail -n +$((NUM_READS / 2 + 1)) all_read_ids.txt > second_half.txt

# first worker
dorado basecaller <model> my.pod5 --read-ids first_half.txt ..... > first_half.bam
# second worker
dorado basecaller <model> my.pod5 --read-ids second_half.txt ..... > second_half.bam
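The same idea extends to N workers by reusing the split approach from earlier in the thread (sketch; in practice each dorado call would be submitted as its own job rather than run in a loop):

# split the id list into 4 newline-delimited parts
split -n l/4 -d --additional-suffix .txt all_read_ids.txt part.
for IDS in part.*.txt; do
  dorado basecaller <model> my.pod5 --read-ids "$IDS" > "${IDS%.txt}.bam"
done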
