Comments (21)

mourisl commented on August 20, 2024

Thank you for the updates. Indeed, this is too slow; speeding this up is one of the next goals. How long does it take to process one batch (16 chunks)? What are the inferred "Estimated block size" and "dcv" values?

from centrifuge.

mourisl avatar mourisl commented on August 20, 2024

How much memory do you have on your machine? I guess the job got killed due to an out-of-memory issue.

igordot commented on August 20, 2024

I used 150G. What would be a reasonable amount?

It did not give any kind of memory error.

mourisl commented on August 20, 2024

150G is not enough for the nt database. I don't remember the total size of the current nt, but I think you may need a machine with more than 1TB of memory to build the nt index.

igordot commented on August 20, 2024

nt.fna is 1.5TB. Should I provide more memory than that?

mourisl commented on August 20, 2024

You may need about 3TB of memory for that, then.

igordot commented on August 20, 2024

If I don't have that much, are there any alternatives?

mourisl commented on August 20, 2024

How much memory do you have?

igordot commented on August 20, 2024

Theoretically 1.5TB, but it's a shared environment and unclear how long I would need to wait to actually get that.

mourisl commented on August 20, 2024

If you have the files ready, you may try centrifuger (https://github.com/mourisl/centrifuger). Its "centrifuger-build" command has a "--build-mem" option. You could try something like "--build-mem 1200G" or slightly more, which makes it pick parameters so that the memory usage stays roughly within the given budget. If you want to try centrifuger, please use "git clone" to get the package: I recently sped up index building, and the updated code will only be in the next formal release. You can also use "-t 16" for parallelization.
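An invocation along those lines might look like the sketch below. Only "--build-mem" and "-t" come from this thread; the remaining flag names and file arguments are assumptions modeled on centrifuge-build's options and should be checked against the tool's help output:

```shell
# Hypothetical invocation; only --build-mem and -t are taken from the thread.
# Verify the other flag names with `centrifuger-build --help`.
./centrifuger-build -r nt.fna \
    --conversion-table seqid_to_taxid.map \
    --taxonomy-tree nodes.dmp \
    --name-table names.dmp \
    --build-mem 1200G \
    -t 16 \
    -o nt
```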

igordot commented on August 20, 2024

Is the Centrifuger index compatible with Centrifuge?

mourisl commented on August 20, 2024

No, the underlying data structure is quite different, so the index is not compatible.

igordot commented on August 20, 2024

It was not able to finish in 5 days with 500G. Does that seem reasonable?

mourisl commented on August 20, 2024

500G might not be enough in the end. For 1.5 TB of sequence, storing the raw sequences takes about 400G of space, and representing the BWT can take another 400G, which is already well over the memory allocation. I suspect much of the time is being spent on memory page swapping. I would still recommend allocating as much memory as possible.
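As a sanity check on those numbers: a 2-bit-per-base packing of 1.5 TB of sequence works out to roughly the figure quoted above. The 2-bit packing and the comparably sized BWT are assumptions taken from this comment, not exact Centrifuger internals:

```python
# Back-of-the-envelope memory estimate for indexing ~1.5 TB of sequence.
# Assumptions (from the discussion, not exact Centrifuger internals):
#   - raw sequence packed at 2 bits per base
#   - BWT representation roughly the same size as the packed sequence

bases = 1.5e12                      # ~1.5 TB of nt, one byte per base on disk
packed_gb = bases * 2 / 8 / 1e9     # 2-bit packing -> bytes -> GB
bwt_gb = packed_gb                  # BWT assumed comparable in size
total_gb = packed_gb + bwt_gb

print(round(packed_gb))  # 375, consistent with "about 400G" above
print(round(total_gb))   # 750, before any working buffers
```

This is why a 500G allocation is marginal even before build-time working buffers are counted.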

igordot commented on August 20, 2024

I tried Centrifuger 1.0.2 with 1400G of memory and 16 threads. In 15 days, it extracted 368/86745 chunks, so it is still far from complete. Probably a lot more memory is needed for this to be feasible in a reasonable amount of time.

igordot commented on August 20, 2024

This is what the last batch looks like:

[Tue May  7 20:21:06 2024] Postprocess 16 chunks.
[Tue May  7 20:26:58 2024] Extract 16 chunks. (352/86745 chunks finished)                                                                  
[Tue May  7 20:26:58 2024] Wait for the chunk extraction to finish.
[Tue May  7 22:39:48 2024] Submit 16 chunks.
[Tue May  7 22:39:48 2024] Chunk 0 elements: 16800433
[Tue May  7 22:39:48 2024] Chunk 1 elements: 16819322
[Tue May  7 22:39:48 2024] Chunk 2 elements: 16771855
[Tue May  7 22:39:48 2024] Chunk 3 elements: 16793011
[Tue May  7 22:39:48 2024] Chunk 4 elements: 16777181
[Tue May  7 22:39:48 2024] Chunk 5 elements: 16728464
[Tue May  7 22:39:48 2024] Chunk 6 elements: 16810964
[Tue May  7 22:39:48 2024] Chunk 7 elements: 16769117
[Tue May  7 22:39:48 2024] Chunk 8 elements: 16782750
[Tue May  7 22:39:48 2024] Chunk 9 elements: 16755439
[Tue May  7 22:39:48 2024] Chunk 10 elements: 16778579
[Tue May  7 22:39:48 2024] Chunk 11 elements: 16777549
[Tue May  7 22:39:48 2024] Chunk 12 elements: 16811242
[Tue May  7 22:39:48 2024] Chunk 13 elements: 16760250
[Tue May  7 22:39:48 2024] Chunk 14 elements: 16764790
[Tue May  7 22:39:48 2024] Chunk 15 elements: 16777083
[Tue May  7 22:39:48 2024] Wait for the chunk sort to finish.
[Tue May  7 23:47:24 2024] Postprocess 16 chunks.
[Tue May  7 23:55:30 2024] Extract 16 chunks. (368/86745 chunks finished)                                                                  
[Tue May  7 23:55:30 2024] Wait for the chunk extraction to finish.

khyox commented on August 20, 2024

FYI, our pre-print accompanying the release of a new Centrifuge nt database is online now: Addressing the dynamic nature of reference data: a new nt database for robust metagenomic classification. Any feedback will be welcome!

nicolo-tellini commented on August 20, 2024

Hi @khyox ,

I tried to download the db using wget, but it stopped at 071 compressed files.
Into how many files is the db divided? Is there a more appropriate way to download it so that it can resume from where it stopped?

best

nic

khyox commented on August 20, 2024

Hi @nicolo-tellini,

You should have them all then! We added the following line to the Data availability section of the manuscript to clarify that:

To ease the download process, the database is split in 71 ultra-compressed 7z files of 4 GiB or less with name format nt_wntr23_filt.cf.7z.*

Let me know if you aren't seeing that line in the version of the pre-print that you're working with.

The idea of splitting into 4 GiB files is that recovery after a failure is easy and cheap: you keep all the fully downloaded files, discard the last partially downloaded one, and resume the download from that file onward.
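A resumable download of the 71 parts could be scripted as below. The base URL is a placeholder (use the location given in the paper's Data availability section); the part naming follows the pattern quoted above:

```shell
# Placeholder URL; substitute the real location from the paper.
BASE_URL="https://example.org/nt_wntr23"

# wget -c resumes a partially downloaded file instead of restarting it,
# so rerunning this loop after a failure picks up where it stopped.
for i in $(seq -f "%03g" 1 71); do
    wget -c "${BASE_URL}/nt_wntr23_filt.cf.7z.${i}"
done

# Given the first volume, 7z reads the remaining .002-.071 parts itself.
7z x nt_wntr23_filt.cf.7z.001
```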

nicolo-tellini commented on August 20, 2024

Hi @khyox ,

I see, thanks. Sorry, my bad: I am not familiar with 7z.

khyox commented on August 20, 2024

No problem, @nicolo-tellini, thanks for asking! :)
