Comments (17)
Hi Michel,
Regarding the MarginPolish error: @tpesout can help you with it.
HAC model: All models are trained on HAC base-called data. We are still assessing the fast base-calling model and its applicability.
And regarding the model: we are trying to assess and provide a model for the latest guppy version, but the summer hiatus and the expense of base-calling are holding us back. As we regroup after summer and get the newly base-called data, we will publish updated models. For now, guppy 2.3.3 would be the one to use for 3.0.2, as the RLE confusion matrices match closely.
from helen.
I was able to replicate the model error you were getting. Can you delete the local copy of your guppy 235 file and run this:
wget https://raw.githubusercontent.com/UCSC-nanopore-cgl/MarginPolish/master/params/allParams.np.human.guppy-ff-235.json
This downloads the raw JSON file directly and makes sure you don't end up with HTML content instead.
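If you want to double-check that an earlier download wasn't actually a saved HTML error page, a quick sanity check is to try parsing the file as JSON. A minimal sketch (the function name is mine, not part of MarginPolish):

```python
import json

def check_params_file(path):
    """Return True if the file at `path` parses as JSON.

    An accidental HTML download (e.g. a saved GitHub error page)
    fails to parse and returns False.
    """
    with open(path) as fh:
        try:
            json.load(fh)
            return True
        except json.JSONDecodeError:
            return False
```

A file that starts with `<!DOCTYPE html>` will fail this check immediately, which is exactly the failure mode above.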
Great, thanks for fixing this. Works fine now!
Will just run a few model comparisons on my guppy305-basecalled data to see what performs best.
I also have some datasets from different basecalling models (from older and newer firmware). Not sure yet how to apply HELEN to such cases. Do you have any suggestions?
I assume marginPolish and HELEN's main source of confidence in consensus is coverage, so if I split the dataset into sets of same-guppy-basecalled data, I will probably lose a lot of "consensus power" as coverage drops.
@MichelMoser, we describe why we need different models for different versions of basecallers:
https://www.biorxiv.org/content/biorxiv/early/2019/07/26/715722.full.pdf
If you look at Figure 18 on page 41, you'll see that the confusion matrices for the two basecallers differ. I think 3.0.5 is the closest to 2.3.3, so you would get the best results for 3.0.5 data with the 2.3.3 model. Do you have training data (HG002) for these basecaller versions? If yes, you can train a model for each version and use that.
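To make "the confusion matrix matches closely" concrete, one simple way to compare two basecallers' confusion matrices is a mean absolute difference after row-normalising each matrix. A sketch with toy 2x2 matrices (not the actual matrices from the paper):

```python
def row_normalize(m):
    """Scale each row to sum to 1 (rows summing to 0 are left as-is)."""
    return [[v / s for v in row] if (s := sum(row)) else list(row) for row in m]

def confusion_distance(a, b):
    """Mean absolute difference between two row-normalised confusion matrices.

    0 means identical error profiles; larger values mean the basecallers
    confuse bases/run-lengths differently.
    """
    na, nb = row_normalize(a), row_normalize(b)
    n = len(na) * len(na[0])
    return sum(abs(x - y) for ra, rb in zip(na, nb) for x, y in zip(ra, rb)) / n
```

Under this kind of metric, "use the 2.3.3 model for 3.0.2 data" just means the two versions minimise the distance between their confusion matrices.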
@kishwarshafin, yes, I totally understand why different models are needed to match the different guppy basecall models. I wonder how I could get optimal polishing results with mP+H for genomes assembled from a combination of different nanopore reads (guppy233-called and guppy235-called). We work with non-model organisms (fishes) without reference genomes, so training is difficult for us to do.
I see. Sorry, I misinterpreted your question. In that case you also need to consider that different basecallers report different base qualities, and the models rely heavily on the qualities reported by the basecaller. It is extremely important that you pick the right model for the right basecaller. I would suggest that you gather all your data and basecall it with a single guppy version, e.g. 3.0.5; then, when we release a model for 3.0.5 or higher, you can use that. Mixing 233 and 235 will make the model perform badly.
It's a bit frustrating to keep up with all the frequent basecaller upgrades, but that's the best we can do for now.
I can confidently say that I was wrong in my statement last month. I ran a simple test by mixing reads from 235 and 233, performed polishing, and it looked fine. So you can mix your reads and do the polishing to get better results. Sorry for being so late on this.
That's great news. Thanks for the follow-up.
You did the polishing of the mixed reads using the 235 model?
Yes, the data was a 233 + 235 mixture and the model was 235. Also, if you want to wait, we are working on a 305 model and should be able to deliver it in a week or so.
Thanks for your patience.
Great, can't wait for this!
The marginPolish model has been updated on master, and here's the HELEN model for 305:
HELEN_guppy305_model
There are still a few tests that we need to do before making it public but the initial tests look very good. :-) Good luck!
Great! Already started a run to see how it compares to previous models. BUSCO will tell =)
Thank you so much!
Great! Please share the results if you can.
Hi,
I did a comparison of the newest models for HELEN (guppy305) and marginPolish against the previous ones (guppy235) and against medaka (r941_prom_high).
Unfortunately, the guppy305 model did not perform well on our data.
Did you use MinION data for training? We exclusively have PromethION data for our genomes.
Here is the overview:
allParams.np.human.guppy-ff-305.json + guppy305_hg002_splitRleWeight.pkl: C:80.3%[S:77.2%,D:3.1%],F:11.1%,M:8.6%,n:4584
allParams.np.human.guppy-ff-235.json + r941_flip235_v001.pkl: C:86.4%[S:83.4%,D:3.0%],F:5.5%,M:8.1%,n:4584
racon (1 round) + medaka: C:90.2%[S:87.1%,D:3.1%],F:3.9%,M:5.9%,n:4584
marginPolish tpesout/margin_polish:latest
HELEN commit 84f3575
medaka version 0.9.1
racon version v1.4.7
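As an aside, BUSCO short-summary strings like the ones above can be parsed programmatically when comparing many polishing runs. A minimal sketch (the function and regex are mine, written against the `C:...[S:...,D:...],F:...,M:...,n:...` format shown above):

```python
import re

def parse_busco(summary):
    """Parse a BUSCO short-summary string into a dict.

    C/S/D/F/M are percentages (floats); n is the number of BUSCOs searched.
    """
    m = re.match(
        r"C:(?P<C>[\d.]+)%\[S:(?P<S>[\d.]+)%,D:(?P<D>[\d.]+)%\],"
        r"F:(?P<F>[\d.]+)%,M:(?P<M>[\d.]+)%,n:(?P<n>\d+)",
        summary,
    )
    if m is None:
        raise ValueError(f"unrecognised BUSCO summary: {summary!r}")
    d = {k: float(v) for k, v in m.groupdict().items()}
    d["n"] = int(d["n"])
    return d
```

For example, `parse_busco("C:80.3%[S:77.2%,D:3.1%],F:11.1%,M:8.6%,n:4584")["C"]` gives the complete-BUSCO percentage for the guppy305 run.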
Hi @MichelMoser ,
I have a few questions for you:
- I believe it's a non-model organism but do you have a good assembly of that organism that can be used as a reference?
- Would you be willing to test another version of MP+HELEN that is in development, or to provide us the reads and the assembly so we can do it for you?
From our findings, we saw that BUSCO is not a great metric for assembly accuracy, but a difference this large is concerning.
Hi,
I was surprised by the result as well.
- Yes, I tested it on a non-model organism because it has a relatively small genome (700 Mbp) and the nanopore assembly is very good (N50 of 18 Mbp). Unfortunately we currently only have nanopore data available for this species; Illumina data is yet to come.
What kind of metric do you prefer for assessing assembly accuracy? Comparison to Illumina contigs?
- Yes, I would be happy to test other versions or share datasets. We have multiple de novo genome assemblies and nanopore datasets from several species (called with either guppy 2.2.3 or 3.0.5) with different genome sizes. I could switch to a larger genome (2.5 Gbp) where nanopore + Illumina data is available and references exist.
Let me know what you think. email: michel.moser at nmbu.no
I'll follow up with you over email and close this issue here, as it's not related to the pipeline.