liaohaofu / adn
ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction
License: Other
The generated artifact image differs from the original artifact image: the original artifacts always appear around bone structures, but the generated artifacts do not. This makes the discriminator's job easy, and the generator then tends to transfer the artifact and the bone structure together to the normal image, so the bone structure gets removed along with the artifacts.
Did you run into this problem, and how can I solve it? Thank you!
Before training I reduced the batch size from 4 to 2, and I kept watching the GPU memory usage; no other programs were using GPU memory. The first 10 epochs were fine, but at epoch 11, around step 60, I saw the memory usage jump from about 90% straight to 100%, and then an error occurred. I am wondering whether I should use two GPUs. Thank you.
I am wondering how you set the hyperparameters when you trained MUNIT? Thanks a lot.
Hello, I would like to ask about the metal-artifact images after data preprocessing. Their overall intensity has changed a lot compared with the original images; is this because there are other parameters that need to be adjusted?
Can the CTparas parameters in the data preprocessing be changed, such as SOD, Angnum, angsize, etc.?
I would like to reproduce your comparison experiments, such as CNNMAR, LI, BHC and NMAR. Could you please share the code? I have also read your other articles on artifact removal. I am particularly interested in this topic but have only just started, so I hope you can help me.
Regarding the ADN vs. U-Net comparison on your data in the paper: I would like to know the U-Net's loss function, optimizer, and number of epochs, because the U-Net structure you provide has no related settings. When I used some general settings (for example, an L2/L1 loss), I did not achieve results similar to your experiment, so please share your detailed experimental configuration.
In addition, since there are no paired data in the real dataset to support training, yet you have given the related experimental results, how were your experimental data generated and how were the related parameters set?
Looking forward to your reply.
Does your code support multi-GPU training?
I ran your code and this problem occurred. It was fine when I ran it before, but I recently installed a new environment, and I wonder whether the current PyTorch version does not fit your code.
Here's the problem:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 256, 4, 4]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
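For what it's worth, this error typically comes from an in-place tensor update on a tensor that autograd saved for the backward pass. A minimal illustrative sketch of the pattern and its out-of-place fix (not taken from the ADN code):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.exp()        # exp() saves its output tensor for the backward pass

# Calling y.add_(1) here would mutate that saved tensor in place and
# raise the same "modified by an inplace operation" RuntimeError on
# backward(). The out-of-place form keeps the saved tensor intact:
z = y + 1          # instead of y.add_(1)
z.sum().backward() # succeeds: y was never mutated
print(x.grad)      # equals exp(x), the derivative of sum(exp(x) + 1)
```

In practice the offending operation is often a `+=`, `.add_()`, or an activation created with `inplace=True` somewhere in the model.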
When you train U-Net with a real dataset, how do you generate the masks?
Thank you.
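For reference, a common way in the MAR literature to obtain a metal mask from a real CT image is simple intensity thresholding, since metal is far denser than tissue. A hedged sketch (the 2500 HU threshold is a typical literature default, not necessarily what this repository uses):

```python
import numpy as np

def metal_mask(ct_hu: np.ndarray, threshold: float = 2500.0) -> np.ndarray:
    """Binary metal mask: voxels above `threshold` (in Hounsfield units)
    are treated as metal. The threshold is a common literature default,
    not a value taken from the ADN code."""
    return ct_hu > threshold

# Toy 2x2 slice: air, soft tissue, and two metal-range voxels.
slice_hu = np.array([[-1000.0, 40.0], [3000.0, 2600.0]])
print(metal_mask(slice_hu))  # [[False False] [ True  True]]
```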
Hey haofu,
Thanks for your release of the ADN code, it really helps me a lot in the MAR research.
I also found another paper called DuDoNet.
It seems they use the same setting, but the performance differs: e.g., the LI baseline in ADN reaches 32 dB (PSNR), while in DuDoNet it is only 25.5 dB.
Do they use different experimental settings? I found that the source images are 512x512, and CNNMAR uses 512x512 for testing, while DuDoNet uses 416x416, and the default setting in the ADN code is 256x256.
Is 256x256 used in ADN? And which normalization is used in the default setting?
thanks!
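One possible source of such gaps: PSNR depends on the assumed peak value of the normalized images as well as the resolution, so two papers computing the same formula on differently normalized data report different numbers. A minimal sketch of the standard PSNR formula, with the peak value made an explicit assumption:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, peak: float = 1.0) -> float:
    """PSNR in dB. `peak` is the assumed dynamic range of the images;
    different normalizations (and thus peaks) shift the reported number,
    which is one reason published MAR results can disagree."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform 0.1 error at peak 1.0 gives MSE = 0.01, i.e. 20 dB.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)
print(round(psnr(b, a), 2))  # 20.0
```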
Is there no gradient penalty term?
Thank you very much for sharing the source code, which is very useful for me. However, I got an error when preprocessing the data: it did not split the data, but just created two empty folders, train and test.
I have been stuck on an error that shows up when trying to train the model. The dataset I used is spineweb.
The error is as follows:
The traceback shows that there is an in-place operation that modifies a node in the computational graph.
The configuration of my environment: operating system: Debian 10, PyTorch version: 1.7, Python version: 3.7.10.
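When tracking down which operation mutated the tensor, PyTorch's anomaly detection can be switched on; it slows training, so it is for diagnosis only. An illustrative sketch reproducing the error class (not code from this repository):

```python
import torch

# Extra bookkeeping so the backward-pass error also carries a traceback
# of the forward operation that produced the offending tensor.
torch.autograd.set_detect_anomaly(True)

x = torch.ones(2, requires_grad=True)
y = x.exp()          # exp() saves its output for the backward pass
y.add_(1)            # in-place edit of that saved tensor

try:
    y.sum().backward()
except RuntimeError as err:
    # With anomaly detection on, the message points at the y.add_(1) line.
    print("caught:", type(err).__name__)
```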
I'm sorry to bother you, but the original author of this article is not replying now. I would like to ask how you reproduced his dataset generation, that is, how you generated the dataset with his method when using the spine data, and how you used code to simulate images with metal artifacts. Can you share your code? You can check his GitHub issues, but he didn't respond. I'm still a beginner; I hope you can help me.
Thank you very much.
Hello, there is a question that bothers me: can the test result be a 3D image? Thank you for your answer.
Hello, thanks for your work, it helps me a lot! I would like to ask whether your code can be trained on multiple GPUs. I tried to modify the code, but it still didn't work on multiple GPUs.
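The repository does not document multi-GPU support; the generic PyTorch pattern for data-parallel training looks like the sketch below. Whether it works here depends on how ADN's trainer accesses model attributes (the `nn.Linear` stand-in is an assumption, not the ADN generator):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)          # stand-in for the ADN generator
if torch.cuda.device_count() > 1:
    # Splits each batch across GPUs and gathers outputs on device 0.
    model = nn.DataParallel(model).cuda()

x = torch.randn(4, 8)
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)
print(out.shape)                 # torch.Size([4, 2])
```

Note that wrapping in `DataParallel` moves attributes under `model.module`, which is a common reason naive modifications of a single-GPU trainer fail.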
Line 96 in 61d2caa
Hi haofu,
This is really brilliant work. While doing some work based on your code, I found some things that may be problems, but I cannot determine whether they are right, so I opened this issue to discuss them with you. Sorry to bother you.
1. The first is line 83, "def _update(self, backprop=True):", in your "models/base.py".
I think we should call "backprop_off(self)" after we optimize the network, and then "backprop_on(self)" via "def _clear(self, backprop=True)" in the next iteration. But the discriminator's backprop does not seem to be turned off when training ADN.
2. A question about the implementation of the cycle loss.
When training the ADN network, you encode the normal content information from the fake artifact image and then decode it back to an artifact-free image to get the cycle loss.
But I am not sure whether this second forward pass breaks the backpropagation of the artifact reconstruction loss, because in my opinion the features have changed.
These are the questions that bother me. I hope to get your reply; thank you very much!
Best regards,
pengbol
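On question 1 above: the usual GAN recipe is to disable the discriminator's gradients during the generator update and re-enable them for the discriminator step. Whether ADN's "backprop_on"/"backprop_off" should be called in exactly that order is for the author to confirm; the generic pattern (with stand-in linear networks, not ADN's modules) is:

```python
import torch
import torch.nn as nn

def set_requires_grad(net: nn.Module, flag: bool) -> None:
    # Similar in spirit to backprop_on/backprop_off in models/base.py.
    for p in net.parameters():
        p.requires_grad = flag

D = nn.Linear(4, 1)               # stand-in discriminator
G = nn.Linear(4, 4)               # stand-in generator

# Generator step: D's weights must not accumulate gradients, but
# gradients still flow *through* D back to G.
set_requires_grad(D, False)
fake = G(torch.randn(2, 4))
g_loss = D(fake).mean()
g_loss.backward()

# Discriminator step: turn its gradients back on before its own update.
set_requires_grad(D, True)
```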
Since I have compatibility problems, as in previous issues, I would like to know which PyTorch version was originally used, to avoid errors.
Hello, sir.
Even with README.md, I don't understand how to prepare the data for spineweb. Can you give me some guidance?
The Google Drive link is blocked here, so I can't access it.