tyshiwo / FSRNet
Demo code for our CVPR'18 paper "FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors" (SPOTLIGHT Presentation)
The paper is very good.
Can you release the detailed structure of the prior estimation network? Thank you.
Is the residual block's input 64 channels and its output 128 channels?
Is that right?
About the HG blocks, what is the size of each layer?
Thank you very much.
Best wishes.
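For reference, a 64→128-channel residual block needs a projection on its skip connection. The sketch below is one plausible reading of such a block, not the authors' exact layer configuration; the pre-activation ordering and the 1×1 shortcut are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Hypothetical residual block mapping 64 -> 128 channels.

    Because in_ch != out_ch, the identity shortcut cannot be used
    directly; a 1x1 conv projects the input to the output width.
    This is a guess at the structure, not the paper's exact block.
    """

    def __init__(self, in_ch=64, out_ch=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        # 1x1 projection so the shortcut matches the output channels
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.body(x) + self.skip(x)

x = torch.randn(1, 64, 32, 32)
y = ResidualBlock()(x)
print(y.shape)  # torch.Size([1, 128, 32, 32])
```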
How do you use the landmarks to process the ground-truth parsing maps? I saw in a previous issue that the method is called GFC, but I cannot find the paper. Could you list it? Thank you!
Hi
I'm trying to implement the training model from scratch, without using the pre-trained one (in Python/PyTorch).
As I understand it, the LR input images are resized to 128×128 using bicubic interpolation, and the training images are resized to 128×128 (without bicubic interpolation). Is that correct? But what about the target images? Do we resize them as well, or leave them at 1024×1024?
Could you describe this, please?
Thanks.
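One plausible answer, sketched below as an assumption rather than the authors' actual pipeline: in the ×8 setting, the HR target is the 128×128 crop itself (not the full 1024×1024 image), and the LR input is that same crop bicubically downsampled to 16×16 and bicubically upsampled back to 128×128. The function name and sizes here are illustrative.

```python
from PIL import Image

def make_training_pair(hr_crop, up_size=128, scale=8):
    """Hypothetical preprocessing for one training pair.

    Assumption: the target is the 128x128 HR crop, and the network
    input is the same crop degraded by bicubic down/up-sampling.
    """
    hr = hr_crop.resize((up_size, up_size), Image.BICUBIC)
    # Bicubic downsample to 16x16, then upsample back to 128x128
    lr_small = hr.resize((up_size // scale, up_size // scale), Image.BICUBIC)
    lr_input = lr_small.resize((up_size, up_size), Image.BICUBIC)
    return lr_input, hr

crop = Image.new("RGB", (1024, 1024))  # stand-in for a face crop
lr, hr = make_training_pair(crop)
print(lr.size, hr.size)  # (128, 128) (128, 128)
```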
Hi, tyshiwo!
Thanks for your great work. While implementing your project, I was confused about why different models are needed for different datasets. As shown in your paper, the only input this network needs at test time is an LR image. Are there significant differences between the Helen and CelebA datasets, for example in the crop strategy, that make separate models necessary?
I tested the Helen dataset with the CelebA model and vice versa, and it turned out that separate models for the two datasets are indeed needed.
Hello, first of all, thank you for your great work!
And I'd like to know which interpolation method is used when you resize the cropped HR image to 128×128.
Looking forward to your reply!
Hi, can you provide the code for cropping the Helen data? I'm having trouble preprocessing it. Thank you.
Hi,
I found a conv layer with 197 output channels in your provided model Helen_8x_cpu.t7:
th> model.modules[50]
nn.SpatialConvolution(128 -> 197, 1x1) [0.0001s]
As your paper states, this should be a 1×1 conv mapping the hourglass feature map to the landmark heatmaps and the parsing maps.
However, the number of ground-truth landmarks is 194 and the number of parsing maps is 11. I don't understand where the "197" comes from.
Thanks for your reply.
Hi,
Can you provide brief setup instructions for Ubuntu 18?
Also, can we run the test on CPU?
Hi, Tai! I'm reading your paper and trying to reproduce your work. My question is how the coarse SR network is optimized. Do its gradients come from the end of the network?
Also, at the beginning of training, how are the initial weights of the 3×3 conv layers in the coarse SR network set, to ensure that the generated images are of better quality than the original low-resolution images?
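As I read the paper, the coarse SR network is trained jointly with the rest: the final reconstruction loss backpropagates through the fine network into the coarse one, and a loss on the coarse output adds direct supervision, so no special initialization is needed. The sketch below illustrates this gradient flow with trivial placeholder networks, not the paper's architecture, and omits the prior loss.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the coarse and fine SR networks; the real
# architectures are in the paper. These exist only to show how the
# gradient reaches the coarse network.
coarse_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
fine_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

mse = nn.MSELoss()
lr_input = torch.randn(1, 3, 128, 128)
hr_target = torch.randn(1, 3, 128, 128)

y_coarse = coarse_net(lr_input)   # coarse SR estimate
y_final = fine_net(y_coarse)      # fine SR estimate (priors omitted)

# Joint objective: the final loss backpropagates through fine_net into
# coarse_net, and the coarse loss supervises the coarse output directly.
loss = mse(y_final, hr_target) + mse(y_coarse, hr_target)
loss.backward()

print(coarse_net.weight.grad is not None)  # True
```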
Hi, I can't understand how to get the ground-truth priors. Are they images? How can I generate them?
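Landmark priors are commonly encoded as one Gaussian heatmap per landmark, centered at the landmark coordinates (parsing maps typically come from a separate face-parsing annotation or tool). Below is a sketch of that standard heatmap encoding; the map size and sigma are assumptions, not the paper's exact settings.

```python
import numpy as np

def landmark_heatmaps(landmarks, size=64, sigma=1.0):
    """One Gaussian heatmap per (x, y) landmark.

    A standard encoding for facial landmark priors; size and sigma
    here are illustrative guesses, not the authors' values.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    maps = np.zeros((len(landmarks), size, size), dtype=np.float32)
    for i, (x, y) in enumerate(landmarks):
        # Gaussian bump peaking at the landmark location
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

maps = landmark_heatmaps([(10, 20), (32, 32)], size=64)
print(maps.shape)       # (2, 64, 64)
print(maps[0, 20, 10])  # 1.0 (peak at the first landmark)
```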
I ran the `sh test_CelebA_8x.sh` command on my PC, but it failed with "Can't open test_CelebA_8x.sh".
Hello, could you provide the x4 SR pretrained model?
I'd like to test your pretrained model at a scale factor of 4.
Thank you very much!
Thanks for your great work!
I want to train FSRNet on my own database. Could you please provide training codes?
Hi there,
Thanks for this repo!
A question though: In your paper you write "For celebA dataset, we use the first 18,000 images for training, and the following 100 images for evaluation."
However, in this repo you provide 1,000 images for CelebA, and they are the last 1,000 of the dataset.
So do you use images 000001.jpg up to 018000.jpg for training and 018000.jpg to 018100.jpg for evaluation, as the paper says, or is that perhaps a typo?
It would be nice if you could clear this up for reproducibility :-)
Thanks in advance for your help!
Hi @tyshiwo, thanks for your work.
When I ran the model on my own image, the results were not good.
The image I used is a face image resized to (128, 128).
Can you explain in more detail how to generate the LR image?
Thanks a lot!
Hi, can you provide the training code?
I used `sh test_CelebA_8x.sh` to run the model, but it reports `stdin:1: '=' expected near 'test_CelebA_8x'`. I don't know how to deal with this.
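For what it's worth, `stdin:1: '=' expected near ...` is the message Lua prints when it is asked to parse shell syntax, so one likely cause (an assumption, since the exact command run is not shown) is that the script was passed to the Torch interpreter instead of the shell:

```shell
# Assumption: the script was launched with the Torch/Lua interpreter,
# which cannot parse shell syntax and fails with
#   stdin:1: '=' expected near '...'
# Shell scripts should be run with sh (and .lua files with th):
#   wrong: th test_CelebA_8x.sh
#   right: sh test_CelebA_8x.sh
echo "sh test_CelebA_8x.sh"
```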
Dear Mr. Tai:
I am a graduate student at Anhui University of Technology in China. Recently, I read one of your articles, titled "FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors", from CVPR 2018. But when I ran the FSRNet code you have provided on GitHub, I found that its performance is not good: the super-resolved images have a lot of noise. Could it be that you have changed the model and did not update it on GitHub? Could you send me a copy of your model? Thanks a lot! My email is [email protected]
By the way, I also want to know how to get the ground-truth images from the CelebA and Helen datasets.
Thank you for your kind consideration of this request.
Sincerely, Zhengxiaoyu