
eccv2018_crossnet_refsr's People

Contributors

htzheng


eccv2018_crossnet_refsr's Issues

Asking for the code that generates the input data in .h5 format

Can you upload the code file used to convert the original image data to the .h5 format, or share one example of the .h5 files? I don't know what the concrete contents of an .h5 file are. Thank you for your attention!

What's the difference between the three modes of MultiscaleWarpingNet?

I see the training code writes: net_pred = net(buff, mode='input_img1_HR')
and when I look up the model definition I see:

if mode == 'input_img2_LR':
    input_img2_LR = torch.from_numpy(buff['input_img2_LR']).cuda()
    flow = self.FlowNet(input_img1_LR, input_img2_LR)
elif mode == 'input_img2_HR':
    flow = self.FlowNet(input_img1_LR, input_img2_HR)
elif mode == 'input_img1_HR':
    flow = self.FlowNet(input_img1_HR, input_img2_HR)

As far as I understand, input_img1_LR is the low-resolution image, input_img1_HR is its ground truth, and input_img2_HR is the high-resolution reference image. Am I right?
So when I use the model in an application, I can only obtain input_img1_LR and input_img2_HR, yet when the mode is set to 'input_img1_HR', the flow is estimated from input_img1_HR and input_img2_HR.
Did I misunderstand the code, or is there a bug in it?
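To make the asker's point concrete, here is a tiny pure-Python sketch of which inputs each mode feeds to FlowNet, based on the branch quoted above. Under the asker's reading, only the modes that avoid the ground truth 'input_img1_HR' would be usable at test time; whether that is a bug or a deliberate training-only oracle is exactly the open question.

```python
# Which image pair each mode feeds to FlowNet, per the branch quoted above.
FLOW_INPUTS = {
    "input_img2_LR": ("input_img1_LR", "input_img2_LR"),
    "input_img2_HR": ("input_img1_LR", "input_img2_HR"),
    "input_img1_HR": ("input_img1_HR", "input_img2_HR"),  # needs ground truth
}

def usable_at_test_time(mode,
                        available=("input_img1_LR", "input_img2_LR",
                                   "input_img2_HR")):
    """A mode is usable at test time only if it needs no ground-truth input."""
    return all(k in available for k in FLOW_INPUTS[mode])
```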

How to test another dataset

I trained the flower dataset with your network and now want to test on another dataset that contains different flowers.
I found that your code loads both the train and test datasets, but I can't find where the loaded test dataset is used. I want to test my images with the checkpoint file that I trained.
Furthermore, if I want to see the resulting HR image, where can I find it? I modified the code in "train_multi_warping.py" as below, but I can't be sure this is the correct way to view the HR output image.

        for i in range(pre_npy.shape[0]):
            psnr_ += psnr(pre_npy[i], label_img_npy[i]) / pre_npy.shape[0]

            img_bgr = pre_npy[i].transpose(1, 2, 0)
            img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
            # cv2.imshow('result', img_rgb)
            imgplot = plt.imshow(img_rgb)
            plt.show()
            # cv2.imwrite('./result/image_' + str(i) + '.png', img_rgb)
            # cv2.waitKey(500)
            print(i, psnr(pre_npy[i], label_img_npy[i]))
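For reference, the loop above accumulates a per-batch average PSNR. A minimal NumPy definition of PSNR, assuming an 8-bit [0, 255] value range, looks like the following; the repo's own psnr helper may differ in its assumed data range.

```python
import numpy as np

def psnr(pred, target, data_range=255.0):
    """Peak signal-to-noise ratio (dB) between two same-shaped arrays."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# e.g. a constant error of 10 per pixel gives mse = 100,
# so psnr = 10 * log10(255**2 / 100) ≈ 28.13 dB
```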

I also tried changing "train_multi_warping.py" as shown below to test images, but I get the error "too many indices for tensor of dimension 4".

        buff_val = dataset_test.nextBatch_new(batchsize=config['batch_size'], shuffle=True, view_mode = 'Random', augmentation = False, offset_augmentation=config['data_displacement_augmentation'], crop_shape = config['train_data_crop_shape'])
        
        val_img1_LR = buff_val['input_img1_LR']
        val_img2_HR = buff_val['input_img2_HR']

        val_img = np.concatenate((val_img1_LR,val_img2_HR),axis = 1)
        val_label_img = buff_val['input_img1_HR']

        val_img = torch.from_numpy(val_img)
        val_img = val_img.cuda()
        with torch.no_grad():
            val_pred = net(val_img) 
            val_pred_npy = val_pred.cpu().numpy()
            psnr_ = 0
            for i in range(val_pred_npy.shape[0]):
                psnr_ += psnr(val_pred_npy[i],val_label_img[i]) / val_pred_npy.shape[0]
                print (i,psnr(val_pred_npy[i],val_label_img[i]))      
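Regardless of the indexing error, saving the predictions eventually requires converting the (N, C, H, W) float batch into (H, W, C) uint8 images before cv2.imwrite. A NumPy-only sketch, assuming the network outputs are roughly in [0, 255] as the PSNR code in this thread suggests:

```python
import numpy as np

def chw_batch_to_uint8_images(batch):
    """Convert a (N, C, H, W) float batch to a list of (H, W, C) uint8 images.

    Assumes values are roughly in [0, 255]; out-of-range values are clipped.
    """
    images = []
    for chw in batch:
        hwc = np.transpose(chw, (1, 2, 0))   # channels last for OpenCV/PIL
        hwc = np.clip(hwc, 0.0, 255.0)       # guard against over/undershoot
        images.append(hwc.round().astype(np.uint8))
    return images
```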

About Network Structure

I am very interested in your work, but I have some confusion about the structure. Since super-resolution is a pixel-level task, the down-sampling operation of the encoder in the U-Net[44] structure will cause a loss of information and should be avoided as much as possible. Why do you think it can be applied to SR? Thank you!
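One common answer (offered here as a sketch, not the authors' stated rationale) is that U-Net's skip connections carry the full-resolution encoder features directly to the decoder, so the downsampled bottleneck does not discard them outright. A toy NumPy illustration of that data path, not the paper's actual architecture:

```python
import numpy as np

def avg_pool2(x):
    """2x downsample a (H, W, C) feature map by 2x2 average pooling."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbour upsample of a (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_level(x):
    skip = x                           # full-resolution features kept aside
    bottleneck = avg_pool2(x)          # lossy downsample in the encoder
    up = upsample2(bottleneck)         # decoder path
    # The skip connection re-injects the detail the bottleneck lost.
    return np.concatenate([up, skip], axis=-1)
```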

No module named PWCNet

Hi, I am very interested in your paper and code.

However, whenever I try to train images with your code, the following error comes out:
"No module named PWCNet"

I thought PWCNet was necessary for training, since your paper mentions that the 'FlowNet', which contains the PWCNet module, computes optical flow between light-field data. So I installed a module with that name, but it doesn't work; I get the same error as above.

Also, looking over your code, I found that PWCNet is only referenced in '__init__.py'. So I wonder whether this module is really needed to run your code, and if I do have to install it, how can I fix this problem?
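Until the maintainer clarifies, one workaround (an assumption about intent, not the authors' fix) is to guard the import so code paths that never touch PWCNet can still run:

```python
# Hypothetical guard: make the PWCNet import optional so the rest of the
# package loads when the module is absent. Whether CrossNet actually needs
# PWCNet at train time is exactly what this issue asks; this only defers
# the failure to the point of use.
try:
    import PWCNet  # external optical-flow module referenced in __init__.py
except ImportError:
    PWCNet = None

def require_pwcnet():
    """Raise a clear error only when a PWCNet-dependent path is reached."""
    if PWCNet is None:
        raise RuntimeError("PWCNet is not installed, but this code path needs it")
    return PWCNet
```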

How to organize the input images and test

Hi, I want to know how to organize the input h5 file. Should img_MDSR be prepared by myself? Does that mean I have to run MDSR to get the image first? And I would really like to know the details of how to test a single image. Looking forward to your reply.

How to test (how to edit the code for testing)

Hi, I am really interested in your network.
I have already trained your network on several datasets, and now I want to test images with the checkpoint file. However, I am not sure how to do this. I found some lines for testing in 'train_multi_warping.py' and tried to uncomment them, but I don't know which lines should or should not be uncommented for testing.
I would really appreciate it if you could guide me on how to test an image with the trained model.

test image

If I want to test a single image on this network, how do I do it?
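A hedged sketch of the first step: wrapping a single LR image and its HR reference as a batch of one, matching the (N, C, H, W) layout and dict keys seen in the training code. Feeding the dict through the network would then follow the repo's net(buff, mode=...) call; the [0, 255] value range is an assumption and should match whatever the model was trained on.

```python
import numpy as np

def make_single_image_batch(img1_lr, img2_hr):
    """Wrap two (H, W, C) images as a batch dict of (1, C, H, W) float32.

    Keys follow the training code quoted in the issues above; layout and
    value range are assumptions, not confirmed by the authors.
    """
    def to_nchw(img):
        chw = np.transpose(img.astype(np.float32), (2, 0, 1))  # channels first
        return chw[np.newaxis, ...]                            # batch of one
    return {
        "input_img1_LR": to_nchw(img1_lr),
        "input_img2_HR": to_nchw(img2_hr),
    }
```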
