dulucas / displacement_field
Official implementation of the paper "Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields" (CVPR 2020)
Is there another way to download this dataset? Thank you.
In both the RGB-guided version and the depth-only version, you divide the UNet output by 320. Could you explain how you arrived at this number, and why this step is needed?
Many thanks!
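For context, one plausible reading of that constant: if the refinement resamples the depth map with torch.nn.functional.grid_sample, whose sampling coordinates are normalized to [-1, 1], then dividing a pixel-space displacement by half the image width (320 for 640-wide NYUv2 frames) converts it into those normalized units. The sketch below illustrates that interpretation only; the function, tensor names, and the shared 320 for both axes are my assumptions, not code from this repo.

import torch
import torch.nn.functional as F

def resample_depth(depth, disp, half_size=320.0):
    # depth: (B, 1, H, W) blurry input depth map.
    # disp:  (B, 2, H, W) per-pixel displacement in pixels, channels (dx, dy).
    # Assumption: dividing by half the image size maps pixel offsets into
    # grid_sample's normalized [-1, 1] coordinate range.
    B, _, H, W = depth.shape
    # Base sampling grid over the image in normalized coordinates.
    ys = torch.linspace(-1, 1, H, device=depth.device)
    xs = torch.linspace(-1, 1, W, device=depth.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Shift each pixel's sampling location by its (normalized) displacement.
    offset = (disp / half_size).permute(0, 2, 3, 1)  # (B, H, W, 2)
    return F.grid_sample(depth, grid + offset, align_corners=True)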
Hello, thanks for sharing your great work.
Could you kindly send me the pre-trained model so I can test it?
Regards
Hello, I now want to test a different set of images. How can I do that? Should I rewrite the code? Thanks
I do not know the reason
Is this method supervised? Do we need real depth ground truth as the supervision signal?
I trained the model on the data as described in the README, but now how do I view the output? That is my question. Please help me.
Hi, I wrote a demo to run a test. The model was trained with the code in this repo, but the visualization of the result is not as good as expected. I was wondering if I did something wrong. Here are the demo code and testing images.
Code:
import numpy as np
import torch
import cv2
import matplotlib.pyplot as plt
from df import Displacement_Field

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the checkpoint trained with the code in this repo.
model = Displacement_Field()
pth_path = 'xxx/Displacement_Field/log/nyu/df_nyu_rgb_guidance/snapshot/epoch-19.pth'
model.load_state_dict(torch.load(pth_path, map_location=device)['model'])
model.to(device)
model.eval()

# Read the guidance image and the depth map as single-channel arrays.
image = cv2.imread('./image.png', cv2.IMREAD_GRAYSCALE)
depth = cv2.imread('./depth.png', cv2.IMREAD_GRAYSCALE)
image = torch.from_numpy(image.astype(np.float32) / 255.0).unsqueeze(0).unsqueeze(0).to(device)
depth = torch.from_numpy(depth.astype(np.float32) / 255.0).unsqueeze(0).unsqueeze(0).to(device)
# Rescale the depth to [0, 1].
depth = (depth - depth.min()) / (depth.max() - depth.min())

# Inference only, so no gradients are needed.
with torch.no_grad():
    pred = model(image, depth)
pred = pred.cpu().numpy()[0, 0]
plt.imsave('./result.png', pred, cmap='gray')
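If the saved result looks washed out, a side-by-side plot of the input depth and the prediction can make it easier to judge whether the refinement actually changed anything near the boundaries. A small sketch, reusing the depth and pred variables from the demo above:

import matplotlib.pyplot as plt

# Compare the normalized input depth with the refined prediction.
depth_np = depth.cpu().numpy()[0, 0]
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].imshow(depth_np, cmap='gray')
axes[0].set_title('input depth')
axes[1].imshow(pred, cmap='gray')
axes[1].set_title('refined depth')
for ax in axes:
    ax.axis('off')
plt.savefig('./compare.png', bbox_inches='tight')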
Hi. Can you provide pre-trained model weights?
Thanks for your work.
I ran into a problem when executing "download.sh". The terminal reports:
"Connecting to horatio.cs.nyu.edu (horatio.cs.nyu.edu)|216.165.22.17|:80... failed: Connection refused".
Hi, in your paper, is the refinement model trained separately for each depth estimation method, or in the same way as in this repo?
Why is the batch size set to 1? I tried a value larger than 1 and the output was very strange.
Thanks for your work.
I have trained the model; is there any testing code? For example, how can I test my trained model on the NYUv2OC++ dataset?
Hi! I have a question about how to visualize the displacement field. Thanks!
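One common way to inspect a two-channel displacement field is a subsampled quiver plot. A sketch, assuming the field is a NumPy array of shape (2, H, W) holding per-pixel (dx, dy) offsets (this is not code from the repo):

import numpy as np
import matplotlib.pyplot as plt

def plot_displacement_field(disp, step=8, path='./field.png'):
    # disp: numpy array of shape (2, H, W) with per-pixel (dx, dy) offsets.
    dx, dy = disp[0], disp[1]
    H, W = dx.shape
    ys, xs = np.mgrid[0:H:step, 0:W:step]
    plt.figure(figsize=(8, 6))
    # Negate dy so arrows follow image coordinates (y grows downward),
    # and color each arrow by the displacement magnitude.
    plt.quiver(xs, ys, dx[::step, ::step], -dy[::step, ::step],
               np.hypot(dx, dy)[::step, ::step], angles='xy')
    plt.gca().invert_yaxis()
    plt.title('displacement field (subsampled)')
    plt.savefig(path, bbox_inches='tight')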