Comments (27)
Hi - I have now had a chance to look into the Cityscapes issue a bit more closely.
I have been able to reproduce the results in the paper, but it requires a few tweaks:
- batch size of 8
- earlier freezing of the teacher + pose networks
- using evaluation batch norm statistics for the teacher + pose networks after they are frozen
Even with these changes, it appears that Cityscapes scores are very sensitive to initialisation.
For the paper, the main experimentation was done on KITTI, so I didn't realise quite how much the scores can vary on Cityscapes. I would guess it is because of the far greater number of moving objects, something which Monodepth2 (our teacher network) can struggle with. I presume that for Cityscapes, using a teacher network which can better handle moving objects (e.g. one using optical flow) would be highly beneficial, and would allow ManyDepth to perform even better.
Regardless, I've made some changes to the code, which I will push shortly, that:
- allow freezing the teacher based on step number rather than epoch number
- change the batch norm behaviour as described above
- allow the random seed to be set via the command line
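As a rough sketch of the freezing and batch-norm tweaks (my own minimal version, not the repository's actual code), freezing a network in PyTorch and switching it to evaluation-mode batch norm statistics might look like:

```python
import torch.nn as nn

def freeze_network(net: nn.Module) -> None:
    """Stop gradients flowing into `net` and make its BatchNorm layers
    use their stored running statistics instead of per-batch statistics."""
    for param in net.parameters():
        param.requires_grad = False
    net.eval()  # eval mode: BatchNorm uses running mean/var

# toy stand-in for the teacher network, with a BatchNorm layer
teacher = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))
freeze_network(teacher)
```

The same call would be applied to the pose network once the freeze epoch/step is reached.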
I hope that this will allow you to reach similar scores to the paper.
Thanks a lot,
Jamie
from manydepth.
By the way, the code trains the model at a resolution of 192 × 512, but the paper uses 128 × 416. Is this the same?
Hi, thanks for your interest in the project!
Okay, interesting - have you pulled the latest changes? There was a bug a few weeks ago which meant that the teacher and pose networks were not frozen, which would lead to a deterioration in accuracy, especially on the Cityscapes dataset.
If that is not the issue then please let me know and I will do some further investigation.
And about the resolution: we train at 192x512 as per the SfMLearner data preparation (i.e. just crop out the ego car), but at test time we crop our predictions to match the 128x416 region of other works (this crops out some of the top and sides of the image). If you take a look at line 340 of evaluate_depth.py, it should be clearer what is happening.
Thanks a lot
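To illustrate the train/test resolution handling above, here is a sketch of cropping a 192x512 prediction down to a 128x416 region. The fractions below are made-up values that happen to produce that shape; the repository's actual crop indices live in evaluate_depth.py.

```python
import numpy as np

def crop_prediction(pred, top_frac, side_frac):
    """Crop `top_frac` of the rows from the top and `side_frac` of the
    columns from each side. Fractions here are illustrative only."""
    h, w = pred.shape[:2]
    side = int(side_frac * w)
    return pred[int(top_frac * h):, side:w - side]

pred = np.zeros((192, 512))
cropped = crop_prediction(pred, top_frac=1 / 3, side_frac=3 / 32)
print(cropped.shape)  # (128, 416): some of the top and sides removed
```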
Thanks for replying! Could you share more details about the changes you mentioned? I'm not sure whether I pulled that change, but I only cloned the code two days ago, so I guess that's not the reason.
@JamieWatson683
Hi! I figured out that I have pulled the changes, so that's not the reason the network doesn't achieve SOTA.
By the way, should I add --freeze_teacher_epoch 15 when I train on KITTI?
Thanks for the awesome work!
Do you mean the changes in manydepth/trainer.py, lines 213 to 220 (commit 893b7a9)? Since we set train_teacher_and_pose = False after 15 epochs (trainer.py, line 210 in 893b7a9), the forward passes of the teacher and pose networks already run under torch.no_grad. Do we really need these changes to fix the bug?
Thanks!
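For reference, the behaviour being discussed can be sketched as follows (with a trivial stand-in module, not the real teacher network): a forward pass under torch.no_grad records no autograd graph, so later losses cannot push gradients back into the frozen networks.

```python
import torch
import torch.nn as nn

teacher = nn.Conv2d(3, 1, 3, padding=1)  # stand-in for the mono teacher
train_teacher_and_pose = False           # i.e. after the freeze epoch/step

imgs = torch.randn(2, 3, 32, 32)
if train_teacher_and_pose:
    mono_depth = teacher(imgs)
else:
    # frozen branch: no autograd graph is recorded, so no gradients
    # can flow back into the teacher from losses computed downstream
    with torch.no_grad():
        mono_depth = teacher(imgs)
```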
The code has been fixed, and your changes are correct.
Now I can see "freezing teacher and pose networks!" after 5 epochs.
By the way, can you reproduce the results for Cityscapes?
Sorry, I did not try it on Cityscapes; I got slightly worse results on KITTI (abs rel 0.101 vs 0.098 in the paper).
So I'm curious about the freezing mode, and I think it's not really a bug which needs to be fixed.
I wonder why we need these changes. Do they matter much in your experiments?
Actually I've done the same experiments, and if you train for 30 epochs you can get a better result which is nearly the same as the paper.
As for freezing, my experiments show that it has little impact on KITTI but is important for Cityscapes. If you train on KITTI, it doesn't matter.
Thanks for your reply! It makes sense; I'll try training for more epochs.
Hi!
I contacted Jamie and we figured out that the teacher isn't frozen. So the code has a bug, and he will fix it later.
@JamieWatson683 @agenthong Hello, may I ask about the setup on Cityscapes, especially the STEREO_SCALE_FACTOR on line 31 of evaluate_depth.py? I use (0.22 × 2262) / (pred_disp × 2048) to get pred_depth.
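For context, that formula is the usual pinhole stereo relation depth = f·B / d, with the focal length rescaled from the calibration width to the prediction width. A sketch, assuming the commonly quoted Cityscapes calibration values (baseline ≈ 0.22 m, focal ≈ 2262 px at 2048 px width); the exact numbers should be checked against the camera files of your split:

```python
def disp_to_depth_stereo(pred_disp, baseline_m=0.22, focal_px=2262.0,
                         calib_width=2048, pred_width=2048):
    """depth = f_x * B / d, rescaling the focal length from the
    calibration width to the width the disparity map was predicted at.
    The default constants are commonly quoted Cityscapes values -
    verify them against your own camera .txt files."""
    scaled_focal = focal_px * pred_width / calib_width
    return scaled_focal * baseline_m / pred_disp

print(disp_to_depth_stereo(10.0))  # 49.764 m for a 10 px disparity
```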
@JamieWatson683 @agenthong Could you share what exactly the bug you mentioned is? I tried to train on Cityscapes and only got AbsRel = 0.137.
Unfortunately, that is the best result I have got so far, so I can't reproduce the results on Cityscapes.
Hi - really sorry for the delay in getting back on this. I was looking at it, but then had a CVPR submission and some vacation to take. I will get back to this in early January and hopefully push a fix!
@JamieWatson683 Do you currently have any ideas about this issue? Any insights are greatly appreciated.
Thanks for helping us reproduce the results! I'll try this later.
BTW, do you think the teacher network is still necessary for indoor scenes? In my opinion, the mono network is mainly for moving objects, which disappear in indoor environments. As for untextured areas, I assume they could be handled better by other methods.
Thanks!
Hi - I have just pushed an update with the things I mentioned above. Annoyingly, setting the random seed doesn't remove all randomness, but it seems to give more stable results.
I have managed to obtain scores similar to those in the paper (albeit with some variation) using the flags below:
- --batch_size 8
- --pytorch_random_seed 1
- --freeze_teacher_step 14000
You can optionally add the --save_intermediate_results flag to save out the model at each logging step (every 2000 steps). This will allow you to do some digging into how the scores behave over time - it might be interesting to see how the teacher network is scoring, as its performance is what leads to good/bad ManyDepth scores on Cityscapes.
I'll update the readme with these tips if you can also reach reasonable scores.
@agenthong - for indoor scenes, yes, I'd imagine you won't need the teacher network, as it is mainly used for moving objects. It might help in untextured regions, but it's certainly worth experimenting with! My concerns with indoor data would be:
- the pose network may not work very well for arbitrary camera motion - do you have posed images? Or can you run a SLAM system to estimate poses?
- just using the previous frame for building the cost volume isn't likely to work all that well, you may want to implement a keyframe selection algorithm such as from DeepVideoMVS.
Either way, let me know how you get on!
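The seeding behaviour mentioned above can be sketched like this (my own minimal version; as noted, seeding alone does not remove all randomness):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int) -> None:
    """Seed the common RNG sources. This does not guarantee full
    determinism: some CUDA kernels and multi-worker data loading
    can still behave nondeterministically between runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

seed_everything(1)
a = torch.randn(3)
seed_everything(1)
b = torch.randn(3)  # identical draw after reseeding
```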
@agenthong @fengziyue @DemingWu Have you achieved the results described in the paper and figured out the reason for the poor results?
@agenthong Hi, how do you evaluate on Cityscapes? I didn't find the relevant evaluation code in this project. Looking forward to your reply. Thanks a lot!
Hi @ZhuYingJessica - details on how to evaluate on cityscapes are in the readme.
Let me know if you have any other questions!
Hi! Thanks for the reminder! I wonder why MIN_DEPTH and MAX_DEPTH are still 1e-3 and 80 when evaluating on the Cityscapes dataset. It seems the maximum depth is larger in Cityscapes. Looking forward to your reply.
Hello! I also have the same question about the MAX and MIN depth. Actually, I wonder:
1. What is the unit of the numbers in the ".npy" file? Are they in metres?
2. Why always use 80 m as the MAX depth? I checked, and the MAX value of your Cityscapes gt_depth is about 473.57, while the MAX value of the KITTI gt_depth is about 82. So I think 80 is fine for KITTI but not suited to Cityscapes.
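To make the role of those constants concrete, here is a sketch of the standard masked abs-rel computation used in KITTI-style evaluation (my own minimal version, not the repository's evaluation code):

```python
import numpy as np

MIN_DEPTH, MAX_DEPTH = 1e-3, 80.0

def masked_abs_rel(gt, pred):
    """Abs-rel over pixels whose ground truth lies in (MIN_DEPTH, MAX_DEPTH);
    predictions are clipped to the same range. Ground-truth pixels beyond
    80 m simply drop out of the metric rather than being clipped."""
    mask = (gt > MIN_DEPTH) & (gt < MAX_DEPTH)
    gt, pred = gt[mask], np.clip(pred[mask], MIN_DEPTH, MAX_DEPTH)
    return float(np.mean(np.abs(gt - pred) / gt))

gt = np.array([5.0, 100.0, 40.0])   # the 100 m pixel is masked out
pred = np.array([4.0, 90.0, 50.0])
print(masked_abs_rel(gt, pred))  # 0.225 = mean(0.2, 0.25)
```

So a larger Cityscapes maximum ground-truth depth does not break the metric; far pixels are just excluded from it.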
I guess this is because the depth predicted by the network is inaccurate beyond 80 m, so only depths within 80 m are evaluated.
Hi, it's great work!
I followed the instructions to train and evaluate on Cityscapes, but got the following result, which is slightly worse than the paper.
How can I reach the SOTA scores?
Hello, have you reproduced Cityscapes? How was its data processed during training? I followed the README.md to perform the Cityscapes preprocessing and obtained image .jpg and camera-parameter .txt files. However, during training the output was all black (the input .jpg was a 24-bit black image). Did my data not process correctly?
from manydepth.