- [Coming soon] Pretrained checkpoints for NYUv2 and KITTI datasets.
- [March 2024] Training and Inference code released!
- [Feb 2024] ECoDepth accepted in CVPR'2024.
```shell
git clone https://github.com/Aradhye2002/EcoDepth
cd EcoDepth
conda env create -f env.yml
conda activate ecodepth
```
See the dataset preparation guide for NYUv2 and KITTI here. Afterwards, update the paths in the relevant bash scripts for evaluation and training accordingly.
Please download the pretrained weights from this link and save the .ckpt files inside the `<repo root>/depth/checkpoints` directory.
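For example, from the repository root you can create the checkpoint directory before moving the downloaded weights into it (a minimal sketch; the checkpoint filename is a placeholder, use the name of the file you downloaded):

```shell
# Create the checkpoint directory under the repo root, as described above.
mkdir -p depth/checkpoints
# Then move the downloaded .ckpt file into it, e.g.:
# mv ~/Downloads/<model>.ckpt depth/checkpoints/
```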
To evaluate our performance on the NYUv2 and KITTI datasets, use the `test.py` file. The trained models are publicly available; download them using the links above.
- Evaluate on NYUv2 dataset: `bash test_nyu.sh <path_to_saved_model_of_NYU>`
- Evaluate on KITTI dataset: `bash test_kitti.sh <path_to_saved_model_of_KITTI>`
We trained our models with a batch size of 32 using 8x NVIDIA A100 GPUs. Set the `NPROC_PER_NODE` variable and the `--batch_size` argument to control the effective batch size: we use `NPROC_PER_NODE=8` and `--batch_size=4`, giving an effective batch size of 8 × 4 = 32.
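The effective batch size arithmetic can be checked directly in the shell (a sketch; the variable names mirror the settings above):

```shell
# Effective batch size = number of GPU processes per node x per-GPU batch size.
NPROC_PER_NODE=8
BATCH_SIZE=4
EFFECTIVE=$((NPROC_PER_NODE * BATCH_SIZE))
echo "$EFFECTIVE"  # prints 32
```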
- Train on NYUv2 dataset: `bash train_nyu.sh`
- Train on KITTI dataset: `bash train_kitti.sh`
If you have any questions about our code or paper, please raise an issue on GitHub.
We thank Kartik Anand for assistance with the experiments. Our source code is inspired by VPD and PixelFormer; we thank their authors for publicly releasing their code.
If you find our work useful in your research, please consider citing the following:
TODO