
md4all's Introduction

md4all - ICCV 2023

Source code for the paper: Robust Monocular Depth Estimation under Challenging Conditions

Authors: Stefano Gasperini*, Nils Morbitzer*, HyunJun Jung, Nassir Navab, and Federico Tombari

*equal contribution

[Project Page] [ICCV Paper (CVF)] [arXiv] [Translated Images (Google Form)]

md4all is a simple and effective solution that works reliably under both adverse and ideal conditions and for different types of learning supervision. We achieve this by exploiting the already proven efficacy of existing architectures and losses under perfect settings. Therefore, we strive to provide valid training signals independently of what is given as input.

Please get in touch with Stefano Gasperini ([email protected]) or Nils Morbitzer ([email protected]) if you have any questions!

This repository provides the PyTorch implementation for our self-supervised md4all model based on Monodepth2. Soon, we will add the code for the fully-supervised version based on AdaBins.



License

Soon we will be able to provide commercial licenses. Please reach out to us if you are interested.

In the meantime, this repository comes for non-commercial use with a CC BY-NC-SA 4.0 (Creative Commons) license.



Installation

This code was developed with Python 3 and CUDA 11.3. All models were trained on a single NVIDIA RTX 3090 (or RTX 4090) GPU with 24GB of memory.

Installation steps:

  • We recommend using Docker to set up your environment for better reproducibility.

    1. To make it as easy as possible, we provide a Makefile that needs to be changed at three locations:
      • l. 26: Change <USER_ID>:<GROUP_ID> to your user and group ID.
      • l. 33: Change <PATH_TO_DATAROOT> to your host path of the data folder.
      • l. 34: Change <PATH_TO_MD4ALL> to your host path of the md4all code folder.
      Then you can run the commands below.
    2. Change the directory:
      cd <PATH_TO_MD4ALL>
    3. Build the Docker image:
      make docker-build NAME=build
  • If you do not want to use Docker, here are the installation steps with Anaconda and pip:

    1. Create a conda environment:
      conda create -n md4all python=<PYTHON_VERSION>
    2. Activate the environment:
      conda activate md4all
    3. Change the directory:
      cd <PATH_TO_MD4ALL>
    4. Install the requirements:
      pip install -r requirements.txt
      Or with specific package versions:
      pip install -r requirements_w_version.txt
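
Regardless of which route you choose, a quick check that PyTorch sees the GPU can save time later. This is only an optional verification sketch, not part of the repository:

    # Optional sanity check: verify that PyTorch and CUDA are set up correctly.
    import torch

    print(torch.__version__)              # should match the version pinned in requirements_w_version.txt
    print(torch.cuda.is_available())      # expected: True on a correctly configured machine
    print(torch.cuda.get_device_name(0))  # e.g. an RTX 3090 or RTX 4090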


Datasets:

nuScenes:

  1. Download the nuScenes trainval dataset (v1.0), i.e. the 10 file blobs and the metadata, from here (nuScenes). Optionally, you can also download the nuScenes test set from the same location.

  2. Download the translated images and the 'train_samples_dynamic.json' file from here (our Google Form).

  3. Set everything up such that your file structure looks similar to:

[nuScenes file tree]
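
Before training, it can help to verify that the nuScenes metadata and blobs are readable. Below is a minimal sanity-check sketch using the official nuscenes-devkit (assumed to be installed, e.g. via pip install nuscenes-devkit) and assuming the dataset root matches the file tree above:

    # Optional sanity check with the nuscenes-devkit; <PATH_TO_DATAROOT>/nuscenes is an assumed location.
    from nuscenes.nuscenes import NuScenes

    nusc = NuScenes(version="v1.0-trainval", dataroot="<PATH_TO_DATAROOT>/nuscenes", verbose=True)
    print(len(nusc.sample))  # number of annotated samples in the trainval split
    sample = nusc.sample[0]
    # Path of one front-camera image, relative to the dataroot:
    print(nusc.get("sample_data", sample["data"]["CAM_FRONT"])["filename"])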

RobotCar:

  1. Download the recorded data of the left stereo camera and the front LMS laser sensor for the following scenes from here (RobotCar website):

    • 2014/12/09 for day
    • 2014/12/16 for night
  2. Download the translated images, the computed poses, and the split files from here (our Google Form). The link points to the same Google Form as for nuScenes, so if you have already filled it out for nuScenes, there is no need to fill it out again for RobotCar: the download link is the same.

  3. Download the RobotCar SDK from here (GitHub repo). The repository contains the extrinsics files.

  4. Set everything up such that your file structure looks similar to:

[RobotCar file tree]
  5. Undistort and demosaic the images from the left stereo camera (Attention: using these commands will replace the original distorted and mosaiced images of the left stereo camera; a sketch of the underlying SDK calls follows this list):

    • Docker:
      make docker-precompute-rgb-images-robotcar NAME=precompute-rgb-images-robotcar
    • Conda:
      python data/robotcar/precompute_rgb_images.py --dataroot <PATH_TO_DATAROOT> --scenes 2014-12-09-13-21-02 2014-12-16-18-44-24 --camera_sensor stereo/left --out_dir <PATH_TO_DATAROOT>
  6. Precompute the ground truth depth data by projecting the point cloud of the LMS front sensor onto the images:

    • Docker:
      make docker-precompute-pointcloud-robotcar NAME=precompute-pointcloud-robotcar
    • Conda:
      python data/robotcar/precompute_depth_gt.py --dataroot <PATH_TO_DATAROOT> --scenes 2014-12-09-13-21-02 2014-12-16-18-44-24 --mode val test
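
For reference, the undistortion and demosaicing in step 5 follow the standard RobotCar SDK image pipeline. The sketch below illustrates those SDK calls only; paths are placeholders, and the exact processing in precompute_rgb_images.py may differ:

    # Illustration of the RobotCar SDK image pipeline (demosaic + undistort); not the repository's script.
    import sys
    sys.path.append("<PATH_TO_ROBOTCAR_SDK>/python")  # make the SDK modules importable

    from camera_model import CameraModel  # camera intrinsics / distortion LUTs from the SDK's models folder
    from image import load_image          # demosaics the raw Bayer image and undistorts it if a model is given

    images_dir = "<PATH_TO_DATAROOT>/2014-12-09-13-21-02/stereo/left"
    model = CameraModel("<PATH_TO_ROBOTCAR_SDK>/models", images_dir)
    rgb = load_image(images_dir + "/<TIMESTAMP>.png", model)  # returns an undistorted RGB array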


Evaluation and Pre-Trained Models

We provide pre-trained models here (Google Drive), namely md4allDD, the baseline used for knowledge distillation (for nuScenes, this is not the same baseline as reported in Table 1, which originated from an older code base version), and md2, for both nuScenes and RobotCar. Download the files to the checkpoints folder. To evaluate the pre-trained models (associated with their respective .yaml config files), run the following commands:

  • nuScenes:

    • Docker:
      make docker-eval-md4allDDa-80m-nuscenes-val NAME=eval-md4allDDa-80m-nuscenes-val
    • Conda:
      python evaluation/evaluate_depth.py --config <PATH_TO_MD4ALL>/config/eval_md4allDDa_80m_nuscenes_val.yaml
  • RobotCar:

    • Docker:
      make docker-eval-md4allDDa-50m-robotcar-test NAME=eval-md4allDDa-50m-robotcar-test
    • Conda:
      python evaluation/evaluate_depth.py --config <PATH_TO_MD4ALL>/config/eval_md4allDDa_50m_robotcar_test.yaml

The provided models and configuration files lead to the results reported in the tables of our paper.
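
For reference, the error and accuracy numbers produced by the evaluator (abs_rel, sq_rel, rmse, rmse_log, a1/a2/a3) follow the standard definitions used across the monocular depth literature. The sketch below shows those definitions only; it is not the repository's evaluator, which additionally handles masking, depth caps, scaling, and the _pp/_gt variants:

    import numpy as np

    def depth_metrics(gt, pred):
        # gt, pred: 1D arrays of valid depth values in meters, same shape
        thresh = np.maximum(gt / pred, pred / gt)
        a1 = (thresh < 1.25).mean()
        a2 = (thresh < 1.25 ** 2).mean()
        a3 = (thresh < 1.25 ** 3).mean()
        abs_rel = np.mean(np.abs(gt - pred) / gt)
        sq_rel = np.mean(((gt - pred) ** 2) / gt)
        rmse = np.sqrt(np.mean((gt - pred) ** 2))
        rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
        return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3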



Training

To train a model, e.g. the baseline (associated with its .yaml config file), run one of the commands below (a short sketch for inspecting the config files follows the commands):

  • nuScenes:

    • Docker:
      make docker-train-baseline-nuscenes NAME=train-baseline-nuscenes
    • Conda:
      python train.py --config <PATH_TO_MD4ALL>/config/train_baseline_nuscenes.yaml
  • RobotCar:

    • Docker:
      make docker-train-baseline-robotcar NAME=train-baseline-robotcar
    • Conda:
      python train.py --config <PATH_TO_MD4ALL>/config/train_baseline_robotcar.yaml
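
Both entry points are driven entirely by the YAML files in the config folder, so the simplest way to define your own experiment is to copy one of the provided configs and edit it. If you want to inspect a config programmatically, here is a minimal sketch with PyYAML (the exact keys depend on the provided files, so none are assumed here):

    # Optional: inspect a training configuration before launching train.py.
    import yaml

    with open("config/train_baseline_nuscenes.yaml") as f:
        cfg = yaml.safe_load(f)

    print(list(cfg.keys()))  # top-level sections of the experiment configuration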


Prediction on custom images

To predict depth for custom images, you can use one of the commands below. Please remember that our models were trained on a single dataset, so we provide no performance guarantees for transfer to out-of-distribution data. This script is meant to simplify quick tests (a generic sketch of the inference flow follows the commands).

  • nuScenes (using model trained on nuScenes):

    • Docker (adapt the image path and output path written in the Makefile to customize the behavior of test_simple.py):
      make docker-test-simple-md4allDDa-nuscenes NAME=test-simple-md4allDDa-nuscenes
    • Conda:
      python test_simple.py --config <PATH_TO_MD4ALL>/config/test_simple_md4allDDa_nuscenes.yaml --image_path <PATH_TO_MD4ALL>/resources/n015-2018-11-21-19-21-35+0800__CAM_FRONT__1542799608112460.jpg --output_path <PATH_TO_MD4ALL>/output
  • RobotCar (using model trained on RobotCar):

    • Docker (adapt the image path and output path written in the Makefile to customize the behavior of test_simple.py):
      make docker-test-simple-md4allDDa-robotcar NAME=test-simple-md4allDDa-robotcar
    • Conda:
      python test_simple.py --config <PATH_TO_MD4ALL>/config/test_simple_md4allDDa_robotcar.yaml --image_path <PATH_TO_MD4ALL>/resources/1418756721422679.png --output_path <PATH_TO_MD4ALL>/output
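
If you would rather embed the prediction step in your own scripts than go through test_simple.py, the flow is the usual single-image depth inference loop. The sketch below is generic: the model object is whatever network you have loaded from the md4all checkpoints (the config machinery in this repository handles that), and the input resolution should match the chosen config:

    import torch
    from PIL import Image
    from torchvision import transforms

    def predict_depth(model: torch.nn.Module, image_path: str, size=(576, 320)) -> torch.Tensor:
        # size is (width, height) and should match the resolution the model was trained at.
        img = Image.open(image_path).convert("RGB").resize(size)
        x = transforms.ToTensor()(img).unsqueeze(0)  # [1, 3, H, W], values in [0, 1]
        with torch.no_grad():
            out = model(x)  # network-specific output, e.g. an inverse-depth (disparity) map
        return out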


Translation on custom images

We provide the pre-trained ForkGAN models for both nuScenes (day-to-night, day-to-rain) and RobotCar (day-to-night) here (Google Drive). Download the ForkGAN folders (e.g. forkgan_nuscenes_day_night) to the checkpoints folder. To translate custom images, you can use one of the commands below. Please remember that our models were trained on a single dataset, so we provide no performance guarantees for transfer to out-of-distribution data. This script is meant to simplify quick tests (a sketch of the crop-and-resize preprocessing implied by the flags follows the commands).

  • nuScenes day-to-night translations (using forkgan model trained on nuScenes day and night images):

    • Docker (adapt the image path, checkpoint directory, and output path written in the Makefile to customize the behavior of translate_simple.py):
      make docker-translate-simple-md4allDDa-nuscenes-day-night NAME=translate-simple-md4allDDa-nuscenes-day-night
    • Conda:
      python translate_simple.py --image_path <PATH_TO_MD4ALL>/resources/n008-2018-07-26-12-13-50-0400__CAM_FRONT__1532621809112404.jpg --checkpoint_dir <PATH_TO_MD4ALL>/checkpoints/forkgan_nuscenes_day_night --model_name forkgan_nuscenes_day_night --resize_height 320 --resize_width 576 --output_dir <PATH_TO_MD4ALL>/output
  • nuScenes day-to-rain translations (using forkgan model trained on nuScenes clear and rainy day images):

    • Docker (adapt the image path, checkpoint directory, and output path written in the Makefile to customize the behavior of translate_simple.py):
      make docker-translate-simple-md4allDDa-nuscenes-day-rain NAME=translate-simple-md4allDDa-nuscenes-day-rain
    • Conda:
      python translate_simple.py --image_path <PATH_TO_MD4ALL>/resources/n008-2018-07-26-12-13-50-0400__CAM_FRONT__1532621809112404.jpg --checkpoint_dir <PATH_TO_MD4ALL>/checkpoints/forkgan_nuscenes_day_rain --model_name forkgan_nuscenes_day_rain --resize_height 320 --resize_width 576 --output_dir <PATH_TO_MD4ALL>/output
  • RobotCar day-to-night translations (using forkgan model trained on RobotCar day and night images):

    • Docker (adapt the image path, checkpoint directory, and output path written in the Makefile to customize the behavior of translate_simple.py):
      make docker-translate-simple-md4allDDa-robotcar-day-night NAME=translate-simple-md4allDDa-robotcar-day-night
    • Conda:
      python translate_simple.py --image_path <PATH_TO_MD4ALL>/resources/1418132504537582.png --checkpoint_dir <PATH_TO_MD4ALL>/checkpoints/forkgan_robotcar_day_night --model_name forkgan_robotcar_day_night --crop_height 768 --crop_width 1280 --resize_height 320 --resize_width 544 --output_dir <PATH_TO_MD4ALL>/output
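
The --crop_* and --resize_* flags mirror the resolutions the ForkGAN checkpoints expect (a 320x576 resize for nuScenes, a 768x1280 crop followed by a 320x544 resize for RobotCar). If you preprocess images yourself, the same crop-then-resize order applies; the sketch below uses a center crop purely for illustration, and the exact crop window used by translate_simple.py should be checked in the script:

    from PIL import Image

    def crop_and_resize(img: Image.Image, crop_hw=(768, 1280), resize_hw=(320, 544)) -> Image.Image:
        # Center crop to crop_hw = (height, width), then resize to resize_hw = (height, width).
        w, h = img.size
        ch, cw = crop_hw
        left, top = (w - cw) // 2, (h - ch) // 2
        cropped = img.crop((left, top, left + cw, top + ch))
        return cropped.resize((resize_hw[1], resize_hw[0]))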


FAQ

  • Permission denied error when running Docker without sudo => To resolve the problem, follow the steps here (Docker docs).
  • ModuleNotFoundError: No module named ... => Make sure to update your PYTHONPATH accordingly:
    • Docker:
      export PYTHONPATH="${PYTHONPATH}:/mnt/code/md4all"
    • Conda:
      export PYTHONPATH="${PYTHONPATH}:/path/to/md4all"
  • FileNotFoundError: [Errno 2] No such file or directory: '<PATH_TO_RESOURCE>' => If you use Conda, you have to adapt the paths to the model checkpoint, dataset, etc. according to your file system (by default they are configured for Docker).


BibTeX

If you find our code useful for your research, please cite:

@inproceedings{gasperini_morbitzer2023md4all,
  title={Robust Monocular Depth Estimation under Challenging Conditions},
  author={Gasperini, Stefano and Morbitzer, Nils and Jung, HyunJun and Navab, Nassir and Tombari, Federico},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023},
  pages={8177-8186}
}


Acknowledgements

Our implementation is based on the PackNet-SfM repository (GitHub) and follows their code structure. It also incorporates parts of the Monodepth2 repository (GitHub).

To perform day-to-adverse image translations, we used a PyTorch implementation of ForkGAN (GitHub) (original implementation can be found here (GitHub)).

We want to thank the authors for their great contribution! :)

md4all's People

Contributors

morbi25, sgasperini


md4all's Issues

Question about generating the adverse images

Hello, thank you for your meaningful work. I encountered some problems when generating the adverse images. I only want to generate adverse images from the day-clear images, but in the nuScenes dataset there are some real night and rain images which are unexpected in the generating process. How can I exclude them from the generating process? I noticed that there are weather tokens for different scenes, but I failed to build the correlation between the weather condition and each input image.
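
For reference, the daytime/weather information in nuScenes is stored at the scene level, not per image: each entry in nusc.scene has a free-text description that mentions night or rain. Below is a hedged sketch of one way to keep only day-clear front-camera images with the nuscenes-devkit, using that description heuristic; md4all itself relies on its split files such as train_samples_dynamic.json rather than this heuristic:

    from nuscenes.nuscenes import NuScenes

    nusc = NuScenes(version="v1.0-trainval", dataroot="<PATH_TO_DATAROOT>/nuscenes")
    day_clear_images = []
    for scene in nusc.scene:
        desc = scene["description"].lower()
        if "night" in desc or "rain" in desc:
            continue  # skip scenes recorded at night or in the rain
        token = scene["first_sample_token"]
        while token:
            sample = nusc.get("sample", token)
            sd = nusc.get("sample_data", sample["data"]["CAM_FRONT"])
            day_clear_images.append(sd["filename"])
            token = sample["next"]  # empty string at the end of the scene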

Consulting for baseline_teacher_nuscenes.ckpt

Dear author,

When I want to retrain md4allDDa_nuscenes, I find that the configuration file requires baseline_teacher_nuscenes.ckpt. However, when going through the pre-trained models in the Google Drive folder, I cannot find it. Can you help me?

Thx!

I am so sorry to bother you again; when I use pip to install the packages, I solve the previous problems.

But when I run 'python data/robotcar/precompute_depth_gt.py --dataroot D:\md4all-main\robotcar --scenes 2014-12-16-18-44-24 --mode test' in my own environment, the error is
'OSError: Could not find scan files for given time range in directory D:\md4all-main\robotcar\2014-12-16-18-44-24\lms_front'.
I have downloaded 2014-12-16-18-44-24_lms_front 01 02 03 04 05 06 and put them in the file structure as you show on GitHub. Because I did not have enough memory to download all the stereo left images, I only downloaded a portion of the left images. Is the cause of the above error the missing precompute_pose_gt.py run? I really lack the knowledge to generate the GT from this dataset. Hope to get your reply to solve this problem.
This is my file structure:
[file structure screenshot]

Originally posted by @Huskie377 in #10 (comment)

Reproduce

Hello author, I want to ask about the use of the code; I have a few questions:

  1. Training process: Should I train the baseline first (loading train_baseline_nuscenes.yaml) and then train md4all-DD (loading train_md4allDDa_nuscenes.yaml)? I think the training is in two stages.
  2. We reproduced the code according to our understanding, but the result is not very good. I provide two files; one is obtained with the weights provided by the author, loading eval_md4allDDa_80m_nuscenes_val.yaml.
  3. Regarding the poor reproduction results, could it be a problem with our training set? The training process shows "Number of Samples in Train-Set: 30258". This is inconsistent with the 15129 in the paper.
  4. The validation process shows "{'train-day-clear': 491, 'train-day-rain': 0, 'train-night-clear': 0, 'train-night-rain': 0, 'val-day-clear': 111, 'val-day-rain': 24, 'val-night-clear': 12, 'val-night-rain': 3}"; I don't understand this output.

We look forward to your reply, thank you very much.
eval_use_autor_chechpoint.csv
eval-md4allDDa-80m-nuscenes-val_result_metrics_12October2023at05_39_59CST.csv

for evaluation

I have a problem with the command below:
python evaluation/evaluate_depth.py --config <PATH_TO_MD4ALL>/config/eval_md4allDDa_50m_robotcar_test.yaml
How can I get the GT for evaluation? Step 5 in Datasets -> RobotCar has shown the way, but I failed to generate the GT; it was complex.
I want the GT to run evaluate_depth.py. Could you share a link where I can download the GT? Thank you!

Some questions about training

  1. In the evaluation, what is the difference between "eval-md4allDDa-wo-daytime-norm-80m-nuscenes-val", "eval-md4allDDa-wo-daytime-norm-80m-nuscenes-test", "eval_baseline_80m_nuscenes_val", and "eval_md4allDDa_80m_nuscenes_val"?
  Is the difference between "eval-md4allDDa-wo-daytime-norm-80m-nuscenes-val" and "eval-md4allDDa-wo-daytime-norm-80m-nuscenes-test" that they evaluate on the val set and the test set, respectively? Which is the more authoritative or universal evaluation method?
  What is the difference between "eval_md4allDDa_80m_nuscenes_val" and "eval-md4allDDa-wo-daytime-norm-80m-nuscenes-val"? I don't quite understand the meaning of "eval-md4allDDa-wo-daytime-norm-80m-nuscenes-val".
  Does "eval_baseline_80m_nuscenes_val" mean only evaluating the baseline model separately (Monodepth2 + weak velocity supervision)?
  2. I used the pre-trained model you provided but could not obtain the results in Table 1 of the paper. May I ask if the pre-trained model you provided is not the one in the paper?
  [result screenshots]
  What do a1, a1_gt, a1_pp, a1_pp_gt, a2, a2_gt, a2_pp, a2_pp_gt, a3, a3_gt, a3_pp, a3_pp_gt, abs_rel, abs_rel_gt, abs_rel_pp, abs_rel_pp_gt, count, rmse, rmse_gt, rmse_log, rmse_log_gt, rmse_log_pp, rmse_log_pp_gt, rmse_pp, rmse_pp_gt, sq_rel, sq_rel_gt, sq_rel_pp, sq_rel_pp_gt, and mean refer to in the evaluation indicators, and why are there suffixes like "_gt", "_pp", and "_pp_gt"?
  3. Are my commands below correct?
  Train the sup=Mv, md4all-DD command: python train.py --config <PATH_TO_MD4ALL>/config/train_md4allDDa_nuscenes.yaml
  Evaluate the sup=Mv, md4all-DD command:
  "python evaluation/evaluate_depth.py --config <PATH_TO_MD4ALL>/config/eval_md4allDDa_80m_nuscenes_val.yaml"
  or "python evaluation/evaluate_depth.py --config <PATH_TO_MD4ALL>/config/eval-md4allDDa-wo-daytime-norm-80m-nuscenes-test"
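
Regarding the "_pp" suffix mentioned above: in the monocular depth literature this usually denotes flip-based post-processing in the style of Monodepth2, where the disparity of an image and the re-flipped disparity of its horizontally flipped copy are blended. Whether the evaluator here uses exactly this variant should be checked in the code; the standard Monodepth2 formulation looks like this:

    import numpy as np

    def batch_post_process_disparity(l_disp, r_disp):
        # l_disp: disparity of the original images; r_disp: re-flipped disparity of the flipped images.
        # Shapes: [B, H, W]. Blends both with position-dependent weights, as in Monodepth2.
        _, h, w = l_disp.shape
        m_disp = 0.5 * (l_disp + r_disp)
        l, _ = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
        l_mask = (1.0 - np.clip(20 * (l - 0.05), 0, 1))[None, ...]
        r_mask = l_mask[:, :, ::-1]
        return r_mask * l_disp + l_mask * r_disp + (1.0 - l_mask - r_mask) * m_disp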

Code for DENSE dataset

Thanks for your amazing work on MDE in adverse conditions!
I found that you used the DENSE dataset for fog and snow but do not provide any code for it. Can you please share it with us to evaluate the models' robustness on real snow scenes?

Download problem

The registration on the RobotCar website does not work. Would you mind sharing your RobotCar data?

question

Hello, can this model be used for other pictures? Why can't I get good results when using my own pictures, even though the pictures are similar to the test pictures? In addition, we found that the final result image appears in the top left of the whole picture; may I ask why this is, is it a configuration problem?
md4all_question.docx

md4all-AD model

Sorry to bother you again! I want to implement and test your md4all-AD model on RobotCar, but the config folder only contains train_baseline_robotcar.yaml, and the paper says the baseline corresponds to the setting where the translation ratio x = 0. So what changes should be applied to the code if I want to implement the md4all-AD model? Thanks a lot for your reply!

code and paper

Recently, we took a hard look at the md4all paper and code. For the training of md4all-DD (the distillation step), the paper describes a mix of easy and translated inputs.
However, the code prints that the training set includes scenes with visibility/weather conditions: ['day-clear'], with the weather distribution in train and val scenes: {'train-day-clear': 491, 'train-day-rain': 0, 'train-night-clear': 0, 'train-night-rain': 0, 'val-day-clear': 111, 'val-day-rain': 24, 'val-night-clear': 12, 'val-night-rain': 3}. Why this print?
We looked at the code carefully, and the key code should be the "DaytimeTranslation" class in data.transforms.
We think that training md4all-DD needs to use "day-rain", "night-clear", and "night-rain" as well. At the same time, we have reproduced md4all-DD based on both the baseline provided by the authors and our own trained baseline.
We want to understand where the mix of easy and translated inputs is in the code, and why the above print is generated.
Looking forward to your reply, thank you very much!
[log screenshot]

@sgasperini @morbi25

Where to get the txts in extrinsics?

How can I get the files in the folder robotcar/extrinsics? I thought that I could get them from the RobotCar dataset, but I could not find the location of these files. Can you point out where I can find or download these files so that I can do step 5 in Datasets -> RobotCar? Thanks a lot!

Questions about the Oxford RobotCar Dataset

Hi, thanks for your awesome work.

About the Oxford RobotCar Dataset, I have some questions:
(1) For the point cloud, you use "lms_front" as the sensor, which provides 2D lidar data. Why not choose the "ldmrs" sensor, which provides 3D lidar data?
(2) For the poses, you use "interpolate_vo_poses" to interpolate the VO poses, which gives relative poses. Why don't you use "interpolate_ins_poses" to get the absolute poses?

I appreciate any help you can provide!

Question about the results

Dear authors:
Thanks for your work! These days, I tried to run the code you provided in this repository. I used baseline_nuscenes.ckpt and md2_nuscenes.ckpt as the teacher net to train the model on the nuScenes v1.0 dataset, respectively. However, the results are different from the results you report in Table 1. The results with baseline_nuscenes.ckpt deviate somewhat from Table 1, while the results with md2_nuscenes.ckpt are very poor if I do not align them with the ground truth. When aligning the results of md2_nuscenes.ckpt, they are still worse than the results in Table 1.
I also used evaluate_depth.py to evaluate the trained model you offered, called md4allDDa_nuscenes.ckpt. The results of this model are close to the data in Table 1, but still slightly worse.
My environment is as follows: Python 3.8.18, PyTorch 2.1.0, torchvision 0.16.0, numpy 1.24.2, Pillow 10.1, and all experiments were carried out on a single RTX 4090.
I wonder if I chose the wrong teacher net? Or maybe the environment leads to worse results?
I have put the results into the tables below, please check! Thank you very much!
baseline_nuscenes: [results table screenshot]
md2_nuscenes: [results table screenshot]
md4allDDa_nuscenes: [results table screenshot]

test

In robotcar_dataset.py:

    pointcloud_dir = os.path.join(dataroot, scene, "lms_front_synchronized/vo/time_margin=+-4.0e+06", timestamp)
    pointcloud_path = f"{pointcloud_dir}.pcd.bin"

Hello, I can't find a way to obtain this file when doing the test. Can you help? Thank you!

Inference code for arbitrary images

Hi, and thank you for your excellent work. Currently, there is no script to apply this model to arbitrary images (frames). Do you have any plans to provide one? Thanks, and congrats on the acceptance!

Questions w.r.t the code and method

Hi, thank you for sharing your code. Your work is very interesting and impressive. I have three questions about the code and the method that I would like to ask you:

  1. In your paper, you proposed using a percentage x to blend input images under different weather conditions. Could you please point me to the code that implements this part? I did not find it in the repository.
  2. You use NormalizeDynamic's daytime mode in the DD stage to normalize the input. This normalization requires the corresponding weather condition (e.g. day or night) as input. Does this mean that the correct weather is also required during evaluation?
  3. You used four modes ('', '_pp', '_gt', '_pp_gt') in the evaluator to evaluate depth estimation. What is the post-processing represented by pp used for? Which mode did you use for the results reported in your paper?

I would appreciate it if you could answer my questions or provide some guidance. Thank you for your time and attention. I look forward to hearing from you soon.

Nuscenes trainset problem

Hello authors, we are very interested in your work and have some questions to ask for your help.

  1. When building the training set, we found that the work samples 27482 images through train_samples_dynamic.json and then selects a subset of 15129 images. What is the purpose of this training-set construction process?
  2. In addition, why not select all images of CAM_FRONT to train the md4all model?
  3. Because we want to convert your training set (nuScenes) into VOC format, we noticed that you did not select all images of CAM_FRONT, but only a subset.
    Looking forward to the authors' reply, thank you very much! @sgasperini @morbi25

About training time

May I ask, under what configuration, how long does training take for your code? How long would I need to train on a single RTX 3090?

md4all-main/models/md2/depth_decoder.py

In depth_decoder.py, I found that during initialization you set scales = range(4) and implemented:

    for s in self.scales:
        self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)

But in forward(), i loops 5 times. Why is this?
scales contains four scales, namely 0, 1, 2, and 3. So in forward(), when i = 4, does this mean that the decoder is restoring the original image size?
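
For context, this mirrors the Monodepth2 decoder structure: the forward pass iterates over five decoder stages i = 4 ... 0, upsampling at each stage, but a disparity output is only produced for the stages listed in self.scales (0-3), so i = 4 is the coarsest stage with no output and the full-resolution disparity comes from i = 0. A condensed sketch of that forward pass as published in Monodepth2 (the md4all decoder may differ in details):

    # Condensed Monodepth2-style DepthDecoder.forward (method of the decoder class, shown for illustration).
    import torch
    from torch.nn import functional as F

    def forward(self, input_features):
        self.outputs = {}
        x = input_features[-1]
        for i in range(4, -1, -1):  # five decoder stages, coarsest to finest
            x = self.convs[("upconv", i, 0)](x)
            x = [F.interpolate(x, scale_factor=2, mode="nearest")]
            if self.use_skips and i > 0:
                x += [input_features[i - 1]]  # skip connection from the encoder
            x = torch.cat(x, 1)
            x = self.convs[("upconv", i, 1)](x)
            if i in self.scales:  # scales = range(4): disparity outputs only for i = 0..3
                self.outputs[("disp", i)] = self.sigmoid(self.convs[("dispconv", i)](x))
        return self.outputs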

Questions about the precompute_pose_gt.py

Sorry to bother you again, and again I want to thank you for helping me with the previous problems about the velocity loss. Now I am trying to figure out how you pre-compute the poses for the RobotCar dataset, and some of the code confuses me. In precompute_pose_gt.py, line 48 reads: timestamps.append({-1: int(ts_split[0]), 0: int(ts_split[1]), 1: int(ts_split[2])}), and line 38 returns: {"timestamp": timestamp[0], "prev": timestamp[-1], "next": timestamp[0], "pose_to_prev": pose0m1.tolist(), "pose_to_next": pose0p1.tolist()}. I want to know why the key "next" in line 38 is timestamp[0] and not timestamp[1].

Questions about the experiments

Dear authors:
Thank you for your work again! I want to ask some questions about the experiments.
First, when your team was training ForkGAN to generate images, did you encounter the problem that the quality of the generated images was low and their distribution differed greatly from the real nuScenes night images, which in turn led to poor md4all performance?
Second, I am wondering whether your team's night metrics oscillated badly while training md4all. When I was training md4all using my own GAN-generated images, I found that the night metrics oscillated badly and did not converge.
I would be very grateful if you could reply to me. Thank you for reading my question in your busy schedule!
