
realbasicvsr's People

Contributors

ak391, ckkelvinchan, ha0tang


realbasicvsr's Issues

about REDS dataset

Hi, I want to train an x2 model, but I couldn't find a way to download the REDS dataset. Do you have a download link for it? Thanks.

loss for training

Hi, thanks for your wonderful work. The paper uses the Charbonnier (cb) loss, but the config file in the code uses L1 loss. Which one is correct? Also, have you ever tried to modify the model for x1? Looking forward to your reply. Thank you.

load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth Killed

python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demp2.mp4 results/demo_001.mp4 --fps=59
2022-06-02 13:27:00,288 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Killed

Why does this happen?

It doesn't even use my memory, CPU, or GPU.

I am running it in WSL2, which supports CUDA.

Fine details of images got destroyed!

Hi Kelvin,

Overall, most of the processed images look great, except that all fine details get completely destroyed (see attached images). Is there any way to combat this issue using the current RealBasicVSR_x4.pth pretrained model, or could you perhaps help by updating the pretrained model? Thanks!

JapGarden140-Compared

London70-Compared

Getting some errors with the inference

Hi there, thanks for the work, but I'm getting some errors...

packages in environment at conda\envs\realbasicvsr:

Name Version Build Channel

absl-py 1.0.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
blas 1.0 mkl
ca-certificates 2021.10.26 haa95532_2
cachetools 4.2.4 pypi_0 pypi
certifi 2021.10.8 py37haa95532_0
charset-normalizer 2.0.10 pypi_0 pypi
click 7.1.2 pypi_0 pypi
colorama 0.4.4 pypi_0 pypi
cudatoolkit 10.1.243 h74a9793_0
freetype 2.10.4 hd328e21_0
google-auth 2.3.3 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
grpcio 1.43.0 pypi_0 pypi
idna 3.3 pypi_0 pypi
imageio 2.13.5 pypi_0 pypi
importlib-metadata 4.10.0 pypi_0 pypi
intel-openmp 2021.4.0 haa95532_3556
jpeg 9b hb83a4c4_2
libpng 1.6.37 h2a8f88b_0
libtiff 4.2.0 hd0e1b90_0
libuv 1.40.0 he774522_0
libwebp 1.2.0 h2bbff1b_0
lmdb 1.3.0 pypi_0 pypi
lz4-c 1.9.3 h2bbff1b_1
markdown 3.3.6 pypi_0 pypi
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py37h2bbff1b_0
mkl_fft 1.3.1 py37h277e83a_0
mkl_random 1.2.2 py37hf11a4ad_0
mmcv-full 1.4.2 pypi_0 pypi
mmedit 0.12.0 pypi_0 pypi
model-index 0.1.11 pypi_0 pypi
networkx 2.6.3 pypi_0 pypi
ninja 1.10.2 py37h559b2a2_3
numpy 1.21.2 py37hfca59bb_0
numpy-base 1.21.2 py37h0829f74_0
oauthlib 3.1.1 pypi_0 pypi
olefile 0.46 py37_0
opencv-python-headless 4.5.4.60 pypi_0 pypi
openmim 0.1.5 pypi_0 pypi
openssl 1.1.1l h2bbff1b_0
ordered-set 4.0.2 pypi_0 pypi
packaging 21.3 pypi_0 pypi
pandas 1.3.5 pypi_0 pypi
pillow 8.4.0 py37hd45dc43_0
pip 21.2.4 py37haa95532_0
protobuf 3.19.1 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 3.0.6 pypi_0 pypi
python 3.7.11 h6244533_0
python-dateutil 2.8.2 pypi_0 pypi
pytorch 1.7.1 py3.7_cuda101_cudnn7_0 pytorch
pytz 2021.3 pypi_0 pypi
pywavelets 1.2.0 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
regex 2021.11.10 pypi_0 pypi
requests 2.27.1 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.8 pypi_0 pypi
scikit-image 0.19.1 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
setuptools 58.0.4 py37haa95532_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.37.0 h2bbff1b_0
tabulate 0.8.9 pypi_0 pypi
tensorboard 2.7.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.11 h2bbff1b_0
torchaudio 0.7.2 py37 pytorch
torchvision 0.8.2 py37_cu101 pytorch
typing_extensions 3.10.0.2 pyh06a4308_0
urllib3 1.26.7 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 2.0.2 pypi_0 pypi
wheel 0.37.0 pyhd3eb1b0_1
wincertstore 0.2 py37haa95532_2
xz 5.2.5 h62dcd97_0
yapf 0.32.0 pypi_0 pypi
zipp 3.7.0 pypi_0 pypi
zlib 1.2.11 h8cc25b3_4
zstd 1.4.9 h19a0ad4_0


For pictures, I ran the test command:

(realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_000 results/demo_000
2022-01-07 06:36:34,070 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth

It did nothing.

For video, I ran the test command with --max_seq_len=2:

(realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --max_seq_len=2 --fps=12.5
2022-01-07 06:38:02,236 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 144, in
main()
File "inference_realbasicvsr.py", line 130, in main
cv2.destroyAllWindows()
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1268: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'

It gave this error.


For video, I ran the test command with the defaults:

(realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --fps=12.5
2022-01-07 06:40:14,850 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 144, in
main()
File "inference_realbasicvsr.py", line 117, in main
outputs = model(inputs, test_mode=True)['output'].cpu()
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmcv\runner\fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\restorers\srgan.py", line 95, in forward
return self.forward_test(lq, gt, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\restorers\real_esrgan.py", line 211, in forward_test
output = _model(lq)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\real_basicvsr_net.py", line 87, in forward
outputs = self.basicvsr(lqs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 126, in forward
flows_forward, flows_backward = self.compute_flow(lrs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 98, in compute_flow
flows_backward = self.spynet(lrs_1, lrs_2).view(n, t - 1, 2, h, w)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 346, in forward
input=self.compute_flow(ref, supp),
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 315, in compute_flow
], 1))
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 420, in forward
return self.basic_module(tensor_input)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
input = module(input)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmcv\cnn\bricks\conv_module.py", line 201, in forward
x = self.conv(x)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\conv.py", line 420, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 4.35 GiB (GPU 0; 11.00 GiB total capacity; 4.37 GiB already allocated; 2.46 GiB free; 7.07 GiB reserved in total by PyTorch)

It gave an OOM error.


System: Windows 10 64-bit, GTX 1080 Ti (11 GB).
The model is in the right folder, and the environment was created with conda using the given commands, in order.
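CUDA OOM errors like the one above usually shrink with a smaller --max_seq_len, since memory grows with the number of frames processed in one forward pass. The chunking idea can be sketched as follows; `run_model` is a hypothetical stand-in for the model call, and only the index splitting mirrors what --max_seq_len does:

```python
# Sketch: split a frame sequence into fixed-size chunks so each forward
# pass fits in GPU memory. `run_model` is a hypothetical stand-in for the
# RealBasicVSR forward call; only the chunking logic is shown.

def chunk_indices(num_frames, max_seq_len):
    """Yield (start, end) index pairs covering all frames in order."""
    for start in range(0, num_frames, max_seq_len):
        yield start, min(start + max_seq_len, num_frames)

def infer_in_chunks(frames, max_seq_len, run_model):
    outputs = []
    for start, end in chunk_indices(len(frames), max_seq_len):
        # one small sub-sequence at a time instead of the whole clip
        outputs.extend(run_model(frames[start:end]))
    return outputs
```

The trade-off is that chunks are processed independently, so temporal propagation does not cross chunk boundaries.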

Install dependencies?

During startup, I encountered many errors, from mmcv import failures (.../site-packages/mmcv/_ext.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN2at6detail10noopDeleteEPv) to #27.
It may be worth adding alternative dependency-installation instructions for a non-conda environment with CUDA 11.1, which would allow running the program in a clean environment:

  1. Install torch from https://pytorch.org/get-started/locally/.
    For me: pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
  2. Install mmcv from https://mmcv.readthedocs.io/en/latest/get_started/installation.html
    For me, with torch 1.11.0 (built for CUDA 11.3) and CUDA 11.1:
    pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html
  3. And at the end: pip install mmedit

That works for me, and it's up to you whether to add it, but this issue may help someone. Thanks for the great repository; it helped me a lot!
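The mmcv find-links URL in step 2 follows a fixed pattern, so it can be derived from the torch and CUDA versions. A small helper, with the caveat that the URL layout is inferred from the command above and is an assumption about mmcv's hosting scheme:

```python
# Sketch: build the mmcv-full find-links URL from torch/CUDA versions.
# The pattern ".../dist/cu<ver>/torch<ver>/index.html" is taken from the
# install command above; treat it as an assumption, not a guarantee.

def mmcv_find_links(torch_version, cuda_version):
    cu = "cu" + cuda_version.replace(".", "")        # "11.3" -> "cu113"
    return ("https://download.openmmlab.com/mmcv/dist/"
            f"{cu}/torch{torch_version}/index.html")
```

For example, torch 1.11.0 with CUDA 11.3 yields the exact URL used in step 2.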

About the cleaning module

Hi, thanks for sharing this great work. I am interested in the cleaning module. May I ask whether it is a pre-processing step? Could you share the code or details? Thanks a lot.

model size depends on image size?

I have been trying to upscale a 1920x1080 video.
It seems that my GPUs can't run the model due to lack of memory.
I am using 32 GB Tesla V100 GPUs.
When I tried a smaller video, it worked.

Is it right that more GPU memory is required for larger videos?

And would it be possible to upscale a 1920x1080 video with my GPUs?
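Yes, memory grows with resolution: the input tensor alone scales with height x width x frames, and intermediate feature maps multiply that by a large constant. A back-of-envelope helper (pure arithmetic, not a measurement of RealBasicVSR itself):

```python
# Rough estimate of raw float32 tensor size at a given resolution.
# Actual GPU usage is many times larger because of intermediate feature
# maps and optical-flow tensors; these numbers are arithmetic only.

def frame_tensor_bytes(height, width, channels=3, dtype_bytes=4):
    return height * width * channels * dtype_bytes

def sequence_gib(num_frames, height, width):
    return num_frames * frame_tensor_bytes(height, width) / 1024**3
```

A 30-frame 1920x1080 clip is already about 0.7 GiB of raw input at float32, and the x4 output is 16x that, before counting any activations.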

Running the model on mobile

Hey there!

Is the pre-trained model suitable for mobile from a memory/CPU perspective, or is it meant to be run on heavier machines? Thanks!

memory exhaust

My PC has 32 GB of memory. When I run inference, the memory is exhausted and the process is killed.
How much memory is needed?

Process after saving the checkpoint

Hi. First of all, thank you very much for your project. The quality is impressive. I'm trying to train a neural network, but after every save checkpoint it starts some long process for 300 iterations. It looks like evaluating, but I couldn't find a value of 300 in the config file. I train on the REDS dataset (24k images) and that process takes longer than the training itself for 10k iterations. What is it? Is there any way to reduce this value (300)? Is it possible to disable it and what is the risk?

Example:
(screenshot attached)

loss is nan

"loss_pix" and "loss_clean" become NaN every time after I train for a while.
I followed the instructions below:
Put the original REDS dataset in ./data
Run the following command:
python crop_sub_images.py --data-root ./data/REDS --scales 4

and trained the model following the instructions:
mim train mmedit configs/realbasicvsr_wogan_c64b20_2x30x8_lr1e-4_300k_reds.py --gpus 2 --launcher pytorch
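A guard like the following can at least catch the exact iteration where the losses turn non-finite, so the step can be skipped or the run stopped for a learning-rate change. This is a sketch: `losses` stands in for the dict of scalar losses logged during training, and the skip-on-NaN policy is my assumption, not the repo's behavior:

```python
# Sketch: detect the iteration where any logged loss becomes non-finite
# (NaN or inf), so the optimizer step can be skipped or training halted.

import math

def has_nonfinite_loss(losses):
    """losses: dict of loss name -> float, e.g. {'loss_pix': 0.02, ...}."""
    return any(not math.isfinite(v) for v in losses.values())
```

Called once per iteration before `optimizer.step()`, this makes it easy to log the offending batch for inspection.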

GPU memory issue

Hi,

Thanks for sharing this code.

I tried to use my own sample video (mp4), but I got a GPU memory issue. Is there any restriction on the input file format or length?

This is the code I used to test

python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/test.mp4 results/demo_001.mp4 --fps=30 --max_seq_len=20

Can't make inferences with my own image. (not for video)

I am a Colab user.
I uploaded my own images to the folder "demo_000" and tried to run inference, but it didn't work.
Inference worked for the pre-prepared images.

Here is the traceback.
"""

2022-01-13 13:16:36,348 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 144, in
main()
File "inference_realbasicvsr.py", line 97, in main
inputs = torch.stack(inputs, dim=1)
RuntimeError: stack expects each tensor to be equal size, but got [1, 3, 352, 352] at entry 0 and [1, 3, 604, 900] at entry 2
The next notebook cell is:

# show the first image as an example
import mmcv
import matplotlib.pyplot as plt

img_input = mmcv.imread('data/demo_000/00000000.png', channel_order='rgb')
img_output = mmcv.imread('results/demo_000/00000000.png', channel_order='rgb')

fig = plt.figure(figsize=(25, 20))
ax1 = fig.add_subplot(2, 1, 1)
plt.title('Input image', fontsize=16)
ax1.axis('off')
ax2 = fig.add_subplot(2, 1, 2)
plt.title('RealBasicVSR output', fontsize=16)
ax2.axis('off')
ax1.imshow(img_input)
ax2.imshow(img_output)

"""
Note that the current ipynb file has no step that creates "results/demo_000".
Please add mkdir code if you can.
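The stack error above occurs because the uploaded frames have different shapes ([1, 3, 352, 352] vs [1, 3, 604, 900]), and `torch.stack` requires equal sizes. A minimal workaround is to bring every frame to one common size before stacking. The sketch below uses a crude nearest-neighbor resize as a stand-in for `cv2.resize` (which the inference script already depends on), targeting the first frame's size:

```python
# Sketch: make all frames the same (H, W) before stacking, using the first
# frame's size as the target. resize_nearest is a crude stand-in for
# cv2.resize; in practice cv2.resize(f, (w, h)) would be used.

import numpy as np

def resize_nearest(img, h, w):
    ys = np.arange(h) * img.shape[0] // h   # nearest source row per output row
    xs = np.arange(w) * img.shape[1] // w   # nearest source col per output col
    return img[ys][:, xs]

def equalize_frames(frames):
    h, w = frames[0].shape[:2]
    return [f if f.shape[:2] == (h, w) else resize_nearest(f, h, w)
            for f in frames]
```

Resizing distorts aspect ratio when shapes differ this much, so cropping to a common size may be preferable for real use.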

Fail to download dataset in Dropbox

Hi, thanks for the excellent work!
But could you please also release the dataset on Google Drive? I can't access the Dropbox link from China...
Thanks very much!

resources are always insufficient

When processing video material, RAM and video memory are always insufficient. Is there any parameter to solve this? Even a roughly 1-minute, 25 fps, 3 MB .mp4, running with 12 GB RAM and 12 GB VRAM, runs out of resources.
Or how should I preprocess the input video to run the code more easily?
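One common mitigation, sketched below under the assumption that the clip has already been decoded into frames, is to split the minute-long video into short fixed-length segments, process each segment separately, and concatenate the results:

```python
# Sketch: plan short fixed-length segments for a long clip so each segment
# is processed on its own. The 5-second default is an assumption chosen to
# bound memory, not a value from the repo.

def plan_segments(num_frames, fps, seconds_per_segment=5):
    """Return (start, end) frame-index pairs covering the whole clip."""
    frames_per_seg = int(fps * seconds_per_segment)
    return [(s, min(s + frames_per_seg, num_frames))
            for s in range(0, num_frames, frames_per_seg)]
```

For a 1-minute, 25 fps clip (1500 frames), this yields twelve 125-frame segments; each is far cheaper to hold in memory than the full clip.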

train code

Hello, when will the training code be released? I hope to obtain an x2 pre-trained model through training. Thanks.

About the evaluation

I used the official NIQE code to evaluate demo_000 and its result, and got an unexpected outcome: the NIQE value of the raw video is 3.9829 while that of the SR video is 4.3407. I simply fed in every frame and computed the average.
I don't know where I went wrong, as this result is the opposite of the one in the paper.

Training Code and x2 model

Hi. When would you release the training code? Could you provide a tentative date for releasing the training code? Also, do you have pretrained_weights for the x2 model?

can x2 model run faster

I found that processing a video with the x4 model is very slow: nearly 20 minutes for a 10-second video.
If I train an x2 model, will it process faster?
And how do I train an x2 model?

RuntimeError: storage has wrong size: expected 0 got 1728

Exception has occurred: RuntimeError
RealBasicVSR: PerceptualLoss: storage has wrong size: expected 0 got 1728

During handling of the above exception, another exception occurred:

File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\restorers\real_basicvsr.py", line 65, in __init__
super().__init__(generator, discriminator, gan_loss, pixel_loss,

During handling of the above exception, another exception occurred:

File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\builder.py", line 20, in build
return build_from_cfg(cfg, registry, default_args)
File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\builder.py", line 58, in build_model
return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 67, in init_model
model = build_model(config.model, test_cfg=config.test_cfg)
File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 81, in main
model = init_model(args.config, args.checkpoint)
File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 149, in
main()

'ConfigDict' object has no attribute 'model'

Hi. I installed with:

conda create -n vsr3 python=3.7 -y
conda activate vsr3
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch -y
conda install -c omnia openmm -y
#conda install -c esri mmcv-full -y
pip install mmcv-full==1.3.17 -f https://download.openmmlab.com/mmcv/dist/11.1/torch1.10.0/index.html
python3 -m pip install mmedit

Then i run:

python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --fps=12.5

Then get an error:

Traceback (most recent call last):
  File "inference_realbasicvsr.py", line 148, in <module>
    main()
  File "inference_realbasicvsr.py", line 80, in main
    model = init_model(args.config, args.checkpoint)
  File "inference_realbasicvsr.py", line 64, in init_model
    config.model.pretrained = None
  File "/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/utils/config.py", line 507, in __getattr__
    return getattr(self._cfg_dict, name)
  File "/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/utils/config.py", line 48, in __getattr__
    raise ex
AttributeError: 'ConfigDict' object has no attribute 'model'

Also i check in jupyter notebook the object:

config = mmcv.Config.fromfile(config)

And config contains:

Config (path: configs/realbasicvsr_x4.py): {'argparse': <module 'argparse' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/argparse.py'>, 'os': <module 'os' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/os.py'>, 'osp': <module 'posixpath' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/posixpath.py'>, 'sys': <module 'sys' (built-in)>, 'Pool': <bound method BaseContext.Pool of <multiprocessing.context.DefaultContext object at 0x7f8f1c640d10>>, 'cv2': <module 'cv2' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/cv2/__init__.py'>, 'mmcv': <module 'mmcv' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/__init__.py'>, 'np': <module 'numpy' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/numpy/__init__.py'>, 'worker': <function worker at 0x7f8e83b0a170>, 'extract_subimages': <function extract_subimages at 0x7f8e83b0a200>, 'main_extract_subimages': <function main_extract_subimages at 0x7f8e83b0a440>, 'parse_args': <function parse_args at 0x7f8e83b0a050>}
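The dump above contains script-level functions (worker, extract_subimages, parse_args) but no model definition, which suggests the file loaded as configs/realbasicvsr_x4.py is not actually a model config. A quick guard before building the model can make this failure mode obvious early; `looks_like_model_config` is a hypothetical helper, and the required keys are those the inference script reads (config.model, config.test_cfg):

```python
# Sketch: sanity-check a loaded mmcv config before build_model. The key
# names come from what inference_realbasicvsr.py accesses; the helper
# itself is hypothetical.

def looks_like_model_config(cfg):
    """cfg: any mapping produced by loading the config file."""
    return "model" in cfg and "test_cfg" in cfg
```

If this returns False, the config path almost certainly points at the wrong file (for example, a data-preparation script with the same name).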

Missing file (crop_sub_images.py).

Thanks for your great work, but it seems to lack a related file (crop_sub_images.py) in this project for training. Could you upload this file? I would appreciate it.

Issue with training

I have followed the instructions listed in the README and completed the training. However, I ran into a few issues during the training.

  1. During training of RealBasicVSR Net, the model started giving NaN losses after iteration 220000. Attaching the training logs for the same - RealBasicVSR Net.log, RealBasicVSR.log.
  2. I used the model iter_220000 as the initialization for RealBasicVSR GAN model. After model training was completed, the images generated during inference seem to be of very poor quality. I have included a few sample generated images from the VideoLQ dataset. The same is reproducible even on images from the Vid4 dataset.

No changes have been made to the source code. The code commit ID used is fa3d328 from Jan 17, 2022.

Could you please let me know how this can be resolved? Also please let me know if any more information is required from my side in order to debug this.


[Question] Weight for the losses

Hi @ckkelvinchan, I was wondering how did you choose the weights for the losses of the second training of RealBasicVSR. I mean how did you choose $\lambda_{per}$ and $\lambda_{adv}$ mentioned in the paper "Investigating Tradeoffs in Real-World Video Super-Resolution"?
$\lambda_{per} L_{per}$ is almost 2 orders of magnitude higher than all the others in your log.

About Table 2 in the paper: how much effect do the cleaning module and cleaning loss have on the result?

The author gives the perceptual effect of different cleaning modules and losses in Figure 4, but there are no corresponding quantitative results. I mean, in Table 2, the BasicVSR++ row shows results trained with bicubic degradation. What about BasicVSR++ trained on data produced by the second-order degradation model? And why is BasicVSR++ used as the reference rather than BasicVSR?

Running the image model will cause the memory to kill. How to solve it?

Running the image model causes the process to be killed for lack of memory. How can I solve this? I have already converted the video to images. With 64 GB of RAM, is there anything I can adjust in the configuration to avoid the kill? The 24 GB of video memory is also reported as insufficient; how can that be solved? Much needed, thank you.

model training problem

When trying to train on multiple GPUs, I get the error:

ValueError: You may use too small dataset and our distributed sampler cannot pad your dataset correctly. We highly recommend you to use fewer GPUs to finish your work

I followed the instructions below:
Put the original REDS dataset in ./data
Run the following command:
python crop_sub_images.py --data-root ./data/REDS --scales 4

and trained the model following the instructions:
mim train mmedit configs/realbasicvsr_wogan_c64b20_2x30x8_lr1e-4_300k_reds.py --gpus 2 --launcher pytorch
