ckkelvinchan / realbasicvsr
Official repository of "Investigating Tradeoffs in Real-World Video Super-Resolution"
License: Apache License 2.0
Using this command produces an error, while
pip install mmcv-full
works fine.
Hi, I want to train an x2 model, but I couldn't find a way to download the REDS dataset. Do you have a download link for this dataset? Thanks.
Hi, thanks for your wonderful work. In the paper you use the Charbonnier loss, but in the config file you use L1 loss. Which one is correct? Also, have you ever tried modifying the model for x1? Looking forward to your reply. Thank you.
I know "realbasicvsr_x4.py" can enlarge the picture 4 times, but it seems impossible to set a fixed width and height when I looked into "realbasicvsr_x4.py". What should I do if I want to specify the output width and height? Thanks.
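One hedged workaround, since the model's scale factor is fixed at 4x: run inference as-is, then resize each output frame to the exact target size. The sketch below uses a plain numpy nearest-neighbour resize for self-containedness; in practice `cv2.resize` with `INTER_AREA`/`INTER_CUBIC` would give better quality. All names here are illustrative, not part of the repository's API.

```python
import numpy as np

def resize_nearest(frame: np.ndarray, target_h: int, target_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an HxWxC frame to a fixed (target_h, target_w)."""
    h, w = frame.shape[:2]
    rows = np.arange(target_h) * h // target_h  # source row for each output row
    cols = np.arange(target_w) * w // target_w  # source column for each output column
    return frame[rows][:, cols]

# example: a 4x-upscaled frame brought down to exactly 1280x720
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
out = resize_nearest(frame, 720, 1280)
print(out.shape)  # (720, 1280, 3)
```

Applied per frame after the super-resolution pass, this gives any fixed output size at the cost of one extra (cheap) resampling step.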
python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demp2.mp4 results/demo_001.mp4 --fps=59
2022-06-02 13:27:00,288 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Killed
Why? It doesn't even use my memory, CPU, or GPU.
I am running it in WSL2, which supports CUDA.
Hi Kelvin,
Overall, most of the processed images look great, except that all fine details get completely destroyed (see attached images). Is there any way to combat this issue with the current RealBasicVSR_x4.pth pretrained model, or could you perhaps help by updating the pretrained model? Thanks!
I get an error when I run the following command: ImportError: DLL load failed: The specified procedure could not be found. How can I solve it? Thank you.
Hi there, thanks for the work, but I am getting some errors...
absl-py 1.0.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
blas 1.0 mkl
ca-certificates 2021.10.26 haa95532_2
cachetools 4.2.4 pypi_0 pypi
certifi 2021.10.8 py37haa95532_0
charset-normalizer 2.0.10 pypi_0 pypi
click 7.1.2 pypi_0 pypi
colorama 0.4.4 pypi_0 pypi
cudatoolkit 10.1.243 h74a9793_0
freetype 2.10.4 hd328e21_0
google-auth 2.3.3 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
grpcio 1.43.0 pypi_0 pypi
idna 3.3 pypi_0 pypi
imageio 2.13.5 pypi_0 pypi
importlib-metadata 4.10.0 pypi_0 pypi
intel-openmp 2021.4.0 haa95532_3556
jpeg 9b hb83a4c4_2
libpng 1.6.37 h2a8f88b_0
libtiff 4.2.0 hd0e1b90_0
libuv 1.40.0 he774522_0
libwebp 1.2.0 h2bbff1b_0
lmdb 1.3.0 pypi_0 pypi
lz4-c 1.9.3 h2bbff1b_1
markdown 3.3.6 pypi_0 pypi
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py37h2bbff1b_0
mkl_fft 1.3.1 py37h277e83a_0
mkl_random 1.2.2 py37hf11a4ad_0
mmcv-full 1.4.2 pypi_0 pypi
mmedit 0.12.0 pypi_0 pypi
model-index 0.1.11 pypi_0 pypi
networkx 2.6.3 pypi_0 pypi
ninja 1.10.2 py37h559b2a2_3
numpy 1.21.2 py37hfca59bb_0
numpy-base 1.21.2 py37h0829f74_0
oauthlib 3.1.1 pypi_0 pypi
olefile 0.46 py37_0
opencv-python-headless 4.5.4.60 pypi_0 pypi
openmim 0.1.5 pypi_0 pypi
openssl 1.1.1l h2bbff1b_0
ordered-set 4.0.2 pypi_0 pypi
packaging 21.3 pypi_0 pypi
pandas 1.3.5 pypi_0 pypi
pillow 8.4.0 py37hd45dc43_0
pip 21.2.4 py37haa95532_0
protobuf 3.19.1 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 3.0.6 pypi_0 pypi
python 3.7.11 h6244533_0
python-dateutil 2.8.2 pypi_0 pypi
pytorch 1.7.1 py3.7_cuda101_cudnn7_0 pytorch
pytz 2021.3 pypi_0 pypi
pywavelets 1.2.0 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
regex 2021.11.10 pypi_0 pypi
requests 2.27.1 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.8 pypi_0 pypi
scikit-image 0.19.1 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
setuptools 58.0.4 py37haa95532_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.37.0 h2bbff1b_0
tabulate 0.8.9 pypi_0 pypi
tensorboard 2.7.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.11 h2bbff1b_0
torchaudio 0.7.2 py37 pytorch
torchvision 0.8.2 py37_cu101 pytorch
typing_extensions 3.10.0.2 pyh06a4308_0
urllib3 1.26.7 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 2.0.2 pypi_0 pypi
wheel 0.37.0 pyhd3eb1b0_1
wincertstore 0.2 py37haa95532_2
xz 5.2.5 h62dcd97_0
yapf 0.32.0 pypi_0 pypi
zipp 3.7.0 pypi_0 pypi
zlib 1.2.11 h8cc25b3_4
zstd 1.4.9 h19a0ad4_0
For pictures, I ran the test code:
(realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_000 results/demo_000
2022-01-07 06:36:34,070 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
For video, I ran the test code with --max_seq_len=2:
(realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --max_seq_len=2 --fps=12.5
2022-01-07 06:38:02,236 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 144, in
main()
File "inference_realbasicvsr.py", line 130, in main
cv2.destroyAllWindows()
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1268: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'
It gave this error.
For video, I ran the test code with default settings:
(realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --fps=12.5
2022-01-07 06:40:14,850 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 144, in
main()
File "inference_realbasicvsr.py", line 117, in main
outputs = model(inputs, test_mode=True)['output'].cpu()
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmcv\runner\fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\restorers\srgan.py", line 95, in forward
return self.forward_test(lq, gt, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\restorers\real_esrgan.py", line 211, in forward_test
output = _model(lq)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\real_basicvsr_net.py", line 87, in forward
outputs = self.basicvsr(lqs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 126, in forward
flows_forward, flows_backward = self.compute_flow(lrs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 98, in compute_flow
flows_backward = self.spynet(lrs_1, lrs_2).view(n, t - 1, 2, h, w)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 346, in forward
input=self.compute_flow(ref, supp),
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 315, in compute_flow
], 1))
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 420, in forward
return self.basic_module(tensor_input)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
input = module(input)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmcv\cnn\bricks\conv_module.py", line 201, in forward
x = self.conv(x)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\conv.py", line 420, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 4.35 GiB (GPU 0; 11.00 GiB total capacity; 4.37 GiB already allocated; 2.46 GiB free; 7.07 GiB reserved in total by PyTorch)
It gave an OOM error.
The system is Windows 10 64-bit with a 1080 Ti (11 GB).
The model is in the right folder, and the environment was created with conda using the given commands in order.
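The OOM happens because the default run feeds the whole sequence through the network at once. Chunking the frames, which is what --max_seq_len does in inference_realbasicvsr.py, bounds the peak GPU memory. A minimal sketch of the pattern (the model call is a stand-in, not the repository's real API):

```python
import torch

def chunked_forward(model, frames, max_seq_len=2):
    """Run a sequence model on short chunks to bound peak GPU memory.

    frames: tensor of shape (1, T, C, H, W). Each chunk's output is moved
    to the CPU immediately so GPU memory is reused between chunks.
    """
    outputs = []
    with torch.no_grad():  # no autograd buffers during inference
        for i in range(0, frames.size(1), max_seq_len):
            chunk = frames[:, i:i + max_seq_len]
            outputs.append(model(chunk).cpu())
    return torch.cat(outputs, dim=1)

# dummy stand-in model: doubles its input, preserving shape
out = chunked_forward(lambda x: x * 2, torch.zeros(1, 5, 3, 8, 8), max_seq_len=2)
print(out.shape)  # torch.Size([1, 5, 3, 8, 8])
```

Note that chunking trades some temporal context for memory, so very small chunk sizes may reduce quality.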
generate_video_demo.py only reads frames up to start_frame.
During startup, I encountered many errors from mmcv imports (.../site-packages/mmcv/_ext.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN2at6detail10noopDeleteEPv) up to #27.
It may be worth adding alternative dependency-installation instructions for a non-conda environment with CUDA 11.1 that allow running the program in a clean environment:
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html
pip install mmedit
That works for me; it's up to you whether to add it, but this issue may help someone. Thanks for the great repository, it helped me a lot!
Hi, thanks for sharing this great work. I am interested in the cleaning module; may I ask if it is a pre-processing step? Could you share the code or details? Thanks a lot.
I have been trying to upscale 1920x1080 video.
It seems that my GPUs can't load the model due to lack of capacity.
I am using 32 GB V100 Tesla GPUs.
When I tried it with a smaller video, it worked.
Is it right that more GPU memory is required for bigger videos?
And would it be possible to upscale 1920x1080 video with my GPUs?
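As a rule of thumb, yes: for a recurrent super-resolution network, activation memory grows roughly linearly with the number of frames held on the GPU and with the pixel count per frame. The back-of-the-envelope sketch below uses purely illustrative constants (not RealBasicVSR's real channel or layer counts); the point is the linear scaling, which is why 1080p needs about four times the memory of 960x540.

```python
def approx_activation_bytes(t, h, w, feat_channels=64, layers=30, bytes_per=4):
    """Back-of-the-envelope activation footprint for a recurrent SR net.

    All constants are illustrative assumptions: memory grows linearly in
    the number of frames (t) and in pixels per frame (h * w).
    """
    return t * h * w * feat_channels * layers * bytes_per

full = approx_activation_bytes(10, 1080, 1920)
half = approx_activation_bytes(10, 540, 960)
print(full / half)  # 4.0: halving both H and W quarters the footprint
```

So processing 1080p on a 32 GB V100 may still be possible by reducing the number of frames per forward pass (e.g. the --max_seq_len option) rather than by adding GPUs.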
Hello, I want to use 2 GPUs for inference, but I don't know how to run it on 2 GPUs. Thanks.
Hey there!
Is the pre-trained model suitable for mobile from a memory/CPU perspective, or is it meant to be run on heavier machines? Thanks!
My PC has 32 GB of memory; when I run inference, the memory is exhausted and the process is killed.
How much memory is needed?
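Peak RAM depends on how many frames are held in memory at once; inference_realbasicvsr.py reads the whole sequence up front, so long inputs exhaust RAM before the GPU is even touched. A hedged sketch of the batching idea (analogous to what --max_seq_len does for GPU memory; names are illustrative):

```python
def iter_frame_batches(paths, batch_size):
    """Yield frame paths in small batches instead of loading all at once.

    Reading and processing a few frames at a time keeps peak RAM roughly
    constant regardless of how long the input sequence is.
    """
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]

paths = [f'{i:08d}.png' for i in range(10)]
sizes = [len(b) for b in iter_frame_batches(paths, 4)]
print(sizes)  # [4, 4, 2]
```

Splitting the input into shorter clips before inference achieves the same effect without changing the script.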
Hi. First of all, thank you very much for your project; the quality is impressive. I'm trying to train the network, but after every checkpoint save it starts a long process of 300 iterations. It looks like evaluation, but I couldn't find the value 300 in the config file. I train on the REDS dataset (24k images), and that process takes longer than the training itself does for 10k iterations. What is it? Is there a way to reduce this value (300)? Can it be disabled, and what is the risk?
"loss_pix" and "loss_clean" become NaN every time after I train for a while.
I follow the instructions below:
Put the original REDS dataset in ./data
Run the following command:
python crop_sub_images.py --data-root ./data/REDS --scales 4
and train the model following the instructions:
mim train mmedit configs/realbasicvsr_wogan_c64b20_2x30x8_lr1e-4_300k_reds.py --gpus 2 --launcher pytorch
My GPU is an RTX 3090 with CUDA 11.1. Is CUDA 11.1 not supported?
Hi,
Thanks for sharing this code.
I tried to use my own sample video (mp4), but I got a GPU memory issue. Is there any restriction on the input file format or length?
This is the command I used to test:
python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/test.mp4 results/demo_001.mp4 --fps=30 --max_seq_len=20
I am a Colab user.
I uploaded my own images to the folder "demo_000" and tried to run inference, but it didn't work.
Inference worked for the pre-prepared images.
Here is the traceback.
"""
2022-01-13 13:16:36,348 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 144, in
main()
File "inference_realbasicvsr.py", line 97, in main
inputs = torch.stack(inputs, dim=1)
RuntimeError: stack expects each tensor to be equal size, but got [1, 3, 352, 352] at entry 0 and [1, 3, 604, 900] at entry 2
# show the first image as an example
import mmcv
import matplotlib.pyplot as plt

img_input = mmcv.imread('data/demo_000/00000000.png', channel_order='rgb')
img_output = mmcv.imread('results/demo_000/00000000.png', channel_order='rgb')

fig = plt.figure(figsize=(25, 20))
ax1 = fig.add_subplot(2, 1, 1)
plt.title('Input image', fontsize=16)
ax1.axis('off')
ax2 = fig.add_subplot(2, 1, 2)
plt.title('RealBasicVSR output', fontsize=16)
ax2.axis('off')
ax1.imshow(img_input)
ax2.imshow(img_output)
"""
Note that the current ipynb file has no code to create "results/demo_000".
Please add mkdir code if you can.
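Both problems above have small fixes. The RuntimeError comes from torch.stack in inference_realbasicvsr.py, which requires every input frame to share one (H, W); mixed sizes like 352x352 and 604x900 cannot be stacked. A hedged sketch of an early check, plus the missing mkdir (paths mirror the notebook; the helper name is illustrative):

```python
import os
import numpy as np

def check_equal_sizes(frames):
    """Fail early with a clear message when input frames differ in size.

    torch.stack needs every frame tensor to share one shape; mixed sizes
    produce exactly the RuntimeError shown above.
    """
    sizes = {f.shape[:2] for f in frames}
    if len(sizes) > 1:
        raise ValueError(f'input frames must share one size, got {sorted(sizes)}')

# create the output folder the notebook forgets to make
os.makedirs('results/demo_000', exist_ok=True)

check_equal_sizes([np.zeros((352, 352, 3)), np.zeros((352, 352, 3))])  # ok
```

So before uploading your own images, resize or crop them to one common resolution.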
Hi, thanks for the excellent work!
But could you please release the dataset on Google Drive, too? I can't download from the Dropbox link in China...
Thanks very much!
When processing video material, memory and video memory are always insufficient. Is there any parameter to solve this? An almost 1-minute .mp4 at 25 fps (3 MB), running with 12 GB RAM and 12 GB VRAM, runs out of resources.
Or how should I preprocess the input video to run the code more easily?
ImportError: DLL load failed while importing _ext
Hello, when will the training code be released? I hope to obtain the x2 pre-trained model through training. Thanks.
Hi @ckkelvinchan, when will the x2 and x3 models be released? Thank you very much.
I used the official NIQE code to evaluate demo_000 and the result and got an unexpected outcome: the NIQE of the raw video is 3.9829 while that of the SR video is 4.3407. I simply fed in every frame and computed the average.
I don't know what is wrong, as this result is the opposite of what the paper reports.
Hi. When would you release the training code? Could you provide a tentative date for releasing the training code? Also, do you have pretrained_weights for the x2 model?
When using:
mim install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.10.0/index.html
I am looking to set up a Space on Hugging Face Spaces (https://huggingface.co/spaces) for this model at https://huggingface.co/spaces/akhaliq/RealBasicVSR
I was able to get the model working in Colab, but the Space does not support CUDA. Is there a way around this? Thanks.
space code: https://huggingface.co/spaces/akhaliq/RealBasicVSR/blob/main/app.py#L4
I found that processing a video with the x4 model is very slow, nearly 20 minutes for a 10-second video.
If I train an x2 model, will it process faster?
And how do I train an x2 model?
I would like information about the Colab demo environment specification.
I only know the RAM and disk capacity.
Can you share the specification of the Colab Google Compute Engine GPU (name and any other information)?
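The assigned GPU varies between Colab sessions, so it is easiest to query it from inside the notebook itself. A small sketch (running `!nvidia-smi` in a cell gives the same information in more detail):

```python
# run inside the Colab notebook to see which GPU was assigned this session
import torch

name = torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no CUDA GPU'
print(name)
```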
Exception has occurred: RuntimeError
RealBasicVSR: PerceptualLoss: storage has wrong size: expected 0 got 1728
During handling of the above exception, another exception occurred:
File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\restorers\real_basicvsr.py", line 65, in init
super().init(generator, discriminator, gan_loss, pixel_loss,
During handling of the above exception, another exception occurred:
File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\builder.py", line 20, in build
return build_from_cfg(cfg, registry, default_args)
File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\builder.py", line 58, in build_model
return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 67, in init_model
model = build_model(config.model, test_cfg=config.test_cfg)
File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 81, in main
model = init_model(args.config, args.checkpoint)
File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 149, in
main()
Can you share the code/method for creating the demo videos? Thanks.
Hi. I installed as follows:
conda create -n vsr3 python=3.7 -y
conda activate vsr3
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch -y
conda install -c omnia openmm -y
#conda install -c esri mmcv-full -y
pip install mmcv-full==1.3.17 -f https://download.openmmlab.com/mmcv/dist/11.1/torch1.10.0/index.html
python3 -m pip install mmedit
Then I ran:
python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --fps=12.5
Then I get this error:
Traceback (most recent call last):
File "inference_realbasicvsr.py", line 148, in <module>
main()
File "inference_realbasicvsr.py", line 80, in main
model = init_model(args.config, args.checkpoint)
File "inference_realbasicvsr.py", line 64, in init_model
config.model.pretrained = None
File "/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/utils/config.py", line 507, in __getattr__
return getattr(self._cfg_dict, name)
File "/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/utils/config.py", line 48, in __getattr__
raise ex
AttributeError: 'ConfigDict' object has no attribute 'model'
I also checked the config object in a Jupyter notebook:
config = mmcv.Config.fromfile(config)
And config contains:
Config (path: configs/realbasicvsr_x4.py): {'argparse': <module 'argparse' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/argparse.py'>, 'os': <module 'os' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/os.py'>, 'osp': <module 'posixpath' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/posixpath.py'>, 'sys': <module 'sys' (built-in)>, 'Pool': <bound method BaseContext.Pool of <multiprocessing.context.DefaultContext object at 0x7f8f1c640d10>>, 'cv2': <module 'cv2' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/cv2/__init__.py'>, 'mmcv': <module 'mmcv' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/__init__.py'>, 'np': <module 'numpy' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/numpy/__init__.py'>, 'worker': <function worker at 0x7f8e83b0a170>, 'extract_subimages': <function extract_subimages at 0x7f8e83b0a200>, 'main_extract_subimages': <function main_extract_subimages at 0x7f8e83b0a440>, 'parse_args': <function parse_args at 0x7f8e83b0a050>}
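The dump above contains crop_sub_images-style helpers (worker, extract_subimages) instead of a model definition, which suggests, as an assumption, that configs/realbasicvsr_x4.py was accidentally overwritten by the preprocessing script. A real model config defines a top-level `model` dict, so a fail-fast check on the loaded config (names here are illustrative) surfaces this immediately:

```python
def check_model_config(cfg: dict, path: str) -> dict:
    """Fail fast when a loaded config lacks a top-level `model` section.

    mmcv's Config behaves like a dict of the config file's top-level names;
    a model config must define `model`, or init_model will fail later with
    the AttributeError shown above.
    """
    if 'model' not in cfg:
        raise KeyError(f'{path} defines no `model`; is it really a model config?')
    return cfg

check_model_config({'model': {'type': 'RealBasicVSR'}}, 'configs/realbasicvsr_x4.py')
```

Restoring the original config file from the repository should fix the error.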
Thanks for your great work, but this project seems to lack a file (crop_sub_images.py) needed for training. Could you upload this file? I would appreciate it.
Hi:
Could RealBasicVSR support CPU inference? If the answer is yes, can you give me some advice on how to make it happen?
Waiting for your reply.
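In principle yes: PyTorch models run on CPU if neither the model nor the inputs are moved to CUDA. A minimal sketch of the pattern (for inference_realbasicvsr.py this roughly amounts to skipping the .cuda() calls; expect it to be far slower than on a GPU):

```python
import torch

def run_on_cpu(model, inputs):
    """Run inference on CPU by keeping the model and inputs off the GPU."""
    model = model.cpu().eval()       # ensure all parameters live on CPU
    with torch.no_grad():            # no autograd buffers during inference
        return model(inputs.cpu())

# tiny stand-in model to demonstrate the call pattern
out = run_on_cpu(torch.nn.Linear(4, 2), torch.zeros(3, 4))
print(out.shape)  # torch.Size([3, 2])
```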
I have followed the instructions listed in the README and completed the training. However, I ran into a few issues during the training.
No changes have been made to the source code. The code commit ID used is fa3d328 from Jan 17, 2022.
Could you please let me know how this can be resolved? Also please let me know if any more information is required from my side in order to debug this.
Hi @ckkelvinchan, I was wondering how you chose the weights for the losses in the second stage of training RealBasicVSR. I mean, how did you choose them?
The author gives the perceptual effect of different cleaning modules and losses in Figure 4, but there are no corresponding quantitative results. In Table 2, the BasicVSR++ entry shows results trained with bicubic degradation. What about the results of BasicVSR++ trained on data produced by the second-order degradation model? And why is BasicVSR++ used as the reference rather than BasicVSR?
Hi, when I run the "Video as input and output" mode, I hit the problem in the title. What is the solution to this problem?
Running the image model causes the process to be killed due to memory. How can I solve this? I have already converted the video to pictures; can anything be adjusted in the configuration to avoid the kill with 64 GB of memory? The 24 GB of video memory is also reported as insufficient. How can I solve that? Much needed, thank you.
When trying to train on multiple GPUs, I get the error:
ValueError: You may use too small dataset and our distributed sampler cannot pad your dataset correctly. We highly recommend you to use fewer GPUs to finish your work
I follow the instructions below:
Put the original REDS dataset in ./data
Run the following command:
python crop_sub_images.py --data-root ./data/REDS --scales 4
and train the model following the instructions:
mim train mmedit configs/realbasicvsr_wogan_c64b20_2x30x8_lr1e-4_300k_reds.py --gpus 2 --launcher pytorch
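The exact condition lives in mmedit's distributed sampler, but the gist of the complaint can be illustrated as follows: each GPU must be able to draw at least one full batch from the dataset, so a tiny dataset with many GPUs cannot be padded evenly. The constants below are assumptions for illustration, not mmedit's actual values.

```python
def sampler_can_pad(dataset_len, num_gpus, samples_per_gpu):
    """Rough check mirroring the distributed-sampler error above.

    Illustrative assumption: the sampler needs at least one full batch
    per GPU, i.e. dataset_len >= num_gpus * samples_per_gpu.
    """
    return dataset_len >= num_gpus * samples_per_gpu

print(sampler_can_pad(24000, 2, 2))  # True: the cropped REDS set is large enough
print(sampler_can_pad(3, 8, 2))      # False: too few samples for 8 GPUs
```

If the check fails for your setup, either enlarge the dataset (e.g. verify crop_sub_images.py actually produced the sub-images) or, as the error suggests, use fewer GPUs.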