
rexnet's Introduction

(NOTICE) All the ReXNet-lite's model files have been updated!

(NOTICE) Our paper has been accepted at CVPR 2021! The paper has been updated on arXiv!

Rethinking Channel Dimensions for Efficient Model Design

Dongyoon Han, Sangdoo Yun, Byeongho Heo, and YoungJoon Yoo | Paper | Pretrained Models

NAVER AI Lab

Abstract

Designing an efficient model within the limited computational cost is challenging. We argue the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which looks like a piecewise linear function of the network stage. In this paper, we study an effective channel dimension configuration towards better performance than the convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of the output feature. We then investigate the channel configuration of a model by searching network architectures concerning the channel configuration under the computational cost restriction. Based on the investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following the channel parameterization achieves remarkable performance on ImageNet classification and transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classifications.

Model performance

  • We first illustrate our models' top-1 accuracy vs. computational cost compared with EfficientNets

Performance comparison

ReXNets vs EfficientNets

  • The CPU latencies are measured on a Xeon E5-2630 v4 with a single image, and the GPU latencies are measured on a V100 GPU with a batch size of 64.

  • EfficientNets' scores are taken from arXiv v3 of the paper.

| Model | Input Res. | Top-1 acc. | Top-5 acc. | FLOPs/params | CPU Lat. / GPU Lat. |
|---|---|---|---|---|---|
| ReXNet_0.9 | 224x224 | 77.2 | 93.5 | 0.35B/4.1M | 45ms/20ms |
| EfficientNet-B0 | 224x224 | 77.3 | 93.5 | 0.39B/5.3M | 47ms/23ms |
| ReXNet_1.0 | 224x224 | 77.9 | 93.9 | 0.40B/4.8M | 47ms/21ms |
| EfficientNet-B1 | 240x240 | 79.2 | 94.5 | 0.70B/7.8M | 70ms/37ms |
| ReXNet_1.3 | 224x224 | 79.5 | 94.7 | 0.66B/7.6M | 55ms/28ms |
| EfficientNet-B2 | 260x260 | 80.3 | 95.0 | 1.0B/9.2M | 77ms/48ms |
| ReXNet_1.5 | 224x224 | 80.3 | 95.2 | 0.88B/9.7M | 59ms/31ms |
| EfficientNet-B3 | 300x300 | 81.7 | 95.6 | 1.8B/12M | 100ms/78ms |
| ReXNet_2.0 | 224x224 | 81.6 | 95.7 | 1.8B/19M | 69ms/40ms |

ReXNet-lites vs. EfficientNet-lites

  • ReXNet-lite models do not use SE blocks or SiLU activations, aiming for faster training and inference.

  • We compare ReXNet-lites with EfficientNet-lites.

  • Here the GPU latencies are measured on two M40 GPUs; we will update the numbers measured on a V100 GPU soon.

| Model | Input Res. | Top-1 acc. | Top-5 acc. | FLOPs/params | CPU Lat. / GPU Lat. |
|---|---|---|---|---|---|
| EfficientNet-lite0 | 224x224 | 75.1 | - | 0.41B/4.7M | 30ms/49ms |
| ReXNet-lite_1.0 | 224x224 | 76.2 | 92.8 | 0.41B/4.7M | 31ms/49ms |
| EfficientNet-lite1 | 240x240 | 76.7 | - | 0.63B/5.4M | 44ms/73ms |
| ReXNet-lite_1.3 | 224x224 | 77.8 | 93.8 | 0.65B/6.8M | 36ms/61ms |
| EfficientNet-lite2 | 260x260 | 77.6 | - | 0.90B/6.1M | 48ms/93ms |
| ReXNet-lite_1.5 | 224x224 | 78.6 | 94.2 | 0.84B/8.3M | 39ms/68ms |
| EfficientNet-lite3 | 280x280 | 79.8 | - | 1.4B/8.2M | 60ms/131ms |
| ReXNet-lite_2.0 | 224x224 | 80.2 | 95.0 | 1.5B/13M | 49ms/90ms |

ImageNet-1k Pretrained models

ImageNet classification results

  • Please refer to the following pretrained models. Top-1 and top-5 accuracies are reported along with the computational costs.

  • Note that all the models are trained and evaluated with a 224x224 image size.

| Model | Input Res. | Top-1 acc. | Top-5 acc. | FLOPs/params |
|---|---|---|---|---|
| ReXNet_1.0 | 224x224 | 77.9 | 93.9 | 0.40B/4.8M |
| ReXNet_1.3 | 224x224 | 79.5 | 94.7 | 0.66B/7.6M |
| ReXNet_1.5 | 224x224 | 80.3 | 95.2 | 0.88B/9.7M |
| ReXNet_2.0 | 224x224 | 81.6 | 95.7 | 1.5B/16M |
| ReXNet_3.0 | 224x224 | 82.8 | 96.2 | 3.4B/34M |
| ReXNet-lite_1.0 | 224x224 | 76.2 | 92.8 | 0.41B/4.7M |
| ReXNet-lite_1.3 | 224x224 | 77.8 | 93.8 | 0.65B/6.8M |
| ReXNet-lite_1.5 | 224x224 | 78.6 | 94.2 | 0.84B/8.3M |
| ReXNet-lite_2.0 | 224x224 | 80.2 | 95.0 | 1.5B/13M |
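
The parameter counts above can be checked directly from the model file; a minimal sketch, assuming the repo's rexnetv1.py is importable (FLOPs require a separate profiler):

import rexnetv1

model = rexnetv1.ReXNetV1(width_mult=1.0)
n_params = sum(p.numel() for p in model.parameters())
print(f"ReXNet_1.0 parameters: {n_params / 1e6:.1f}M")  # expected to be close to the 4.8M listed above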

Finetuning results

COCO Object detection

  • The following results are obtained with Faster R-CNN with FPN:

| Backbone | Img. Size | B_AP (%) | B_AP_0.5 (%) | B_AP_0.75 (%) | Params. | FLOPs | Eval. set |
|---|---|---|---|---|---|---|---|
| FBNet-C-FPN | 1200x800 | 35.1 | 57.4 | 37.2 | 21.4M | 119.0B | val2017 |
| EfficientNetB0-FPN | 1200x800 | 38.0 | 60.1 | 40.4 | 21.0M | 123.0B | val2017 |
| ReXNet_0.9-FPN | 1200x800 | 38.0 | 60.6 | 40.8 | 20.1M | 123.0B | val2017 |
| ReXNet_1.0-FPN | 1200x800 | 38.5 | 60.6 | 41.5 | 20.7M | 124.1B | val2017 |
| ResNet50-FPN | 1200x800 | 37.6 | 58.2 | 40.9 | 41.8M | 202.2B | val2017 |
| ResNeXt-101-FPN | 1200x800 | 40.3 | 62.1 | 44.1 | 60.4M | 272.4B | val2017 |
| ReXNet_2.2-FPN | 1200x800 | 41.5 | 64.0 | 44.9 | 33.0M | 153.8B | val2017 |

COCO instance segmentation

  • The following results are obtained with Mask R-CNN with FPN; S_AP and B_AP denote segmentation AP and box AP, respectively:

| Backbone | Img. Size | S_AP (%) | S_AP_0.5 (%) | S_AP_0.75 (%) | B_AP (%) | B_AP_0.5 (%) | B_AP_0.75 (%) | Params. | FLOPs | Eval. set |
|---|---|---|---|---|---|---|---|---|---|---|
| EfficientNetB0_FPN | 1200x800 | 34.8 | 56.8 | 36.6 | 38.4 | 60.2 | 40.8 | 23.7M | 123.0B | val2017 |
| ReXNet_0.9-FPN | 1200x800 | 35.2 | 57.4 | 37.1 | 38.7 | 60.8 | 41.6 | 22.8M | 123.0B | val2017 |
| ReXNet_1.0-FPN | 1200x800 | 35.4 | 57.7 | 37.4 | 38.9 | 61.1 | 42.1 | 23.3M | 124.1B | val2017 |
| ResNet50-FPN | 1200x800 | 34.6 | 55.9 | 36.8 | 38.5 | 59.0 | 41.6 | 44.2M | 207B | val2017 |
| ReXNet_2.2-FPN | 1200x800 | 37.8 | 61.0 | 40.2 | 42.0 | 64.5 | 45.6 | 35.6M | 153.8B | val2017 |

Getting Started

Requirements

  • Python3
  • PyTorch (> 1.0)
  • Torchvision (> 0.2)
  • NumPy

Using the pretrained models

  • timm>=0.3.0 provides a wonderful wrapper for our models, thanks to Ross Wightman. Otherwise, the models can be loaded as follows (a timm-based loading sketch also appears after the examples below):

    • To use ReXNet on a GPU:
    import torch
    import rexnetv1
    
    model = rexnetv1.ReXNetV1(width_mult=1.0).cuda()
    model.load_state_dict(torch.load('./rexnetv1_1.0.pth'))
    model.eval()
    print(model(torch.randn(1, 3, 224, 224).cuda()))
    • To use ReXNet-lite on a CPU:
    import torch
    import rexnetv1_lite
    
    model = rexnetv1_lite.ReXNetV1_lite(multiplier=1.0)
    model.load_state_dict(torch.load('./rexnet_lite_1.0.pth', map_location=torch.device('cpu')))
    model.eval()
    print(model(torch.randn(1, 3, 224, 224)))
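
For reference, a minimal sketch of loading ReXNet through timm instead (assuming the registry name rexnet_100 corresponds to ReXNet_1.0):

import timm
import torch

# create_model downloads the pretrained ImageNet weights when pretrained=True
model = timm.create_model('rexnet_100', pretrained=True)
model.eval()
print(model(torch.randn(1, 3, 224, 224)).shape)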

Training own ReXNet

ReXNet can be trained with any PyTorch training code, including the official ImageNet training in PyTorch example, given the model file and proper arguments. Since the provided model file is not complicated, it can also be converted easily to train a ReXNet in other frameworks such as MXNet. For MXNet, we recommend GluonCV as the training code.

Using PyTorch, we trained ReXNets with one of the popular ImageNet classification codebases, Ross Wightman's pytorch-image-models, for more efficient training. After including ReXNet's model file in the training code, one can train ReXNet-1.0x with the following command line:

./distributed_train.sh 4 /imagenet/ --model rexnetv1 --rex-width-mult 1.0 --opt sgd --amp \
 --lr 0.5 --weight-decay 1e-5 \
 --batch-size 128 --epochs 400 --sched cosine \
 --remode pixel --reprob 0.2 --drop 0.2 --aa rand-m9-mstd0.5 

DropPath or MixUp may be needed when training a bigger model.
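
The command above assumes the ReXNet model file has been registered with the training code; a minimal sketch of one way to do that with timm's model registry (an assumption about the setup, not the authors' exact integration; the --rex-width-mult flag is a custom argument of their fork):

from timm.models.registry import register_model
from rexnetv1 import ReXNetV1

@register_model
def rexnetv1(pretrained=False, **kwargs):
    # width_mult would need to be wired to the training script's --rex-width-mult flag
    return ReXNetV1(width_mult=kwargs.pop('rex_width_mult', 1.0),
                    classes=kwargs.pop('num_classes', 1000))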

License

This project is distributed under the MIT license.

How to cite

@misc{han2021rethinking,
      title={Rethinking Channel Dimensions for Efficient Model Design}, 
      author={Dongyoon Han and Sangdoo Yun and Byeongho Heo and YoungJoon Yoo},
      year={2021},
      eprint={2007.00992},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}


rexnet's Issues

Latency, throughput, GPU performance

Since ReXNet is based on MNASNet-style architectures (MobileNet v1 and v2), I guess it suffers from the same low-throughput, low GPU performance issue.
Can you provide any numbers?

I am specifically interested in per-image and per-batch latency and throughput (images/sec) on hardware such as CPUs, ARM processors, GPUs, etc.

I know this is a lot to ask, but I believe it will be valuable to other researchers too, and any kind of numbers would be helpful.

Thankfully, this paper focuses more on the design principles, which may be applicable to other GPU-friendly SOTA architectures such as TResNet or ResNeSt.
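
For reference, a minimal sketch of how per-batch GPU latency and throughput are commonly measured (assumptions: a CUDA device and the repo's rexnetv1.py on the path; this is not the authors' benchmarking script):

import time
import torch
import rexnetv1

model = rexnetv1.ReXNetV1(width_mult=1.0).cuda().eval()
x = torch.randn(64, 3, 224, 224).cuda()  # batch of 64, as in the README's GPU latency setup

with torch.no_grad():
    for _ in range(10):          # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    iters = 50
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"per-batch latency: {elapsed / iters * 1000:.1f} ms")
print(f"throughput: {iters * x.size(0) / elapsed:.1f} images/sec")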


COCO ReXNet Model

Hi @dyhan0920 and other Contributors.
Thank you for making public such a good repo.

It's totally amazing that ReXNet models are better than EfficientNet models. Looks like we got another SOTA model!

I saw the profiling on the ImageNet & COCO datasets. I want to try evaluating those models but was not able to for COCO. So, can you provide the weights for the COCO ReXNet models?

extract features

model = ReXNetV1(width_mult=1.5)
fea = model.extract_features(18)

TypeError: conv2d() received an invalid combination of arguments - got (int, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:

  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (int, Parameter, NoneType, tuple, tuple, tuple, int)
  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (int, Parameter, NoneType, tuple, tuple, tuple, int)
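
The traceback indicates that an int was passed where conv2d expected a Tensor input; a minimal sketch of calling extract_features with an image tensor instead (an assumption about the intended usage based on the traceback, not a confirmed API contract):

import torch
import rexnetv1

model = rexnetv1.ReXNetV1(width_mult=1.5).eval()
with torch.no_grad():
    # pass an image batch, not an integer index
    feats = model.extract_features(torch.randn(1, 3, 224, 224))
print(feats.shape)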
How to apply to segmentation

Counting FLOPs

Thanks for sharing your wonderful work.

I am curious about counting FLOPs.
I found this, but it shows higher FLOPs when I use HardSwish instead of Swish.
Can you share your FLOPs counting script?

Thank you very much
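
The repository does not ship a FLOPs-counting script; a commonly used sketch with the third-party thop package (an assumption about tooling, not the authors' method; note that thop reports multiply-accumulates):

import torch
from thop import profile
import rexnetv1

model = rexnetv1.ReXNetV1(width_mult=1.0)
macs, params = profile(model, inputs=(torch.randn(1, 3, 224, 224),))
print(f"MACs: {macs / 1e9:.2f}B, params: {params / 1e6:.1f}M")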

Problem when using the ReXNet_V1-2.0x weights

rexnet

An error occurs when using the 2.0x weights with the code above.

The same code works without problems with the 1.0x weights.

So I suspect the error lies somewhere other than in my code.

pretrained models problem

When I run

import torch
import rexnetv1

model = rexnetv1.ReXNetV1(width_mult=1.0).cuda()
model.load_state_dict(torch.load('./rexnetv1_1.0x.pth'))
model.eval()
print(model(torch.randn(1, 3, 224, 224).cuda()))

I met this problem:
TypeError: __init__() takes from 1 to 2 positional arguments but 3 were given

I did not change anything; how can I solve it?

Improvements for ResNet

Hi,

Thanks for the great work. I have several questions.

First, do the numbers in Table 7 include the training techniques mentioned in Appendix B.2?
Second, I'm wondering why the improvements for ResNet50 and VGG16 are much smaller than those for MobileNets (0.8% and 0.2% compared to 4%).

Thanks,
Rudy

Comparison with RegNet

Edit : (I separated this question from a previous issue #3 )

This paper considers network design spaces similar to the approach taken in the recent RegNet paper (Designing Network Design Spaces). Are the principles from that paper congruent to yours?


Converting to ONNX produces a warning

Hi,
I convert ReXNet to ONNX and get the warnings below:
F:\rexnetv1.py:122: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator add_. This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
out[:, 0:self.in_channels] += x
F:\rexnetv1.py:122: TracerWarning: There are 4 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
out[:, 0:self.in_channels] += x

Could you help me?
I think this makes the ONNX result differ from the PyTorch result.
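
A common workaround for this tracing warning is to rewrite the partial-channel shortcut out of place, e.g. with torch.cat instead of the in-place slice assignment (a sketch of the idea, not the authors' fix; self.use_shortcut and self.in_channels follow the names in rexnetv1.py):

import torch

# inside LinearBottleneck.forward (sketch): add the input to the first in_channels
# channels and rebuild the output tensor without modifying it in place
if self.use_shortcut:
    out = torch.cat([out[:, :self.in_channels] + x,
                     out[:, self.in_channels:]], dim=1)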

Unstable training

Hi,
Thanks for your great work. I am now using ReXNet as the backbone for my own classification task, and I found that training is not stable (the validation accuracy fluctuates by up to about 1%). I use the Adam optimizer with ExponentialLR as the LR scheduler, without a warmup strategy. I would like to know how to make training stable; can you give me some guidance? Looking forward to your reply.
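
For reference, one common way to add a linear warmup on top of a cosine schedule (a generic sketch, not the authors' recommendation; epoch counts are placeholders):

import math
from torch.optim.lr_scheduler import LambdaLR

def warmup_cosine(optimizer, warmup_epochs, total_epochs):
    # linear warmup for the first warmup_epochs, then cosine decay to zero
    def fn(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return 0.5 * (1 + math.cos(math.pi * progress))
    return LambdaLR(optimizer, lr_lambda=fn)

# usage: scheduler = warmup_cosine(optimizer, warmup_epochs=5, total_epochs=200)
#        call scheduler.step() once per epoch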

ONNX export failed: Couldn't export Python operator SiLUJitImplementation

Thanks for sharing your wonderful work.

When I convert to onnx, I encounter a problem: ONNX export failed: Couldn't export Python operator SiLUJitImplementation.
The conversion code is as follows:

def pth2onnx(model_name, model, input_shape):
    input_name = ["input"]
    output_name = ["output"]
    input = Variable(torch.randn(input_shape)).cpu()
    m_model = model.cpu()
    test_path = r'./'
    torch.onnx.export(m_model, input, model_name, input_names=input_name, output_names=output_name, verbose=False, opset_version=11)

I am using the latest code.
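
For reference, one common workaround (a sketch under the assumption that the non-exportable activation is a Swish/SiLU submodule, e.g. a jit-scripted SiLU, rather than a functional call) is to swap those modules for the exportable nn.SiLU before calling torch.onnx.export:

import torch.nn as nn

def make_exportable(module):
    # Recursively replace custom Swish/SiLU modules with the ONNX-exportable nn.SiLU.
    for name, child in module.named_children():
        if 'swish' in type(child).__name__.lower() or 'silu' in type(child).__name__.lower():
            setattr(module, name, nn.SiLU(inplace=True))
        else:
            make_exportable(child)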

Model Initialization

If possible, could you please share the exact initialization parameters passed into ReXNetV1 in order to create ReXNetV1_2.0? The options (and defaults) are:

  • input_ch=16
  • final_ch=180
  • width_mult=1.0
  • depth_mult=1.0
  • classes=1000
  • use_se=True
  • se_ratio=12
  • dropout_ratio=0.2
  • bn_momentum=0.9

My understanding is that width_mult should be set to 2. However, doing so and then attempting to load the provided weights for the -2.0 model results in many mismatches between the saved weights and the declared model weights. The paper isn't straightforward in providing guidance in this regard either, but I think that could be resolved easily the way ResNet and EfficientNet provide convenience methods to build each version of their network, e.g.: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L232
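
A hypothetical convenience builder in the spirit of those torchvision helpers is sketched below; whether width_mult=2.0 alone reproduces the released -2.0 checkpoint is exactly what this issue asks, so treat it as an illustration of the request rather than the answer:

import torch
from rexnetv1 import ReXNetV1

def rexnetv1_2_0(weights_path=None, **kwargs):
    # assumption: the -2.0 model corresponds to width_mult=2.0 with default remaining arguments
    model = ReXNetV1(width_mult=2.0, **kwargs)
    if weights_path is not None:
        model.load_state_dict(torch.load(weights_path, map_location='cpu'))
    return model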

404

In the README, the ReXNet 1.3, 1.5, and 2.0 links redirect to a 404 error.

A few questions from studying and implementing rexnetv1

Hello,

Thank you for your hard work on this research.
I am an engineer who has been studying ReXNetV1 by implementing it by hand, and a few questions came up along the way, so I am carefully asking them here.

  1. Is the stem channel also designed to be affected by width_mult?

image

As I understand it, the stem channel is affected when width_mult is below 1.0 and is fixed at 32 when it is 1.0 or above. However, I do not understand why you divide by width_mult. Or is there another reason?

  2. Why nn.Dropout rather than nn.Dropout2d?

image

As I understand it, nn.Dropout is applied while the feature map is still in BxCxHxW form, before flattening. A convolution then maps to the n classes the user wants, and dropout is applied, so it seems dropout should act per channel. With nn.Dropout, dropout is not applied per channel, which looks like unintended behavior; isn't that a problem? (Doesn't nn.Dropout drop a fraction p of the K elements of a flattened BxK tensor?) If I have misunderstood something or there is another intent, please advise.

I look forward to your reply. Thank you for the great paper.
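
For reference (not from the repository), a tiny demo of the difference in question: nn.Dropout drops individual elements of a BxCxHxW tensor, while nn.Dropout2d zeroes whole channels:

import torch
import torch.nn as nn

x = torch.ones(1, 4, 2, 2)
# freshly constructed modules are in training mode, so dropout is active
print(nn.Dropout(p=0.5)(x))    # drops individual elements anywhere in the tensor
print(nn.Dropout2d(p=0.5)(x))  # zeroes entire channels at once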

RandAug + EraseAug + SE Block + Swish ?

In my work, I am in the process of evaluating ReXNet for use in in-house model tuning.

You trained ReXNet models with RandAug, EraseAug, SE blocks, and SiLU (Swish) activations.

None of these were used in MobileNetV2 training.

Since you argue that adjusting the number of channels per layer in MobileNetV2 is an important factor for improving performance, I trained ReXNet without those techniques and got 72.9-73.2% top-1 accuracy. To check whether it was a training problem, I also trained ReXNet with the above techniques enabled, and the result came out similar to the paper.

https://github.com/ildoonet/pytorch-image-models

so the questions are,

  1. The argument that adjusting the channel sizes diminishes the representational bottleneck seems uncertain from the paper; what do you think?

  2. Have you tried training MobileNetV2 models with the above techniques, without adjusting the channel sizes?

Thanks. Looking forward to your response.

Training recipe

Thanks for sharing your code.

I tried to train my own ReXNet using the recipe the repo provided:

./distributed_train.sh 4 /imagenet/ --model rexnetv1 --rex-width-mult 1.0 --opt sgd --amp \
 --lr 0.5 --weight-decay 1e-5 \
 --batch-size 128 --epochs 400 --sched cosine \
 --remode pixel --reprob 0.2 --drop 0.2 --aa rand-m9-mstd0.5 

Your paper says ReXNet was trained with a stochastic depth rate of 0.2. However, the provided recipe does not use stochastic depth.

My question is: in order to reproduce the results, do I need to use stochastic depth?

pretrain rexnet lite

Thank you so much for the awesome models.
Could you please provide rexnet_lite_1.3.pth, ... ?
Thank you in advance.

Rank Expansion & Training

While programming a TensorFlow implementation of ReXNet, I had a couple of questions regarding your proposed rank expansion and training methodologies.

First Question

First of all, regarding your implementation of the [Linear Bottleneck] residual connection, the code adds the input to the first in_channels channels of the sequential output:

if self.use_shortcut:
    out[:, 0:self.in_channels] += x

However, when reading the ResNet paper, the authors of the residual block said that when the input channels and output channels are different, it is necessary for x to be linearly projected to the same shape and then added to the residual function. My implementation of this can be seen as follows:

# Tensorflow implementation of ReXNet
# y = the linear bottleneck model (ConvBNSwish + ConvAct + Squeeze + etc.)
if use_shortcut:
    x = Conv2D(filters=self.out_channels, strides=self.stride,...)(_input)
    x = BatchNormalization(...)(x)
    y += x
return y

I was wondering if there was a particular reason why you did not opt to use a linear projection on the input, and if there are any significant performance differences between the two methodologies.

Second Question

Furthermore, I noticed in the [Linear Bottleneck] code that you only provided a depthwise operation without a pointwise convolution. The code provided says

ConvBNAct(out, in_channels=dw_channels, channels=dw_channels, kernel=3, stride=stride, pad=1,
                  num_group=dw_channels, active=False)

Does this mean this is not a depthwise separable convolution? Or is this also a method to reduce the representational bottleneck?

Thank you!



Hello,

Thank you for the great paper and the accompanying code. While looking for lightweight networks after MobileNet, I found ReXNet, and its strong performance caught my interest. I am implementing it in TensorFlow to run it on mobile phones, and a few questions came up, so I am leaving this issue.

Thank you!

Rexnet-lite?

Are there any plans to update the rexnet-lite model?

The results of MNN and ONNX differ from PyTorch

Hi,
thanks for your great work.
I use ReXNet to train a classification net,
and I convert it from PyTorch -> ONNX -> MNN.
The ONNX and MNN results are the same, but they differ from the PyTorch result.

Can ReXNet be converted to ONNX normally while keeping the results identical?
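
For reference, a minimal sketch of comparing PyTorch and ONNX Runtime outputs numerically (assumption: an exported file named rexnet.onnx; an MNN comparison would follow the same pattern):

import numpy as np
import torch
import onnxruntime as ort
import rexnetv1

model = rexnetv1.ReXNetV1(width_mult=1.0).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    torch_out = model(x).numpy()

sess = ort.InferenceSession('rexnet.onnx', providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: x.numpy()})[0]
print('max abs diff:', np.abs(torch_out - onnx_out).max())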

One problem

In rexnetv1.py, is there an error in "layers = [ceil(0 * depth_mult) for element in layers]" (line 135)?

GPU memory

Dear all,

Thanks for the nice work.
I have a question about used GPU memory when running the code for inference mode.

I compared the trainable parameters and GPU memory of ResNet50 from torchvision and ReXNetV1 for a torch tensor of shape [1, 3, 1080, 1920]. As a result, ReXNetV1 has fewer trainable parameters, but it requires more GPU memory than ResNet50.
Please note that the width_mult parameter of ReXNetV1 is 1.0.
GPU memory usage is checked with the nvidia-smi command while running the code below.

  • Model parameter

    • ReXNetV1: 4,796,873
    • ResNet50: 25,557,032
  • GPU memory

    • ReXNetV1: 9,723MiB
    • ResNet50: 7,819MiB
  • Experiment environment

    • OS: Linux 16.0.4
    • torch version: 1.5.1
    • torchvision version: 0.6.1
    • GPU: single NVIDIA Titan RTX

Thus, I wonder why ReXNetV1 requires more memory than ResNet50. In other words, I wonder which module in ReXNetV1 uses the most memory.
For reference, the code used in the experiment is below.

import torch
import torchvision.models as models
import rexnetv1

# Please select the model to be used in the experiment to measure GPU memory usage. 
# Comment out the model to be unused. (e.g # model = models.resnet50(pretrained=True).to('cuda'))
# Option 1: RexNetV1
model = rexnetv1.ReXNetV1(width_mult=1.0).to('cuda:0')
model.load_state_dict(torch.load('./rexnetv1_1.0x.pth'))
# Option 2: ResNet50
# model = models.resnet50(pretrained=True).to('cuda:0')
model.eval()


x = torch.randn([1, 3, 1080, 1920], dtype=torch.float).to('cuda:0')

for idx in range(100):
    y = model(x)

print('Model params.: {}'.format(sum(p.numel() for p in model.parameters() if p.requires_grad)))
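
One note on the measurement (an observation, not from the authors): the loop above runs forward passes without torch.no_grad(), so autograd keeps intermediate activations alive even in eval mode, which inflates the reported memory. A sketch of the inference loop with gradients disabled:

with torch.no_grad():
    for idx in range(100):
        y = model(x)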
