facebookresearch / dinov2
PyTorch code and models for the DINOv2 self-supervised learning method.
License: Apache License 2.0
I'm not able to find code for semantic segmentation. The paper says:
a linear layer is trained to predict class logits from the patch tokens. It is used to produce a low-resolution logit map (e.g. 32x32 for a model with patch size 16), which is then upsampled to full resolution (512x512) to obtain a segmentation map.
Does this mean a linear layer with 32*32 = 1024 output classes needs to be trained? And do `n_last_blocks_list = [1, 4]` and `n_last_blocks = max(n_last_blocks_list)` need to be changed to `n_last_blocks_list = [1, 1]` and `n_last_blocks = max(n_last_blocks_list)`?
Is there any sample code for semantic segmentation?
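For what it's worth, here is how I currently read that paragraph (a minimal sketch under my own assumptions, not the authors' code): the linear layer maps each patch token to `num_classes` logits (not 1024 classes; the 32x32 is the spatial grid of patches), and the low-resolution logit map is then upsampled:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch only: a linear head over patch tokens, per my reading of the paper.
# num_classes is the number of segmentation classes, NOT 32*32 = 1024.
num_classes = 21  # hypothetical, e.g. PASCAL VOC
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
head = nn.Linear(384, num_classes)  # 384 = ViT-S/14 embedding dim

x = torch.randn(1, 3, 448, 448)  # 448/14 = 32 patches per side (patch size is 14 here)
with torch.no_grad():
    tokens = model.forward_features(x)['x_norm_patchtokens']  # (1, 1024, 384)
logits = head(tokens)                                          # (1, 1024, num_classes)
logits = logits.permute(0, 2, 1).reshape(1, num_classes, 32, 32)  # low-res logit map
seg = F.interpolate(logits, size=(448, 448), mode='bilinear', align_corners=False)
pred = seg.argmax(dim=1)  # (1, 448, 448) segmentation map
```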
Thanks for your great work; it's really impressive!
I want to try it in my research. Specifically, I want to use ViT-small to replace the ImageNet-pretrained backbone for a monocular 3D object detection task. The parameter counts of these two networks are comparable, so I expected the DINOv2-pretrained ViT-small to perform better.
However, the result shows that the DINOv2-pretrained ViT-small performs 20% worse, and the loss is hard to converge.
I have already tuned the learning rate; what else can I do to make the ViT backbone viable?
I think there might be a bug in the forward pass of `_LinearClassifierWrapper` in hubconf.py: it throws an error when passing a batch of images through any of the linear classifier models (e.g., dinov2_vits14_lc).
Here's a quick test, where the expected output shape is 10x1000 (batch size x num_classes), but it throws an error instead:
```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
classifier = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_lc', layers=4, pretrained=True)
classifier.to(device)
classifier.eval()
with torch.no_grad():
    x = torch.rand(10, 3, 224, 224).to(device)
    out = classifier(x)
print(out.shape)
```
==> RuntimeError: mat1 and mat2 shapes cannot be multiplied (296x384 and 1920x1000)
I believe the issue is caused by this bit of the forward function of the `_LinearClassifierWrapper` class:
```python
...
linear_input = torch.cat([
    x[0][1].squeeze(0),
    x[1][1].squeeze(0),
    x[2][1].squeeze(0),
    x[3][1].squeeze(0),
    x[3][0].squeeze(0).mean(0)
])
...
```
This squeezes out the batch dimension, but I think it should instead be the following, which retains the batch dimension and concatenates along dimension 1:
```python
linear_input = torch.cat([
    x[0][1],
    x[1][1],
    x[2][1],
    x[3][1],
    x[3][0].mean(1)
], dim=1)
```
After making this change I'm able to reproduce the linear-evaluation top-1 accuracy scores reported in the README.md.
Here's the patch I used to test this fix:
```python
import torch
from functools import partial

device = 'cuda' if torch.cuda.is_available() else 'cpu'  # added: device is used below
classifier = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_lc', layers=4, pretrained=True)

def forward(self, x):
    if self.layers == 1:
        x = self.backbone.forward_features(x)
        cls_token = x["x_norm_clstoken"]
        patch_tokens = x["x_norm_patchtokens"]
        # keep the batch dimension in this branch as well
        linear_input = torch.cat([
            cls_token,
            patch_tokens.mean(1)
        ], dim=1)
    elif self.layers == 4:
        x = self.backbone.get_intermediate_layers(x, n=4, return_class_token=True)
        linear_input = torch.cat([
            x[0][1],
            x[1][1],
            x[2][1],
            x[3][1],
            x[3][0].mean(1)
        ], dim=1)
    else:
        assert False, f"Unsupported number of layers: {self.layers}"
    return self.linear_head(linear_input)

classifier.forward = partial(forward, classifier)
classifier.to(device)
```
Your work is impressive, and thanks for the code release.
I have a question about linear semantic segmentation. In the paper, an upsampling operation follows the linear layer. Is it just an interpolation like `F.interpolate()`, as also mentioned in #25? If so, my understanding is that it interpolates the class logits computed by the linear layer. Is that right?
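If that reading is right, I picture the upsampling step like this (my own sketch, not confirmed by the authors; `logits` stands in for a hypothetical low-resolution class-logit map):

```python
import torch
import torch.nn.functional as F

# Hypothetical low-resolution logit map from the linear head: (B, C, 32, 32)
logits = torch.randn(1, 21, 32, 32)
# Bilinear interpolation of the logits up to the full image resolution
upsampled = F.interpolate(logits, size=(512, 512), mode='bilinear', align_corners=False)
pred = upsampled.argmax(dim=1)  # (1, 512, 512) segmentation map
```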
I'm trying to export the model to .onnx.
Here's my code:
```python
import torch

model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').to('cuda:0')
model.eval()

# Generate some input data
input_data = torch.randn(1, 3, 224, 224).to('cuda:0')

# Pass the input data through the model
output = model(input_data)

torch.onnx.export(model, input_data, 'model.onnx')
```
I got this error:
```
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-17-2f8453f4374c> in <cell line: 1>()
----> 1 torch.onnx.export(model, input_data, 'model.onnx')

13 frames
~/.cache/torch/hub/facebookresearch_dinov2_main/dinov2/models/vision_transformer.py in prepare_tokens_with_masks(self, x, masks)
    193         x = self.patch_embed(x)
    194         if masks is not None:
--> 195             x = torch.where(masks.unsqueeze(-1), self.mask_token.to(x.dtype).unsqueeze(0), x)
    196
    197         x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
Did I miss anything?
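One workaround I'm considering (an untested assumption on my part, not an official fix) is to export entirely on CPU, so every tensor lives on the same device:

```python
import torch

# Untested workaround sketch: trace/export on CPU to avoid the device mismatch
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').cpu().eval()
input_data = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, input_data, 'model.onnx')
```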
https://github.com/facebookresearch/dinov2/blob/main/dinov2/data/datasets/image_net.py#L103-L110
Where do these files come from?

```python
f"entries-{split.value.upper()}.npy"
f"class-ids-{split.value.upper()}.npy"
f"class-names-{split.value.upper()}.npy"
```
Hello,
Have you considered using the ConvNeXt architecture for training DINOv2?
ConvNeXt has been shown to deliver better performance and lower latency on tasks such as CLIP. For example, in the open_clip repository, ConvNeXt-L@320 achieves a +1.4% gain in zero-shot accuracy and is more than twice as fast as ViT-L@336.
While ViT may be easier to use in a "tokenize everything for my transformer" world, it's worth considering that CNNs still deserve... attention ^^
Best,
Simon
Hello, I recently came across the demo on instance retrieval in this repository, and I'm very interested in implementing a similar feature in my own project. However, I'm having some difficulty understanding the exact steps and required components.
Could you please provide some guidance or documentation on how to replicate the instance retrieval functionality demonstrated in the demo?
I appreciate any help or direction you can provide. Thank you!
Is the model training based on supervised learning or unsupervised learning?
Can somebody please explain this? (Windows 10 + Anaconda)
```
Requirement already satisfied: xformers in c:\users\atc\appdata\roaming\python\python39\site-packages (0.0.18)
```

versus

```
ResolvePackageNotFound:
  - xformers::xformers=0.0.18
```
The full log:
```
(base) E:\dino-v2>conda env create -f conda.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - xformers::xformers=0.0.18

(base) E:\dino-v2>pip install xformers
Requirement already satisfied: xformers in c:\users\atc\appdata\roaming\python\python39\site-packages (0.0.18)
Requirement already satisfied: pyre-extensions==0.0.23 in c:\users\atc\appdata\roaming\python\python39\site-packages (from xformers) (0.0.23)
Requirement already satisfied: torch==2.0.0 in c:\users\atc\appdata\roaming\python\python39\site-packages (from xformers) (2.0.0)
Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (from xformers) (1.21.5)
Requirement already satisfied: typing-inspect in c:\users\atc\appdata\roaming\python\python39\site-packages (from pyre-extensions==0.0.23->xformers) (0.8.0)
Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from pyre-extensions==0.0.23->xformers) (4.3.0)
Requirement already satisfied: sympy in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (1.10.1)
Requirement already satisfied: networkx in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (2.8.4)
Requirement already satisfied: jinja2 in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (2.11.3)
Requirement already satisfied: filelock in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (3.6.0)
Requirement already satisfied: MarkupSafe>=0.23 in c:\programdata\anaconda3\lib\site-packages (from jinja2->torch==2.0.0->xformers) (2.0.1)
Requirement already satisfied: mpmath>=0.19 in c:\programdata\anaconda3\lib\site-packages (from sympy->torch==2.0.0->xformers) (1.2.1)
Requirement already satisfied: mypy-extensions>=0.3.0 in c:\programdata\anaconda3\lib\site-packages (from typing-inspect->pyre-extensions==0.0.23->xformers) (0.4.3)

(base) E:\dino-v2>conda env create -f conda.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - xformers::xformers=0.0.18

(base) E:\dino-v2>pip install --pre -U xformers
Requirement already satisfied: xformers in c:\users\atc\appdata\roaming\python\python39\site-packages (0.0.18)
Collecting xformers
  Downloading xformers-0.0.19.dev516-cp39-cp39-win_amd64.whl (112.2 MB)
     ---------------------------------------- 112.2/112.2 MB 767.9 kB/s eta 0:00:00
Collecting pyre-extensions==0.0.29
  Downloading pyre_extensions-0.0.29-py3-none-any.whl (12 kB)
Requirement already satisfied: torch==2.0.0 in c:\users\atc\appdata\roaming\python\python39\site-packages (from xformers) (2.0.0)
Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (from xformers) (1.21.5)
Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from pyre-extensions==0.0.29->xformers) (4.3.0)
Requirement already satisfied: typing-inspect in c:\users\atc\appdata\roaming\python\python39\site-packages (from pyre-extensions==0.0.29->xformers) (0.8.0)
Requirement already satisfied: filelock in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (3.6.0)
Requirement already satisfied: networkx in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (2.8.4)
Requirement already satisfied: jinja2 in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (2.11.3)
Requirement already satisfied: sympy in c:\programdata\anaconda3\lib\site-packages (from torch==2.0.0->xformers) (1.10.1)
Requirement already satisfied: MarkupSafe>=0.23 in c:\programdata\anaconda3\lib\site-packages (from jinja2->torch==2.0.0->xformers) (2.0.1)
Requirement already satisfied: mpmath>=0.19 in c:\programdata\anaconda3\lib\site-packages (from sympy->torch==2.0.0->xformers) (1.2.1)
Requirement already satisfied: mypy-extensions>=0.3.0 in c:\programdata\anaconda3\lib\site-packages (from typing-inspect->pyre-extensions==0.0.29->xformers) (0.4.3)
Installing collected packages: pyre-extensions, xformers
  Attempting uninstall: pyre-extensions
    Found existing installation: pyre-extensions 0.0.23
    Uninstalling pyre-extensions-0.0.23:
      Successfully uninstalled pyre-extensions-0.0.23
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.18
    Uninstalling xformers-0.0.18:
      Successfully uninstalled xformers-0.0.18
Successfully installed pyre-extensions-0.0.29 xformers-0.0.19.dev516

(base) E:\dino-v2>conda env create -f conda.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - xformers::xformers=0.0.18
```
I am getting this error:

```
Operator wasn't built - see python -m xformers.info for more info
triton is not available
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
```

I tried building xformers from source and then installing triton as well, but it's not working. Has anyone else faced this issue? I'm on Windows, btw.
Hi,
I'm trying to install DINOv2 with poetry.
Currently the version check fails because torchvision is at 0.15.1 and you pinned 0.15.0 as a dependency.
Could you please update the dependencies so that it works with future versions?
Thank you

```
poetry add git+https://github.com/facebookresearch/dinov2.git
Updating dependencies
Resolving dependencies... (0.0s)

Because dinov2 (0.0.1) @ git+https://github.com/facebookresearch/dinov2.git@HEAD depends on torchvision (0.15.0)
 and myproject depends on torchvision (^0.15.1), dinov2 is forbidden
```
Hi, amazing work you've done here!
I would like to use this as the backbone in a CNN-based autoencoder.
I have no experience with transformers (.. except from interfering with them in ChatGPT), so I would appreciate any advice on how to replace my current backbone (ResNet-50v2, first 3 layers) with DINOv2!
When I pass a 1024px image through a Swin Transformer and get the intermediate image features, I get back feature maps of sizes:

```
torch.Size([1, 128, 256, 256])
torch.Size([1, 128, 256, 256])
torch.Size([1, 256, 128, 128])
torch.Size([1, 512, 64, 64])
torch.Size([1, 1024, 32, 32])
```

How do I get something like this from DINOv2?
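From what I can tell, DINOv2's ViT is isotropic, so every layer shares one spatial resolution (H/14 x W/14) rather than a Swin-style pyramid. Here is a sketch of getting spatial feature maps (using `get_intermediate_layers(..., reshape=True)` from the repo's vision_transformer.py; an FPN-style neck resizing these maps is my own assumption):

```python
import torch

# Sketch: spatial feature maps from DINOv2 (all at one resolution, unlike Swin)
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()

x = torch.randn(1, 3, 1022, 1022)  # 1022 = 73 * 14; sides must be multiples of 14
with torch.no_grad():
    feats = model.get_intermediate_layers(x, n=4, reshape=True)
for f in feats:
    print(f.shape)  # each is (1, 384, 73, 73) for ViT-S/14
```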
Hi,
I was wondering if there is an easy way to feed a sequence of frames while still using the pretrained weights (i.e., without retraining the network). I am not looking to aggregate embeddings by passing each frame independently; I am looking for native video-level inputs.
Thanks in advance!
Thanks for sharing your work! I am trying to incorporate your encoder into a depth estimation task, but I haven't found any relevant code.
The paper says:
DPT: we use the DPT decoder (Ranftl et al., 2021) on top of our frozen models and set up a regression task. We scale the size of the head following the dimension of the features for each architecture. We show results for all baselines, all datasets and all setups in Table 11.
Can you share a simple example of how to use your encoder for depth estimation? Thanks again.
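In the meantime, here is the kind of minimal setup I'm experimenting with (a sketch with a hypothetical lightweight convolutional head, not the paper's DPT decoder):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: frozen DINOv2 encoder + a small regression head for depth.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
for p in backbone.parameters():
    p.requires_grad = False  # keep the encoder frozen, as in the paper

head = nn.Sequential(  # hypothetical lightweight head, NOT the DPT decoder
    nn.Conv2d(384, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 1, 1),
)

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    feat = backbone.get_intermediate_layers(x, n=1, reshape=True)[0]  # (2, 384, 16, 16)
depth = F.interpolate(head(feat), size=x.shape[-2:], mode='bilinear', align_corners=False)
print(depth.shape)  # (2, 1, 224, 224); train the head with a regression loss
```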
1.) `VisionTransformer.forward` calls `self.head(ret["x_norm_clstoken"])` in eval, where `ret` is the result of `forward_features` (dinov2/dinov2/models/vision_transformer.py, line 290 in ea5276e).
2.) The value of `forward_features` can be the result of `forward_features_list` (dinov2/dinov2/models/vision_transformer.py, line 223 in ea5276e).
3.) `forward_features_list` returns a list of dictionaries (dinov2/dinov2/models/vision_transformer.py, line 211 in ea5276e).
So `ret["x_norm_clstoken"]` cannot work in that case, since `ret` is a list of outputs, not an individual output.
Correct me if I'm wrong, but it seems the paper does not include a performance comparison with supervised CNN models?
Thanks so much for this inspiring and excellent work!
I implemented patch matching, but it did not perform as well as the demo in the paper. Could you share more details of the patch matching procedure, or release the code?
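For reference, my current implementation looks roughly like this (my own sketch of patch matching via cosine similarity of patch tokens; the authors' demo may differ):

```python
import torch
import torch.nn.functional as F

# Sketch: match patches between two images by cosine similarity of patch tokens
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()

img_a = torch.randn(1, 3, 224, 224)  # stand-ins for two preprocessed images
img_b = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    ta = model.forward_features(img_a)["x_norm_patchtokens"][0]  # (256, 384)
    tb = model.forward_features(img_b)["x_norm_patchtokens"][0]
sim = F.normalize(ta, dim=-1) @ F.normalize(tb, dim=-1).T  # (256, 256) similarities
best = sim.argmax(dim=1)  # for each patch in A, the best-matching patch in B
```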
Currently, only the backbone models and an ImageNet linear head have been released. Are there plans to release the other heads mentioned in the paper (e.g. depth estimation, segmentation)?
This is the function in the official demo. I want to wrap it in a Python script that I can run from the command line, passing parameters such as the image path, to achieve the same functionality. How can I do that?
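A generic pattern that might fit (a hypothetical sketch; `run_demo` stands in for whatever the demo function actually is):

```python
import argparse

def run_demo(image_path: str) -> None:
    ...  # hypothetical: call the demo function on the given image here

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the DINOv2 demo on one image")
    parser.add_argument("--image", required=True, help="path to the input image")
    args = parser.parse_args()
    run_demo(args.image)
```

This can then be invoked as e.g. `python demo_script.py --image /path/to/img.jpg`.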
Thanks for the outstanding work. Do you have plans to release the LVD-142M dataset or the code for data processing?
Sorry, installing cuml_cu11 with `--extra-index-url https://pypi.nvidia.com` generated this error:
````
pip install --extra-index-url https://pypi.nvidia.com cuml_cu11
Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com
Collecting cuml_cu11
  Using cached cuml_cu11-23.4.0.1681368248.tar.gz (6.8 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [16 lines of output]
      Traceback (most recent call last):
        File "", line 2, in
        File "", line 34, in
        File "C:\Users\Lenovo\AppData\Local\Temp\pip-install-lp6867xq\cuml-cu11_29831863f455450faf83190fcc7a7e3c\setup.py", line 137, in
          raise RuntimeError(open("ERROR.txt", "r").read())
      RuntimeError:
      ###########################################################################################
      The package you are trying to install is only a placeholder project on PyPI.org repository.
      This package is hosted on NVIDIA Python Package Index.
      This package can be installed as:
      ```
      $ pip install --extra-index-url https://pypi.nvidia.com cuml_cu11
      ```
      ###########################################################################################
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.
````
What is the problem?
I was having trouble running DINOv2 training, as shown below.
I would appreciate it if you could let me know if there is a solution.

```
(dinov2) abe@ganesa:~/kuma-ssl/dinov2$ python dinov2/run/train/train.py --nodes 1 --config-file dinov2/configs/train/vitl16_short.yaml
Traceback (most recent call last):
  File "/home/abe/kuma-ssl/dinov2/dinov2/run/train/train.py", line 11, in <module>
    from dinov2.logging import setup_logging
ModuleNotFoundError: No module named 'dinov2'
(dinov2) abe@ganesa:~/kuma-ssl/dinov2$ ls
CODE_OF_CONDUCT.md  LICENSE        README.md   demo.py  hubconf.py      requirements-dev.txt  scripts   setup.py
CONTRIBUTING.md     MODEL_CARD.md  conda.yaml  dinov2   pyproject.toml  requirements.txt      setup.cfg
(dinov2) abe@ganesa:~/kuma-ssl/dinov2$ cd dinov2/
(dinov2) abe@ganesa:~/kuma-ssl/dinov2/dinov2$ ls
__init__.py  __pycache__  configs  data  distributed  eval  fsdp  layers  logging  loss  models  run  train  utils
```
Thanks a lot for your efforts to open-source the code and checkpoints!
Recently, another team from Meta also released a strong segmentation dataset/model: Segment Anything. I think a DINOv2 ViT backbone fine-tuned on Segment Anything would be of great help to the CV community, since it would combine the best self-supervised and supervised models :). Do you have any plans to try this and release the backbone weights? For starters, ViT-Base/Large would be especially accessible for individual/academic researchers.
Thanks again for your time!
Hello,
I'm having an issue with DINOv2 when using high-resolution images like the one available at this link. The features returned by the model contain NaN values. This happens with all four available models and is consistently present for images around the same size.
I would like to know if you have any ideas about what could be causing this. Here's a minimal example:
```python
import torch
import numpy as np
import torchvision.transforms as T
from PIL import Image
import hubconf

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dino = hubconf.dinov2_vits14().to(device)  # Same issue with larger models

img = Image.open('4k.png')
pw, ph = np.array(img.size) // 14
transform = T.Compose([
    T.Resize((14 * ph, 14 * pw), interpolation=T.InterpolationMode.BICUBIC),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
tensor = transform(img)[:3].unsqueeze(0).to(device)
with torch.no_grad():
    features = dino.forward_features(tensor)['x_norm_patchtokens'][0]
print(features)  # NaN
```
Hi there!
Is there any documentation or guide on how to use this model as a feature extractor?
I'd like to feed it some images and receive features of those images.
Thanks.
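In case it helps, here is the minimal usage I've pieced together (my own sketch, not official documentation; the torchvision preprocessing values are the standard ImageNet ones):

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Sketch: DINOv2 as a frozen feature extractor
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
transform = T.Compose([
    T.Resize(256, interpolation=T.InterpolationMode.BICUBIC),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

img = transform(Image.open('image.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    out = model.forward_features(img)
cls_feat = out['x_norm_clstoken']        # (1, 384): global image embedding
patch_feat = out['x_norm_patchtokens']   # (1, 256, 384): per-patch features
```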
I would like to ask if there is a possibility of modifying the code to feed 3D images? If not, do you have plans to extend the code to 3D images?
Thanks,
Nima
Can you provide a guideline on how to fine-tune the model for the video action recognition task?
Thank you!!
Hi, I tried to reproduce the evaluation numbers in Tables 4 and 6 of the paper.
I downloaded the backbones and linear classifiers from the README and composed the classifier like this:
```python
import torch
import torch.nn as nn
from functools import partial

# helpers from this repo's eval code
from dinov2.eval.linear import create_linear_input, LinearClassifier
from dinov2.eval.utils import ModelWithIntermediateLayers


class Dino(nn.Module):
    def __init__(self, type="dinov2_vits14", pretrained=False):
        super().__init__()
        # get feature model
        model = torch.hub.load(
            "facebookresearch/dinov2", type, pretrained=pretrained
        ).cuda()
        autocast_ctx = partial(
            torch.cuda.amp.autocast, enabled=True, dtype=torch.float16
        )
        self.feature_model = ModelWithIntermediateLayers(
            model, n_last_blocks=1, autocast_ctx=autocast_ctx
        ).cuda()
        sample_input = torch.randn(1, 3, 224, 224).cuda()
        sample_output = self.feature_model(sample_input)

        # get linear readout
        out_dim = create_linear_input(
            sample_output, use_n_blocks=1, use_avgpool=True
        ).shape[1]
        self.classifier = LinearClassifier(
            out_dim, use_n_blocks=1, use_avgpool=True
        ).cuda()
        if pretrained:
            vits_linear = torch.load(f"/pretrained_checkpoints/{type}_linear_head.pth")
            self.classifier.linear.load_state_dict(vits_linear)

    def forward(self, x):
        x = self.feature_model(x)
        x = self.classifier(x)
        return x
```
Unfortunately, I did not get the same results you report in the paper.
The results I get for ViT-g/14 (ViT-S/14 in parentheses) are:
Val: 85.79 (80.44)
Real: 89.23 (86.14)
v2: 77.33 (69.76)
Inet-C: 23.38 (55.38)
Inet-A: 76.81 (33.85)
Inet-R: 79.68 (53.04)
Inet-Sketch: 62.47 (39.62)
Maybe you can give me a hint as to where I'm going wrong?
Thank you for sharing Figure 1 from the paper, which showcases the mapping of features to RGB channels using PCA. I found it to be really impressive! I was wondering if I could ask a question about the details of the PCA process. Specifically, I was curious to know if the features were normalized, scaled, or translated before applying PCA. If they were, could you kindly provide me with more information on how the normalization, scaling, or translation was carried out? For instance, I am curious about the axis along which normalization or scaling was performed, and whether the normalization or scaling factors were computed based on individual images or the entire training dataset. Thank you very much in advance for your help!
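For reference, here is how I currently approximate the visualization (my own guess at the procedure, not the authors' recipe; in particular, the per-image min-max scaling is my assumption):

```python
import torch

# Sketch: project patch tokens onto the top-3 principal components -> RGB
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    tokens = model.forward_features(x)['x_norm_patchtokens'][0]  # (256, 384)

U, S, V = torch.pca_lowrank(tokens, q=3)       # PCA via low-rank SVD
proj = (tokens - tokens.mean(dim=0)) @ V       # project onto top-3 components
proj = (proj - proj.min(0).values) / (proj.max(0).values - proj.min(0).values)
rgb = proj.reshape(16, 16, 3)  # 224/14 = 16 patches per side -> RGB image
```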
Hi there!
Will the dataset ever be shared?
Thanks a lot
Fra
Why am I unable to install cuml-cu11? I've tried various methods, including accessing https://pypi.nvidia.com/ directly, but I've been denied access due to lack of permissions.
If you could share the visualization code, it would be a great contribution.
Hi,
Table 9 (Evaluation of frozen features on instance-level recognition) reports 50.7 on Oxford-M and 19.7 on Oxford-H for OpenCLIP-G/14. However, we only get 39.4 on Oxford-M and 11.7 on Oxford-H (even without the 1M distractors) using the evaluation code at https://github.com/filipradenovic/revisitop/blob/master/python/evaluate.py#L39.
We also tried revisited Oxford (without the 1M distractors) with the DINOv2-B14 distilled backbone and the `make_classification_eval_transform()` transform from this repo; the metrics we get are 0.58 for Oxford-M and 0.337 for Oxford-H, which is much lower than the 0.729 and 0.495 reported in the paper.
Similarly for Met, we cannot reproduce the eval metrics for either OpenCLIP-G/14 or DINOv2-B14. If possible, could you help clarify where the discrepancy comes from?
It would be great if you could provide the code to run on the eval sets, or the generated embeddings!
Thanks!
How to evaluate the model on image retrieval datasets such as Oxford-H? Thanks a lot!
Your work is so awesome. I am migrating ViT-small to my task as a feature extractor; is the code below the right way to load the pretrained weights? It did not succeed in my task, although DINO worked well.

```python
# dinov2 model (vit_small from dinov2.models.vision_transformer;
# interpolate_pos_embed is my own position-embedding adaptation helper)
model = vit_small(img_size=224, block_chunks=0, mask=None, drop_path_rate=0.1, patch_size=14)
static_dict = torch.hub.load_state_dict_from_url('https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_pretrain.pth')
interpolate_pos_embed(model, static_dict)  # position adaptation
msg = model.load_state_dict(static_dict, strict=False)
print('missing keys: ', msg.missing_keys)
print('unexpected keys: ', msg.unexpected_keys)
```
I want to use this as a general-purpose backbone for multiple downstream tasks. Does it already perform well on OCR and text recognition tasks?