
learnable-triangulation-pytorch's People

Contributors

danivilanova, karfly, shrubb, urs-waldmann


learnable-triangulation-pytorch's Issues

implementation detail of w_{c,j}

Hi,

May I ask about the implementation details of the weight network that learns w_{c,j}?

The description just above formula (5) says it is "comprised of two convolutional layers, global average pooling and three fully-connected layers".
What are the number and kernel sizes of the convolutions, and how many neurons are in each fully-connected layer?
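
My current guess is something like the sketch below; all channel counts, kernel sizes and neuron counts are my own assumptions, not taken from the paper.

    import torch
    from torch import nn

    # Guess at the confidence branch: two convs, global average pooling, three FC layers.
    # Channel/neuron counts are assumptions for illustration only.
    class ConfidenceHead(nn.Module):
        def __init__(self, in_channels=256, n_joints=17):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.fc = nn.Sequential(
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, n_joints),   # one confidence w_{c,j} per joint
            )

        def forward(self, features):
            x = self.conv(features)        # (B, 128, H, W)
            x = x.mean(dim=(2, 3))         # global average pooling -> (B, 128)
            return self.fc(x)              # (B, n_joints)

Is the real head close to this?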

Thanks~
Zhe

rotation in volumetric.py

        # random rotation
        if self.training:
            theta = np.random.uniform(0.0, 2 * np.pi)
        else:
            theta = 0.0

        if self.kind == "coco":
            axis = [0, 1, 0]  # y axis
        elif self.kind == "mpii":
            axis = [0, 0, 1]  # z axis

        center = torch.from_numpy(base_point).type(torch.float).to(device)

        # rotate
        coord_volume = coord_volume - center
        coord_volume = volumetric.rotate_coord_volume(coord_volume, theta, axis)
        coord_volume = coord_volume + center

Hi, as shown in the code above, why do you randomly rotate the coord_volume during training?
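
For context, my understanding is that the rotation itself is just an axis-angle rotation of the voxel coordinates around the base point, roughly like this sketch (my own code, not the repository's rotate_coord_volume):

    import math
    import torch

    # Axis-angle rotation via Rodrigues' formula; my own illustration,
    # not the repository's volumetric.rotate_coord_volume.
    def rotate_points(points, theta, axis):
        ax, ay, az = axis                                # e.g. [0, 1, 0] for the y axis
        n = math.sqrt(ax * ax + ay * ay + az * az)
        ax, ay, az = ax / n, ay / n, az / n
        K = torch.tensor([[0.0, -az,  ay],
                          [ az, 0.0, -ax],
                          [-ay,  ax, 0.0]], dtype=points.dtype)
        R = torch.eye(3, dtype=points.dtype) \
            + math.sin(theta) * K \
            + (1.0 - math.cos(theta)) * (K @ K)
        return points @ R.T                              # rotate every point

Is the random theta meant as a rotation augmentation of the volume?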

Training the Volumetric model

Hello,

I have pre-processed the Human3.6M dataset according to your given manual.
However, when I try to train the volumetric model, I face this problem:

Exception has occurred: AssertionError
  File "/media/saman/storage_device/ubuntu/learnable-triangulation/mvn/datasets/human36m.py", line 111, in __init__
    assert len(self.keypoints_3d_pred) == len(self)
  File "/media/saman/storage_device/ubuntu/learnable-triangulation/train.py", line 65, in setup_human36m_dataloaders
    crop=config.dataset.train.crop if hasattr(config.dataset.train, "crop") else True,
  File "/media/saman/storage_device/ubuntu/learnable-triangulation/train.py", line 115, in setup_dataloaders
    train_dataloader, val_dataloader, train_sampler = setup_human36m_dataloaders(config, is_train, distributed_train)
  File "/media/saman/storage_device/ubuntu/learnable-triangulation/train.py", line 446, in main
    train_dataloader, val_dataloader, train_sampler = setup_dataloaders(config, distributed_train=is_distributed)
  File "/media/saman/storage_device/ubuntu/learnable-triangulation/train.py", line 485, in <module>
    main(args)

It seems that len(self) = 389938 while len(self.keypoints_3d_pred) = 159181.

Thank you for helping me!

Code

It's October, and I wanted to ask about the code.

How to get derivatives of w mentioned in algebraic triangulation approach?

According to equation 4 in your paper, it is straightforward to get the 3D coordinate vector y via differentiable Singular Value Decomposition. But I can't figure out the derivatives with respect to w during the backward pass of network training. I lack some knowledge about matrix derivatives; could you explain this derivation in detail?
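
As a sanity check, I did confirm that autograd propagates gradients through the SVD to the weights with a toy example like the one below (shapes and names are my own, not the repository's); what I am missing is the analytical form of those derivatives.

    import torch

    # Toy check that gradients flow through SVD-based triangulation to per-view weights w.
    # Shapes and names are assumptions for illustration only.
    n_views = 4
    w = torch.rand(n_views, requires_grad=True)            # learned confidences
    A = torch.randn(2 * n_views, 4)                        # stacked DLT rows (2 per view)

    A_weighted = A * w.repeat_interleave(2).unsqueeze(1)   # scale each view's two rows by w_c
    _, _, vh = torch.linalg.svd(A_weighted)
    y_hom = vh[-1]                                         # right singular vector of the smallest singular value
    y = y_hom[:3] / y_hom[3]                               # dehomogenize -> 3D point

    loss = y.pow(2).sum()                                  # dummy loss standing in for the MSE to ground truth
    loss.backward()
    print(w.grad)                                          # non-zero: autograd differentiates through the SVD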

Is it possible to use this excellent tool without any training?

Hi, professor!
In my lab, my supervisor and I cannot apply for the Human3.6M dataset, but we need your tool to deploy human-pose software. So I wonder whether we can use your tool directly on a video file, without any datasets?
Looking forward to your reply :)

testing on the CMU Panoptic dataset

Hi @karfly,
Thanks sincerely for sharing! I tried to test the model on the CMU Panoptic dataset and ran into trouble, as shown in the result picture:
[result image: vol1]

  • the ResNet-152 predicted 2D is OK

  • the GT 3D projected to 2D is OK

  • but the predicted 3D is bad! All points collapse to the center of the picture

How can I debug this?

code upload time

Hi, I am very interested in your work. When will you upload your code? Thanks.

Bbox requirements - Do they need to be square?

It seems that the BBOXes used for Human3.6M are all squares.

If we are transferring to other datasets, do we need to ensure that the BBOXes are square, or can they be any size? Also, must the BBOXes for the same person remain the same size from one frame to another?

Thank you!

Questions about the batch size

Hello,

I tried to train the volumetric model, but I found that the batch size stored in the configuration file (which is 5) is too large for a Titan Xp GPU (12 GB). Even when I set the batch size to 2, the Titan Xp's memory was exhausted. Could you tell me what type of GPUs you used, how many of them, and what the batch size was on each?

Thank you very much.

Regarding the volumetric triangulation, how is the heatmap projected into the Volume

Hi,
I read your paper, but I don't quite understand how the heatmaps are projected into the volume. Do you know the exact orientation of each camera and use a transformation matrix, or how does the system project the different views into the same volume?
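My rough mental model is that each view's heatmap is sampled at the projections of the voxel centres, roughly like the sketch below; all names, shapes and the use of grid_sample are my own assumptions, not the repository's code.

    import torch
    import torch.nn.functional as F

    # Sketch: unproject one view's heatmap into a shared 3D volume by projecting every
    # voxel centre with P = K [R | t] and bilinearly sampling the heatmap there.
    def unproject(heatmap, proj_matrix, coord_volume):
        # heatmap:      (C, H, W) single-view heat/feature map
        # proj_matrix:  (3, 4) full projection matrix
        # coord_volume: (D, D, D, 3) world coordinates of the voxel centres
        D = coord_volume.shape[0]
        points = coord_volume.reshape(-1, 3)                                  # (N, 3)
        points_hom = torch.cat([points, torch.ones_like(points[:, :1])], 1)   # (N, 4)
        proj = points_hom @ proj_matrix.T                                     # (N, 3)
        uv = proj[:, :2] / proj[:, 2:3]                                       # pixel coordinates

        H, W = heatmap.shape[-2:]                                             # normalize to [-1, 1]
        grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                            2 * uv[:, 1] / (H - 1) - 1], dim=1).view(1, 1, -1, 2)
        sampled = F.grid_sample(heatmap[None], grid, align_corners=True)      # (1, C, 1, N)
        return sampled.view(heatmap.shape[0], D, D, D)                        # (C, D, D, D)

Is this roughly what happens, or does the projection work differently?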
Also, you wrote that you tested a network trained on Human3.6M on the CMU Panoptic dataset. I presume that the camera setup in the two datasets is different. So is it possible to use a network trained on Human3.6M on other data out of the box, or did you need to configure the system for the different data?
Best regards,
David

Question for training

Dear Dr. Karim Iskakov

Thanks for your reply to my last question. I am trying to repeat your experiment, and I wonder what hardware you used and how long a full training run took.

I have trained your model, but I seem to have run into some trouble. After three days of training on two Tesla P100 GPUs (during which the training process looked normal), the "Volatile GPU-Util" reading started fluctuating very rapidly and I don't know why. I also can no longer use TensorBoard to watch the training process, although I could before.

about camera parameters for wild test

Thanks for this wonderful work!

I want to predict 3D results with human_alg_model using my own data and camera parameters, but something goes wrong with the camera parameters passed to the Camera class in https://github.com/karfly/learnable-triangulation-pytorch/blob/master/mvn/utils/multiview.py, at the self.K.dot(self.extrinsics) line. The error says: "shapes (3,3) and (143,6,1) not aligned: 3 (dim 1) != 6 (dim 1)".

My camera parameters' shapes for the 3 cameras are: top camera: K (3,3), R (143,3,1), t (143,3,1), dist (1,5); right camera: K (3,3), R (139,3,1), t (139,3,1), dist (1,5); left camera: K (3,3), R (147,3,1), t (147,3,1), dist (1,5).

Would you please show the camera parameter shapes needed here?
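
For reference, the shapes I would expect a single pinhole camera to need are sketched below (this is my assumption, not confirmed by the repository); a per-frame stack like (143, 3, 1) would presumably have to be reduced to one (3, 3) rotation and one (3, 1) translation per view.

    import numpy as np

    # Assumed per-view camera parameters (one set per camera, not per frame).
    K = np.eye(3)                    # (3, 3) intrinsics
    R = np.eye(3)                    # (3, 3) rotation, world -> camera
    t = np.zeros((3, 1))             # (3, 1) translation
    dist = np.zeros(5)               # (5,)   distortion coefficients

    extrinsics = np.hstack([R, t])   # (3, 4)
    P = K.dot(extrinsics)            # (3, 4) projection matrix, matching self.K.dot(self.extrinsics)

Is this the expected layout?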

Visualising the evaluation results

Is there an easy way/tool for us to visualise the evaluation results of the Human3.6M data set, much like how it is done/shown in the video?

Or would we have to parse the pickle dataset ourselves?

Thank you!

Question about joints' confidences regression and if only 2 views available

Hi @karfly , thanks for sharing your great work! The code is very clear. I'm just curious about 2 questions.
Take AlgebraicTriangulationNet as an example,

  1. In GlobalAveragePoolingHead, you regress a confidence score for each 2D joint for the 3D triangulation and then apply the MSE loss on the 3D joints directly. How can the network learn the confidence scores correctly in this kind of self-supervised manner?
  2. Have you done any experiments with only 2 cameras? I don't know whether the 3D joints can still be recovered robustly in that case.

training stages

Hey,
In the paper, when it comes to training: "The networks are trained for 6 epochs with 10^-4 learning rate for the 2D backbone and a separate learning rate 10^-3 for the volumetric backbone". I want to know whether the whole network is trained end-to-end after these two separate trainings?
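
For clarity, by the two learning rates I mean a setup like the following sketch; the module split is my own assumption for illustration, not the repository's exact structure.

    import torch
    from torch import nn

    # Stand-ins for the 2D backbone and the volumetric part (assumed names).
    backbone = nn.Conv2d(3, 17, kernel_size=3, padding=1)
    volume_net = nn.Conv3d(17, 17, kernel_size=3, padding=1)

    # Two parameter groups with the learning rates quoted from the paper.
    opt = torch.optim.Adam([
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": volume_net.parameters(), "lr": 1e-3},
    ])

My question is whether, with this optimizer, everything is then trained jointly end-to-end or in separate stages.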

How to generate these diagrams in your paper and save detected results of the 3D human skeleton?

https://reurl.cc/Zn8RNp
https://reurl.cc/A1XE9j
https://reurl.cc/RdRK2Z
https://reurl.cc/5gX7XM

Could you please also give some examples of using video streams as input, as OpenPose does, and of generating output files in a format like this one?
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md

Thank you.

evaluation results on Human3.6M?

Hi,

I saw that you use every fifth frame for evaluation in your paper.
Does that mean the MPJPE of 20.8 is evaluated on every fifth frame?

Have you evaluated your model on all frames? Could you provide the results on the full set of frames?
Thanks

Creating new "ground truth" for several datasets

Hi, thanks for this amazing work.

Do you have any plans for running your method on other datasets and releasing the resulting poses? This would be very beneficial for correcting many ground truth errors. Specifically I'm thinking of MPI-INF-3DHP (some annotations are wrong) and HumanEva-I (in some sequences the head ground truth is wrong), in addition to the already mentioned H3.6M (problems with S9) and CMU-Panoptic (ground truth unavailable for many sequences, e.g. dance, plus errors).

I think your results could be better than the original ground truth in many cases. So by e.g. leave-one-subject-out training and testing, one could generate new, polished "ground truth" for each subject of a particular dataset (to avoid memorizing the training set errors).

backbone pretraining details

I had some questions regarding the pretraining done prior to the end-to-end training for the algebraic triangulation. From the paper I saw that the backbone was first trained on the COCO dataset and then finetuned on MPII+Human3.6M (presumably MPII+CMU for the CMU experiments). Was the pretraining done using the softmax+softargmax combo with soft MSE, or is it similar to other pose papers that do MSE on the heatmaps?
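
For clarity, by the softmax+softargmax combo I mean something like the sketch below (my own code, not the repository's):

    import torch

    # 2D soft-argmax: spatial softmax followed by the expected (x, y) coordinate.
    def soft_argmax_2d(heatmaps):
        # heatmaps: (B, J, H, W) raw logits
        B, J, H, W = heatmaps.shape
        probs = torch.softmax(heatmaps.reshape(B, J, -1), dim=-1).reshape(B, J, H, W)
        xs = torch.arange(W, dtype=probs.dtype)
        ys = torch.arange(H, dtype=probs.dtype)
        x = (probs.sum(dim=2) * xs).sum(dim=-1)   # expectation over x
        y = (probs.sum(dim=3) * ys).sum(dim=-1)   # expectation over y
        return torch.stack([x, y], dim=-1)        # (B, J, 2) sub-pixel keypoints

with an MSE on the resulting keypoint coordinates, as opposed to an MSE on the heatmaps themselves.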

Thanks!

How to obtain bbox file?

Hi, could you tell me how to generate the bounding-box file?
Or could you provide the bounding-box files you used?

How to download ground truth bounding boxes? (.mat files)

In the section 'Human3.6M preprocessing scripts':

  1. Additionally, download ground truth bounding boxes and unpack them like so: "/S1/MySegmentsMat/ground_truth_bb/Phoning 1.58860488.mat".

However, I do not know how to download the ground truth bounding boxes (.mat files).

I found that all the converters use Matlab.

Thank you.

problem about obtaining 3D coordinates

hi,

I saw that the input images of RANSACTriangulationNet, AlgebraicTriangulationNet and VolumetricTriangulationNet are cropped from the original image according to the bounding box.

For the RANSAC triangulation and algebraic triangulation methods, the coordinates of the 2D keypoints should be in the original image coordinate system, but the 2D keypoints predicted by the 2D pose backbone are in the bounding-box (crop) coordinate system.

But I didn't see where you convert the predicted 2D keypoints into the original image coordinate system.
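
What I expected to see is something like the conversion below (a sketch with assumed variable names), or alternatively the camera intrinsics being shifted and scaled by the same bbox so that the crop coordinates stay consistent with the projection matrices.

    import numpy as np

    # Map 2D keypoints from crop-pixel coordinates back to original-image coordinates.
    # bbox = (left, upper, right, lower) in the original image; crop_size = (crop_w, crop_h).
    def crop_to_original(keypoints_2d, bbox, crop_size):
        left, upper, right, lower = bbox
        scale_x = (right - left) / crop_size[0]
        scale_y = (lower - upper) / crop_size[1]
        out = np.asarray(keypoints_2d, dtype=np.float64).copy()
        out[:, 0] = out[:, 0] * scale_x + left
        out[:, 1] = out[:, 1] * scale_y + upper
        return out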

Could you help clear up my confusion?

Thanks

Erroneous cropping for undistorted images

After I undistorted the images using undistort-h36m.py, I found that the images were also cropped. But when the dataset is loaded for training or evaluation, the images are cropped again, leading to erroneous inputs.

camera parameters for wild test

Thanks for this amazing work and code.
I want to use this model to evaluate my own datasets. I calibrated the camera parameters and fed them to the model, but the results are ridiculous. I guess this is caused by wrong camera parameters; would you please share Human3.6M's raw camera parameters?

Is the metric or something wrong?

Thank you for sharing your code.

I pulled your code repository and set up the Human3.6M dataset according to your instructions.
After that, I wanted to test "Algebraic triangulation", so I ran the following command:
python3 train.py --eval --eval_dataset train --config eval/human36m_alg.yaml --logdir ./logs
And then, I got the following message:

args: Namespace(config='eval/human36m_alg.yaml', eval=True, eval_dataset='train', local_rank=None, logdir='./logs', seed=42)
Number of available GPUs: 1
Loading pretrained weights from: ./data/pretrained/human36m/pose_resnet_4.5_pixels_human36m.pth
Reiniting final layer filters: module.final_layer.weight
Reiniting final layer biases: module.final_layer.bias
Successfully loaded pretrained weights for backbone
Successfully loaded pretrained weights for whole model
Loading data...
Experiment name: [email protected]:14:55
/home/akihiko/MV3Dpose/learnable-triangulation-pytorch/mvn/datasets/human36m.py:220: RuntimeWarning: invalid value encountered in true_divide
action_scores[k] = v['total_loss'] / v['frame_count']
/home/akihiko/MV3Dpose/learnable-triangulation-pytorch/mvn/datasets/human36m.py:220: RuntimeWarning: invalid value encountered in double_scalars
action_scores[k] = v['total_loss'] / v['frame_count']
Done.

It seems to work, but I get a huge error.
For example, the average "per pose error" is about 1200, and that means 1200 mm, doesn't it?

Thank you.

details of experimental settings on CMU dataset

Thank you very much for sharing the great work.
I want to know the detail of the experiment setting for CMU dataset.

In your paper, it is said that 4 cameras were used for validation; I would like to know their camera IDs.
Also, I can't find how the videos were split into train/val in the "Monocular Total Capture" paper.

Finally, I would appreciate knowing how long training takes.
Thank you very much.

Please provide example of input

Could you please provide an example input for the model so that we can reproduce the results? I requested access to Human3.6M, but I think it will take a long time to receive approval.

Question about live Cameras

Hi, thanks for such nice work. I want to know whether this work can be used with live cameras instead of images. I want to use 2 live cameras; is that possible with your approach? I read your paper, but I'm still confused about this.

CMU Panoptic style annotation

This work would be very useful for me if there were models available trained on CMU Panoptic. I would like to get CMU-like (so COCO-like) annotations for Human3.6M. I thought of just running the CMU-trained net on H36M, as the video shows very impressive results. If you cannot release the code yet, can you perhaps upload your resulting CMU-like poses for the H36M images?

I'm looking into implementing this myself, by changing stuff from H36M paths and settings to CMU in this repo, but I'm really not sure if I'm looking at a day's worth of work here or two weeks. I'm a bit reluctant to jump straight in, since if it was so easy you probably would have released it already...

How do you prepare the bounding box when transferring from the CMU dataset to Human3.6M?

Amazing job, guys.

I am interested in the transfer part mentioned in the paper, and I am very curious about the bounding box you used when transferring the pre-trained CMU model to the Human3.6M dataset.
So which bbox did you use for the transfer: the original bbox provided by Human3.6M, or a modified Human3.6M bbox that matches the pre-trained (CMU) model?

AssertionError : len(self.keypoints_3d_pred) != len(self)

Hi,

I followed the instructions for downloading the dataset as mentioned here. But even after following these instructions, I end up with an assertion error at this line. I downloaded the validation results from the Google Drive link mentioned (file size 5.6M for results_val.pkl; train works fine). Is this because of some error in the validation file link?

Thanks!

'undistort-h36m.py' with 'undistort_images=True' gets an error. Following your instructions, is there something I should download?

Following your instructions, is there something I should download?

python3 undistort-h36m.py human36m-multiview-labels-GTbboxes.npy

'undistort-h36m.py' with 'undistort_images=True' gets the following error:

python3 undistort-h36m.py /.../dataroot human36m-multiview-labels-GTbboxes.npy 20

Dataset length: 161362
Computing distorted meshgrids
0%| | 0/161362 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "undistort-h36m.py", line 47, in <module>
    sample = dataset[sample_idx]
  File "/../../../mvn/datasets/human36m.py", line 143, in __getitem__
    assert os.path.isfile(image_path), "%s doesn't exist" % image_path
AssertionError: /.../dataroot/S1/Directions-1/imageSequence-undistorted/54138969/img_000001.jpg doesn't exist

I think the problem occurs here:
    # load image
    image_path = os.path.join(
        self.h36m_root, subject, action, 'imageSequence' + '-undistorted' * self.undistort_images,
        camera_name, 'img_%06d.jpg' % (frame_idx + 1))

Thank you.

Code release plan

Hi, I am interested in your learnable triangulation work. Is there a plan for when you will release the code?
Thanks!

Which GPU did you use during training?

Hello!
Thank you for sharing your awesome code.

Now I am trying to train the algebraic version of the network with the following command:
python train.py --config train/human36m_alg.yaml --logdir ./logs

And I got the following RuntimeError:

Traceback (most recent call last):
  File "train.py", line 486, in <module>
    main(args)
  File "train.py", line 465, in main
    n_iters_total_train = one_epoch(model, criterion, opt, config, train_dataloader, device, epoch, n_iters_total=n_iters_total_train, is_train=True, master=master, experiment_dir=experiment_dir, writer=writer)
  File "train.py", line 190, in one_epoch
    keypoints_3d_pred, keypoints_2d_pred, heatmaps_pred, confidences_pred = model(images_batch, proj_matricies_batch, batch)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akihiko/MV3Dpose/learnable-triangulation-pytorch/mvn/models/triangulation.py", line 158, in forward
    heatmaps, _, alg_confidences, _ = self.backbone(images)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akihiko/MV3Dpose/learnable-triangulation-pytorch/mvn/models/pose_resnet.py", line 301, in forward
    x = self.layer3(x)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akihiko/MV3Dpose/learnable-triangulation-pytorch/mvn/models/pose_resnet.py", line 79, in forward
    out = self.bn1(out)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 76, in forward
    exponential_average_factor, self.eps)
  File "/home/akihiko/anaconda3/envs/LT_MV3Dpose/lib/python3.6/site-packages/torch/nn/functional.py", line 1623, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 18.00 MiB (GPU 0; 10.92 GiB total capacity; 9.54 GiB already allocated; 9.12 MiB free; 39.55 MiB cached)

I'm using a GTX 1080 Ti, which has 11 GB of VRAM, but it seems that CUDA runs out of memory.
So I'd like to ask which GPU you used during training.

undistort-h36m

Hello,

I have done all the steps to preprocess the Human3.6M dataset. However, when I run the undistort-h36m script, I face this problem:

Dataset length: 527599
Computing distorted meshgrids
0%| | 0/527599 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "undistort-h36m.py", line 47, in <module>
    sample = dataset[sample_idx]
  File "/media/saman/storage_device/ubuntu/learnable-triangulation/mvn/datasets/human36m_preprocessing/../../../mvn/datasets/human36m.py", line 182, in __getitem__
    if self.keypoints_3d_pred is not None:
AttributeError: 'Human36MMultiViewDataset' object has no attribute 'keypoints_3d_pred'

Testing in the wild

Hello, thanks for sharing your awesome work. Could you advise whether your code can be adapted for testing on images in the wild, and whether at inference it actually requires any information about the world positions of the cameras or the camera parameters?

From a first look it doesn't seem so, but I want to try the trained models on some of my own images and wanted to make sure I prepare the data correctly.

Thanks!

demo code and joint order

Hello!
Thanks for your excellent work!
I am using the algebraic model to make predictions on some Human3.6M data. I use two videos and the corresponding two cameras' parameters, but the results don't seem correct (I prepare the data and do the preprocessing according to the code in human36m.py).
I wonder whether you will provide demo code, and what is the joint order of the output of the algebraic model?

GPU Issue

I use 4 GPUs, as shown in the screenshot below.
[GPU screenshot]

When I run eval as follows, an error occurs:

python3 train.py \
  --eval --eval_dataset val \
  --config ./experiments/human36m/eval/human36m_alg.yaml \
  --logdir ./logs

args: Namespace(config='./experiments/human36m/eval/human36m_alg.yaml', eval=True, eval_dataset='val', local_rank=None, logdir='./logs', seed=42)
Number of available GPUs: 4
Loading pretrained weights from: ./data/pretrained/human36m/pose_resnet_4.5_pixels_human36m.pth
Reiniting final layer filters: module.final_layer.weight
Reiniting final layer biases: module.final_layer.bias
Successfully loaded pretrained weights for backbone
Successfully loaded pretrained weights for whole model
Loading data...
Experiment name: [email protected]:51:22
Traceback (most recent call last):
  File "train.py", line 485, in <module>
    main(args)
  File "train.py", line 478, in main
    one_epoch(model, criterion, opt, config, val_dataloader, device, 0, n_iters_total=0, is_train=False, master=master, experiment_dir=experiment_dir, writer=writer)
  File "train.py", line 190, in one_epoch
    keypoints_3d_pred, keypoints_2d_pred, heatmaps_pred, confidences_pred = model(images_batch, proj_matricies_batch, batch)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mvn/models/triangulation.py", line 158, in forward
    heatmaps, _, alg_confidences, _ = self.backbone(images)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mvn/models/pose_resnet.py", line 299, in forward
    x = self.layer1(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mvn/models/pose_resnet.py", line 87, in forward
    out = self.bn3(out)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py", line 76, in forward
    exponential_average_factor, self.eps)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1623, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 3.52 GiB (GPU 0; 10.92 GiB total capacity; 5.65 GiB already allocated; 1.96 GiB free; 2.67 GiB cached)

My GPUs have 12 GB of memory each, and when your model tries to allocate 3.52 GB, I get an error message.
How can I solve this?

released code?

Hello, thanks for such nice work!! Do you plan to delay the code release?
