
AMASS: Archive of Motion Capture as Surface Shapes


AMASS is a large database of human motion unifying different optical marker-based motion capture datasets by representing them within a common framework and parameterization. AMASS is readily useful for animation, visualization, and generating training data for deep learning.

Here we provide tools and tutorials to use AMASS in your research projects. More specifically:

  • Following the data splits recommended by AMASS, we provide non-overlapping train/validation/test splits.
  • AMASS uses an extended version of SMPL+H with DMPLs. Here we show how to load different components and visualize a body model with AMASS data.
  • AMASS is also compatible with SMPL and SMPL-X body models. We show how to use the body data from AMASS to animate these models.

Table of Contents

  • Installation
  • Body Models
  • Tutorials
  • Citation
  • License
  • Contact
  • Contribute to AMASS

Installation

Requirements

The Python dependencies are listed in requirements.txt. Clone this repo and run the following from the root folder:

pip install -r requirements.txt
python setup.py develop

Body Models

AMASS uses the MoSh++ pipeline to fit the SMPL+H body model to optical marker-based motion capture (mocap) data. In the paper we use SMPL+H with an extended shape space, i.e. 16 betas, and 8 DMPLs. Please download the models and place them in the body_models folder of this repository after you have obtained the code from GitHub.
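For orientation, here is a minimal sketch of how such a model can be loaded with the human_body_prior package used throughout the tutorials. The file paths are illustrative, and the keyword names follow recent human_body_prior releases (older releases use bm_path/path_dmpl), so treat this as an assumption to verify against your installed version:

from human_body_prior.body_model.body_model import BodyModel

# Assumed locations inside the body_models folder described above.
bm_fname = 'body_models/smplh/neutral/model.npz'    # SMPL+H model file
dmpl_fname = 'body_models/dmpls/neutral/model.npz'  # DMPL eigenvectors

# 16 betas and 8 DMPLs, matching the settings used in the paper.
bm = BodyModel(bm_fname=bm_fname, num_betas=16, num_dmpls=8, dmpl_fname=dmpl_fname)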

Tutorials

We release tools and Jupyter notebooks to demonstrate how to use AMASS to animate the SMPL+H body model.

Furthermore, as promised in the supplementary material of the paper, we release code to produce synthetic mocap using DFaust registrations.

Please refer to the tutorials for further details.

Citation

Please cite the following paper if you use this code directly or indirectly in your research/projects:

@inproceedings{AMASS:2019,
  title = {AMASS: Archive of Motion Capture as Surface Shapes},
  author = {Mahmood, Naureen and Ghorbani, Nima and Troje, Nikolaus F. and Pons-Moll, Gerard and Black, Michael J.},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year = {2019},
  month = {Oct},
  url = {https://amass.is.tue.mpg.de}
}

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the AMASS dataset and software (the "Model & Software"). By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Contact

The code in this repository is developed by Nima Ghorbani.

If you have any questions you can contact us at [email protected].

For commercial licensing, please contact [email protected]

To find out about the latest developments, follow AMASS on Twitter.

Contribute to AMASS

The research community needs more human motion data. If you have interesting marker-based motion capture data and you are willing to share it for research purposes, we will label and clean your mocap, MoSh it for you, and add it to the AMASS dataset, naturally citing you as the original owner of the marker data. For this purpose, feel free to contact [email protected].


Issues

Getting parameters for SMPL body model

I went through the examples in the repo, but I think I am missing how to extract the SMPL pose parameters from the SMPL-H parameters in the provided npz archives.

The paper says that AMASS uses 52 joints, where 22 joints are for the body and 30 joints belong to the hands. On the other hand, SMPL has 24 joints (including the root orientation), which is corroborated by Figure 3 in the AMASS paper.

So I am not sure how to close this gap. I assumed the missing 2 joints are the ones for the hands. Should I take the first 22x3 body parameters from the 'poses' dictionary entry and somehow append 2x3 parameters derived from the last 30 hand joints? I'd be happy if anyone could shed some more light on this. Thanks!
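One common convention (an assumption widely used in the community, not an official mapping) is to keep the first 22 joints (66 axis-angle values, root included) and zero out the two coarse SMPL hand joints, since they have no exact SMPL-H counterpart. A minimal sketch:

import numpy as np

bdata = np.load('some_sequence_poses.npz')  # hypothetical AMASS file
poses = bdata['poses']                      # (N, 156): 52 joints x 3 axis-angle values

# Joints 0..21 are root + body, 22..51 are the 30 finger joints. SMPL's
# joints 22/23 are coarse hand joints absent from SMPL-H, so pad with zeros.
smpl_pose = np.concatenate(
    [poses[:, :66], np.zeros((poses.shape[0], 6), dtype=poses.dtype)],
    axis=-1)                                # (N, 72) SMPL pose vector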

Type Error

Hi,
when I try to run the tutorial, I get an error stating:
TypeError: __init__() missing 1 required positional argument: 'model_type'
Could you help me rectify this error?

AssertionError when trying to run 02-AMASS_DNN - Jupyter Notebook

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
      1 from amass.prepare_data import prepare_amass
----> 2 prepare_amass(amass_splits, amass_dir, work_dir, logger=logger)

~/.local/lib/python3.6/site-packages/amass/prepare_data.py in prepare_amass(amass_splits, amass_dir, work_dir, logger)
156 outpath = makepath(os.path.join(stageI_outdir, split_name, 'pose.pt'), isfile=True)
157 if os.path.exists(outpath): continue
--> 158 dump_amass2pytroch(datasets, amass_dir, outpath, logger=logger)
159
160 logger('Stage II: augment the data and save into h5 files to be used in a cross framework scenario.')

~/.local/lib/python3.6/site-packages/amass/prepare_data.py in dump_amass2pytroch(datasets, amass_dir, out_posepath, logger, rnd_seed, keep_rate)
97 data_gender.extend([gdr2num[str(cdata['gender'].astype(np.str))] for _ in cdata_ids])
98
---> 99 assert len(data_pose) != 0
100
101 torch.save(torch.tensor(np.asarray(data_pose, np.float32)), out_posepath)

AssertionError:

root_orientation for KIT dataset

Hello, it is mentioned in #2 that ["poses"][:, :3] gives the orientation of the root joint in the global frame. However, for KIT/425/walking_slow09_poses.npz (basic forward walking without turning) these entries correspond to the following plot.

[plot: the three root_orient components over time]

I'm struggling to make sense of these root_orient entries, as there is a ~15 deg rotational range on all 3 axes although the render depicts minimal rotation. Moreover, I can't make sense of the nonzero pitch and yaw.

Is this an error in the KIT dataset, or am I following a flawed method to extract the roll, pitch, yaw of the root joint? If so, could you please point out the correct way to extract rpy? Thanks!
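For reference, a sketch of one way to extract roll/pitch/yaw from the stored axis-angle values; the Euler convention chosen here is an assumption and strongly affects the resulting curves:

import numpy as np
from scipy.spatial.transform import Rotation as R

bdata = np.load('KIT/425/walking_slow09_poses.npz')
root_orient = bdata['poses'][:, :3]  # per-frame axis-angle rotation

# Axis-angle components are not Euler angles: plotting the three rotvec
# components directly as roll/pitch/yaw mixes the axes, which by itself can
# produce spurious swings of several degrees. Convert explicitly instead:
rpy = R.from_rotvec(root_orient).as_euler('xyz', degrees=True)  # (N, 3)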

About some pose jitter

Hello nghorbani! Thanks for your great work!

When visualizing some sequences, there is some obvious flicker. For example, in DFaust_67/50002/50002_one_leg_jump_poses.npz, the pose changes rapidly between the 329th and 330th frames. Is this normal?

about global rotation

Thanks for your data, but the data has a global rotation relative to what the video shows. How can I get the first 3 pose parameters to match the rotated body? What is the rotation matrix?
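A sketch of the usual way to apply a known global rotation, assuming R_delta is the fixed world-frame rotation between the two conventions (its actual value depends on the dataset/render setup and is not specified here):

import numpy as np
from scipy.spatial.transform import Rotation as R

bdata = np.load('some_sequence_poses.npz')     # hypothetical AMASS file
root_orient = bdata['poses'][:, :3]            # per-frame axis-angle
R_delta = R.from_euler('x', 90, degrees=True)  # illustrative value only

# Pre-multiply: the fixed rotation acts in the world frame, on top of each
# frame's own root orientation.
new_root_orient = (R_delta * R.from_rotvec(root_orient)).as_rotvec()
new_trans = R_delta.apply(bdata['trans'])      # rotate the trajectory too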

Corresponding RGB images

Thanks for the great work!

I am now trying to relate the 3D surface shape data to the video data of the original datasets. It seems hard to work out the correspondences from the AMASS file names. Is there any documentation?

Are the markers in the correct position?

I visualized the markers with the SMPL-X add-on in Blender.
The markers are not symmetric, and some markers are in weird positions.
[screenshots: asymmetric marker placements on the body mesh]
I have one more question: why are the joints in the SMPL model not symmetric?

Does all AMASS data include hand pose parameters?

AMASS uses an extended version of SMPL+H with DMPLs. Here we show how to load different components and visualize a body model with AMASS data.

Does all AMASS data include hand pose parameters?

Frame rate inconsistency on SMPL and SMPL-X format data

Hi, @nghorbani

Thanks for providing the AMASS dataset to the community. I downloaded the SMPL+H G and SMPL-X G archives from the official website. When I checked the SMPL-H data against the SMPL-X data, I found that the frame rates of the two formats are not consistent. Here are some cases:

In the case TCD_handMocap/ExperimentDatabase/FingerTP_poses.npz, the frame rate is 150, and the frame rate in TCD_handMocap/ExperimentDatabase/FingerTP_stageii.npz is 120.

In the case TotalCapture/s1/walking1_poses.npz, the frame rate is 60, and the frame rate in TotalCapture/s1/walking1_stageii.npz is 120.

I found about 5,000 such inconsistencies in the data.
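A small helper to reproduce such a scan; the two key names are an assumption based on the two release formats, so verify them against your files:

import numpy as np

def get_fps(npz_path):
    data = np.load(npz_path)
    # SMPL-H files typically store 'mocap_framerate', while the SMPL-X
    # stageii files typically store 'mocap_frame_rate' (assumed key names).
    for key in ('mocap_framerate', 'mocap_frame_rate'):
        if key in data:
            return float(data[key])
    return None

fps_h = get_fps('TotalCapture/s1/walking1_poses.npz')
fps_x = get_fps('TotalCapture/s1/walking1_stageii.npz')
if fps_h != fps_x:
    print('frame-rate mismatch:', fps_h, 'vs', fps_x)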

Failing to download the dataset from https://amass.is.tue.mpg.de/download.php

Thanks for your nice work, but it seems that your download link or server is broken; I cannot connect to the server.

I tried to wget the link, and here is the detailed error log:
wget https://download.is.tue.mpg.de/download.php\?domain\=amass\&resume\=1\&sfile\=amass_per_dataset/smplx/gender_specific/mosh_results/ACCAD.tar.bz2

--2023-12-15 15:15:54-- https://download.is.tue.mpg.de/download.php?domain=amass&resume=1&sfile=amass_per_dataset/smplx/gender_specific/mosh_results/ACCAD.tar.bz2
Connecting to 127.0.0.1:7890... connected.
Unable to establish SSL connection.

MPI Limits on notebook 02-AMASS_DNN

Hi, thanks for providing this repository!
I can't find the MPI_Limits dataset on the AMASS download page. Do you know if it goes by another name there, or has it been taken down?

Specifically, it is used in the example notebook 02-AMASS_DNN in code block 5:
amass_splits = {
    'vald': ['HumanEva', 'MPI_HDM05', 'SFU', 'MPI_mosh'],
    'test': ['Transitions_mocap', 'SSM_synced'],
    'train': ['CMU', 'MPI_Limits', 'TotalCapture', 'Eyes_Japan_Dataset', 'KIT',
              'BML', 'EKUT', 'TCD_handMocap', 'ACCAD']
}
amass_splits['train'] = list(set(amass_splits['train']).difference(set(amass_splits['test'] + amass_splits['vald'])))

AMASS Coordinates

Hi,

How are the coordinate axes of the dataset/bdata['poses'] oriented? For example, does AMASS use a right-handed, Y-up coordinate system?

Thank you,
Fabian

How to use DMPL vectors

Hi,
Thanks for the amazing dataset and easy-to-use library. When I animate bodies with AMASS motions, I use the same betas across all frames, which results in a "rigid" motion of the body. I understand this is because I am not using the DMPL vectors (which add soft-tissue dynamics on top of betas). However, I can't find code snippets where you use those vectors. Do you have an idea how to incorporate them alongside the betas? betas have size 16 and dmpls have size 8, so I have no idea how to combine them.
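For what it is worth, a sketch of how the dmpls stored in the npz files can be fed to the body model: they are not appended to betas but passed as a separate per-frame argument. Paths and keyword names are assumptions in the spirit of the tutorials:

import numpy as np
import torch
from human_body_prior.body_model.body_model import BodyModel

bm = BodyModel(bm_fname='body_models/smplh/neutral/model.npz',
               num_betas=16, num_dmpls=8,
               dmpl_fname='body_models/dmpls/neutral/model.npz')

bdata = np.load('some_sequence_poses.npz')  # hypothetical AMASS file
T = bdata['poses'].shape[0]
body = bm(pose_body=torch.Tensor(bdata['poses'][:, 3:66]),
          betas=torch.Tensor(np.tile(bdata['betas'][:16], (T, 1))),
          dmpls=torch.Tensor(bdata['dmpls']))  # (T, 8) soft-tissue coefficients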

TotalCapture: trajectory in original VS trajectory in AMASS

Hi there,

please help me understand which "scale factor" or something else I am missing. Here is my experiment:

  1. load the ./amass/TotalCapture/s1/acting1_poses.npz

  2. iterate through the first 250 frames

    • put the SMPL model at the global location and orientation, in the configuration specified by the current frame.
    • take the 3D coordinates of SMPL's vertex 1962, V_{amass}^{1962} (approximately on the left wrist), multiply them by the transformation matrix T_{total2amass}, and save them into V_{total}^{1962}:
      • V_{total}^{1962} = T_{total2amass} V_{amass}^{1962}
      • T_{total2amass} = \begin{bmatrix} 0.0 & 1.0 & 0.0 & 0.0 \\ 1.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & -1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{bmatrix}
    • collect all V_{total}^{1962} into left_wrist_amass_trj
  3. load the acting1_BlenderZXY_YmZ.bvh ground truth data from the TotalCapture

    • while loading, convert the values from centimeters into meters
  4. iterate through the first 250 frames

    • save the 3D coordinates of the LeftHand joint into left_wrist_total_trj.
  5. plot left_wrist_amass_trj in blue and left_wrist_total_trj in orange in the coordinate system of TotalCapture, and obtain this plot:
    [plot: blue AMASS wrist trajectory vs. orange BVH wrist trajectory]

The two trajectories describe the first 250 points through which the left wrist passes relative to the global coordinate system of TotalCapture. They seem to have the same curvature, but the orange trajectory (loaded from the *.bvh file) seems to be downscaled with respect to the blue trajectory (transformed from AMASS).

I overlooked the fact that the ground truth from the *.bvh files is in inches. After converting inches to centimeters and then to meters, the plots of left_wrist_amass_trj and left_wrist_total_trj look more similar, though they still don't overlap.

[plot: the two trajectories after unit conversion, closer but still offset]

Could you please help me figure out why they don't overlap closely? I think the offset between them is still too big.

Many thanks in advance!

About 01-AMASS_Visualization

Dear authors,

thanks for releasing this awesome code for using the AMASS dataset!

I found a very small bug: in 01-AMASS_Visualization.ipynb, the function vis_body_joints does not work with the very recent body_visualizer package.

joints_mesh = points_to_spheres(joints, vc = colors['red'], radius=0.005)

should be changed into

joints_mesh = points_to_spheres(joints, point_color = colors['red'], radius=0.005)

I have also seen this kind of naming issue in body_visualizer when using human_body_prior's (VPoser) ik_engine.py and ik_example_joints.py.

Thanks a lot for this repository and have a nice day :)

not able to run the notebooks

Hi, I have tried to follow the instructions to install the amass library, but unfortunately I am not able to run the notebooks.
First, the README says:
"AMASS uses the MoSh++ pipeline to fit the SMPL+H body model to optical marker-based motion capture (mocap) data. In the paper we use SMPL+H with an extended shape space, i.e. 16 betas, and 8 DMPLs. Please download the models and place them in the body_models folder of this repository after you have obtained the code from GitHub."
but the cloned repository does not contain a body_models folder.
This is the folder structure:
.
├── LICENSE.txt
├── README.md
├── notebooks
│   ├── 01-AMASS_Visualization.ipynb
│   ├── 02-AMASS_DNN.ipynb
│   ├── 03-AMASS_Visualization_Advanced.ipynb
│   ├── 04-AMASS_DMPL.ipynb
│   ├── README.md
│   └── __init__.py
├── requirements.txt
├── setup.py
├── src
│   ├── __init__.py
│   └── amass
│       ├── __init__.py
│       ├── data
│       │   ├── __init__.py
│       │   ├── dfaust_synthetic_mocap.py
│       │   ├── prepare_data.py
│       │   └── ssm_all_marker_placements.json
│       └── tools
│           ├── __init__.py
│           ├── make_teaser_image.py
│           ├── notebook_tools.py
│           └── teaser.gif
└── support_data
    └── github_data
        ├── amass_sample.npz
        ├── datasets_preview.png
        ├── dmpl_sample.npz
        └── teaser.gif

SMPL-H vs SMPL Shape Parameters

Hello, I've read that the SMPL-H and SMPL shape parameters are equivalent; I've also seen that in both cases the betas come in sizes of 10 or 16. Since the betas represent a PCA space, would it be correct to take the first 10 elements from AMASS datasets that provide 16 beta parameters when using a SMPL model with just 10?
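If that reading is correct, the slice itself is trivial; the assumption worth confirming is that the first 10 components of the 16-dimensional shape space coincide with SMPL's 10-beta basis:

import numpy as np

bdata = np.load('some_sequence_poses.npz')  # hypothetical AMASS file
betas10 = bdata['betas'][:10]               # first 10 of 16 shape coefficients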

using AMASS with STAR

Is AMASS compatible with STAR? If not, is there any way to convert it to make it compatible?

Cancel Z-axis rotation

Hi,

First, thanks for the amazing dataset.

I would like to create a dataset from AMASS where all the meshes face the same direction: is there an easy way to do this? I think I can do it with the root_orient attribute, but I am not really sure if it is possible. A sketch of one candidate approach follows.
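One possible sketch, assuming z is the up-axis of the data: decompose each root orientation so that the heading rotation is factored out last, and zero that component. Euler decompositions can misbehave near gimbal lock, so verify on your sequences:

import numpy as np
from scipy.spatial.transform import Rotation as R

bdata = np.load('some_sequence_poses.npz')  # hypothetical AMASS file
root_orient = bdata['poses'][:, :3]         # per-frame axis-angle

# Extrinsic 'yxz' composes as R = Rz(az) @ Rx(ax) @ Ry(ay), i.e. the z
# (heading) rotation is applied last in the world frame.
eul = R.from_rotvec(root_orient).as_euler('yxz')
eul[:, 2] = 0.0                             # cancel the heading about z
no_heading_root_orient = R.from_euler('yxz', eul).as_rotvec()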

Thanks

Meaning of global root orientation and trans

Hi there,

I would like to compute the trajectory of a vertex belonging to the SMPL model relative to the global coordinate system. More specifically, I would like to double-check my understanding of the first 3 values of the poses entry (global root orientation) and of the trans entry. These are my questions:

  1. How is the global coordinate system oriented? Does it keep the same orientation across all data sets?
  2. What does global root orientation describe? The orientation of SMPL's root joint relative to the global coordinate system?
  3. What does trans describe? The translation of SMPL's root joint relative to the global coordinate system?
  4. Are the coordinates of a vertex belonging to the SMPL model relative to SMPL's root joint?

Many thanks in advance!

ACCAD MoCap "Frame Time" vs AMASS_ACCAD "mocap_framerate"

Hi there,

I am looking at the files:

  • ./Male2_bvh/Male2_B15_WalkTurnAround.bvh [1] from ACCAD dataset and
  • ./ACCAD/Male2Walking_c3d/B15 - Walk turn around_poses.npz [2] from AMASS_ACCAD.

The Frame Time of [1] is 0.033 and the mocap_framerate of [2] is 120.0.

  1. What is the meaning of the mocap_framerate entry of [2]?
  2. How do you compute / choose the mocap_framerate entry of [2]?
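For reference on the arithmetic, assuming the standard BVH header semantics (Frame Time is the per-frame interval in seconds):

frame_time = 0.033          # from the BVH header of [1]
bvh_fps = 1.0 / frame_time  # ~30 fps, vs. mocap_framerate = 120.0 in [2]
# One (unconfirmed) interpretation: [2] reports the raw capture rate of the
# mocap system, while this particular BVH was exported at a quarter of it.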

Thanks a lot!

kind regards,

How can I get the X-Y coordinates of the Jtr joints in the image?

Dear authors,

thanks for releasing this awesome code for using AMASS dataset!

I'm struggling with how to get the X-Y coordinates of the Jtr joints in the image. Jtr has the same type as the mesh vertices, and in the tutorial you visualize the joints by plotting spheres. Is there a way to get their X-Y coordinates after projecting them onto the image?
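AMASS itself ships no camera calibration, so any projection needs intrinsics/extrinsics from your own renderer or dataset; a generic pinhole sketch in which all names and parameters are hypothetical:

import numpy as np

def project_points(points_3d, K, R=np.eye(3), t=np.zeros(3)):
    # K: 3x3 camera intrinsics; R, t: world-to-camera rotation/translation.
    cam = points_3d @ R.T + t      # world -> camera coordinates
    uv = cam @ K.T                 # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]  # perspective divide -> (N, 2) pixels

# e.g. with the body model output (c2c as in the tutorials):
# joints_2d = project_points(c2c(body.Jtr[0]), K)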

Doesn't include the shape parameters?

I have downloaded the whole project and it seems that it does not include sampled shape parameters (betas). Would it be possible for me to get betas?
Thanks!

windows ImportError: Could not find module 'EGL'

raise ImportError("Unable to load EGL library", *err.args)
ImportError: ('Unable to load EGL library', "Could not find module 'EGL' (or one of its dependencies). Try using the full path with constructor syntax.", 'EGL', None)

Usage of "root_orient" and "trans"

Hi there,

I am trying to use the root_orient [1] and trans [2] fields to position and orient the SMPL model relative to the global reference frame.

When I instantiate the SMPL model with the following code:

# pose_id goes along frames stored in "./ACCAD/Male2Walking_c3d/B17 -  Walk to hop to walk_poses.npz"

root_orient = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, :3]).to(computing_device) # controls the global root orientation
pose_body =   torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 3:66]).to(computing_device) # controls the body
pose_hand =   torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 66:]).to(computing_device) # controls the finger articulation
betas =       torch.Tensor(smpl_poses['betas'][:10][np.newaxis]).to(computing_device) # controls the body shape
dmpls =       torch.Tensor(smpl_poses['dmpls'][pose_id:pose_id+1]).to(computing_device) # controls soft tissue dynamics
root_trans =  torch.Tensor(smpl_poses['trans'][pose_id]).to(computing_device) # controls the global root translation

smpl_in_pose_id = smpl_model(
    pose_body=pose_body, pose_hand=pose_hand,
    betas=betas, dmpls=dmpls,
    root_trans=root_trans, root_orient=root_orient)

and plot the trajectories of SMPL vertices with indexes 412 (head) and 3021 (pelvis), then I get this plot:
[plot: head and pelvis trajectories, first approach]

When I instantiate the SMPL model with the following code:

# pose_id goes along frames stored in "./ACCAD/Male2Walking_c3d/B17 -  Walk to hop to walk_poses.npz"

root_orient = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, :3]).to(computing_device) # controls the global root orientation
pose_body =   torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 3:66]).to(computing_device) # controls the body
pose_hand =   torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 66:]).to(computing_device) # controls the finger articulation
betas =       torch.Tensor(smpl_poses['betas'][:10][np.newaxis]).to(computing_device) # controls the body shape
dmpls =       torch.Tensor(smpl_poses['dmpls'][pose_id:pose_id+1]).to(computing_device) # controls soft tissue dynamics
root_trans =  torch.Tensor(smpl_poses['trans'][pose_id]).to(computing_device) # controls the global root translation

smpl_in_pose_id = smpl_model(
    pose_body=pose_body, betas=betas)

then manually assemble the transformation matrix from the root_trans vector and the root_orient rotation matrix, apply it to the same SMPL vertices with indexes 412 (head) and 3021 (pelvis), and plot their trajectories, then I get this plot:
[plot: head and pelvis trajectories, second approach, visibly offset from the first]

Could you please help me to correctly use the values stored in:

  • smpl_poses['poses'][pose_id:pose_id+1, :3] and
  • smpl_poses['trans'][pose_id] ?

Thank you very much!

kind regards,

[1] root_orient
[2] trans
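One plausible explanation, sketched under the standard SMPL convention (worth verifying against the body model implementation): the global orientation rotates the body about the root joint location J0, which depends on betas, not about the origin, and trans is added afterwards. Assembling a plain [R | t] matrix and applying it about the origin would then produce exactly this kind of offset:

import numpy as np
from scipy.spatial.transform import Rotation as R

Rmat = R.from_rotvec(smpl_poses['poses'][pose_id, :3]).as_matrix()
t = smpl_poses['trans'][pose_id]

# Root joint location for these betas, with identity root_orient/trans.
J0 = c2c(smpl_model(betas=betas).Jtr[0, 0])
verts_local = c2c(smpl_in_pose_id.v[0])  # posed body, no root_orient/trans

# Rotate about J0, then translate -- not v' = R v + t about the origin.
verts_world = (verts_local - J0) @ Rmat.T + J0 + t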

Are there real hand parameters in AMASS dataset?

I have downloaded the SMPL-X body data from the AMASS website. However, I have found that all the left and right hand poses are equal to the mean pose of the MANO model. I have also checked BMLrub, CMU and ACCAD in the SMPL-H body data; the hand poses are the same there. I want to know which sequences in the dataset have real hand pose parameters. Thank you very much!
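A heuristic sketch for finding sequences with captured finger motion, assuming the hand dofs are poses[:, 66:] in the SMPL-H files; a hand pose that never changes over a sequence suggests a filled-in mean/flat hand rather than real capture:

import numpy as np

def has_animated_hands(npz_path, eps=1e-6):
    poses = np.load(npz_path)['poses']
    hands = poses[:, 66:]  # 30 finger joints, axis-angle
    # True if any hand dof deviates from constant over the sequence.
    return bool(np.ptp(hands, axis=0).max() > eps)

# Datasets captured with finger markers (e.g. TCD_handMocap) should pass.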

Camera movements with the rendering tool

Hi,
I am trying to visualize an AMASS example using its body_pose, root_orient and translation parameters. It looks like the video that I get from the rendering tool has some camera movement: it is not just the body that is moving in space, but also the camera. I feed all three parameters as items in the body_params dictionary.
Is it normal to see camera movement here?

def render_smpl_params(bm, body_parms, trans=None, rot_body=None, bg_color='white', body_color='neutral'):
  '''
  :param bm: pytorch body model with batch_size 1
  :param pose_body: Nx21x3
  :param trans: Nx3
  :param betas: Nxnum_betas
  :return: N x 400 x 400 x 3
  '''

  imw, imh = 400, 400
  base_trans = [0, 0.5, 3.0]

  mv = MeshViewer(width=imw, height=imh, use_offscreen=True)
  mv.set_cam_trans(base_trans)
  mv.set_background_color(color=bg_color)
  faces = c2c(bm.f)

  v = c2c(bm(**body_parms).v)
  T, num_verts = v.shape[:-1]

  images = []
  for fIdx in range(T):
      verts = v[fIdx]
      if rot_body is not None:
          verts = rotateXYZ(verts, rot_body)
      color_type = body_color

      color = np.ones_like(verts) * np.array(colors[color_type])[None, :]

      mesh = trimesh.base.Trimesh(verts, faces, vertex_colors=num_verts*colors['grey'])
      mv.set_meshes([mesh], 'static')

      rendered = mv.render()
      images.append(rendered)
  return np.array(images).reshape(T, imw, imh, 3)

Missing subjects in the CMU Mocap dataset

I read your paper, and it says that you collected motion from 96 subjects of the CMU MoCap dataset. However, the original CMU MoCap has 104 subjects. Is there a reason you decided not to include the remaining subjects in your dataset?

Thanks.

Dataset generation

Hello,
I have followed the AMASS_DNN tutorial.
I have downloaded all npz files from your website and tried both splits you suggested:
amass_splits = {'vald': ['SFU'], 'test': ['SSM_synced'], 'train': ['MPI_Limits']}

vs

amass_splits = {
    'vald': ['HumanEva', 'MPI_HDM05', 'SFU', 'MPI_mosh'],
    'test': ['Transitions_mocap', 'SSM_synced'],
    'train': ['CMU', 'MPI_Limits', 'TotalCapture', 'Eyes_Japan_Dataset', 'KIT',
              'BML', 'EKUT', 'TCD_handMocap', 'ACCAD']
}

However, for both splits I get a dataset of the same size: only 1182 samples for training, 854 for validation, and 56 for testing. I was under the impression that I could generate a much larger dataset of 3D point clouds. Am I missing anything?
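One guess worth checking, based on the dump_amass2pytroch signature visible in an earlier traceback: its keep_rate argument suggests frames are randomly subsampled at stage I, which would explain the small counts. The argument names below follow that traceback and are otherwise assumptions:

from amass.prepare_data import dump_amass2pytroch  # the 'pytroch' typo is in the source

# Hypothetical direct call with a higher keep rate than the default.
dump_amass2pytroch(datasets=['MPI_Limits'], amass_dir=amass_dir,
                   out_posepath='stageI/train/pose.pt', logger=logger,
                   keep_rate=0.5)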

convert skeleton to AMASS

Hi, how can I convert motions represented as a skeleton (e.g. recorded with a Kinect) to AMASS? If I understand correctly, one can use MoSh to convert from mocap markers, but can it also be used for skeletal representations? Thanks in advance :)

Hand joints number of SMPL-H

I would like to ask why the joints rendered under SMPL-H mode (pose_body + pose_hand) number 52, consisting of 20 body joints and 16 for each hand, while a MANO hand should have 21 joints. I also notice the 5 missing joints are the fingertips. Is there any misunderstanding? Thank you very much.
