
meshtalk's Introduction

meshtalk

This repository contains code to run MeshTalk for face animation from audio. If you use MeshTalk, please cite

@inproceedings{richard2021meshtalk,
    author    = {Richard, Alexander and Zollh\"ofer, Michael and Wen, Yandong and de la Torre, Fernando and Sheikh, Yaser},
    title     = {MeshTalk: 3D Face Animation From Speech Using Cross-Modality Disentanglement},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1173-1182}
}

Supplemental Material

Watch the video

Running MeshTalk

Dependencies

ffmpeg
numpy
torch         (tested with v1.10.0)
pytorch3d     (tested with v0.4.0)
torchaudio    (tested with v0.10.0)

Animating a Face Mesh from Audio

Download the pretrained models and unzip them. Make sure your python path contains the root directory (export PYTHONPATH=<your_meshtalk_root_directory>).

Then, run

python animate_face.py --model_dir <your_pretrained_model_dir> --audio_file <your_speech_snippet.wav> --output <your_output_file.mp4>

See a description of command line arguments via python animate_face.py --help. We provide a neutral face template mesh in assets/face_template.obj. Note that the rendered results look slightly different than in the paper and supplemental video because we use a different (open source) rendering engine in this repository.
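For example, the provided template can be passed explicitly via the --face_template argument (any replacement mesh must share the template's topology, as discussed in the issues below):

python animate_face.py --model_dir <your_pretrained_model_dir> --audio_file <your_speech_snippet.wav> --output <your_output_file.mp4> --face_template assets/face_template.obj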

Training your own MeshTalk version

All training code is available in the training directory. The training follows a two-step recipe: first, learn the latent expression code by running train_step1.py; second, learn the autoregressive model by running train_step2.py.
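For example (a sketch that assumes the scripts run with their default arguments; check each script's command line options for your setup):

export PYTHONPATH=<your_meshtalk_root_directory>
python training/train_step1.py    # step 1: learn the latent expression codes
python training/train_step2.py    # step 2: learn the autoregressive model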

Note that we only provide a dataloader that produces dummy data which is always zero. For your training data, implement your own data reader that produces the same assets (template mesh, mesh sequence, audio sequence) as the dummy data reader.
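As a rough sketch only (the field names, shapes, and the clips argument below are assumptions for illustration; match whatever the dummy data reader in the training directory actually returns), a custom reader could look like this:

import torch as th
from torch.utils.data import Dataset

class MyFaceDataset(Dataset):
    """Hypothetical reader returning the same kinds of assets as the dummy loader:
    a neutral template mesh, a tracked mesh sequence, and the aligned audio."""

    def __init__(self, clips):
        # `clips`: an illustrative list of dicts holding precomputed numpy arrays
        self.clips = clips

    def __len__(self):
        return len(self.clips)

    def __getitem__(self, idx):
        clip = self.clips[idx]
        return {
            "template": th.from_numpy(clip["template"]).float(),  # V x 3 neutral mesh
            "geom": th.from_numpy(clip["geom"]).float(),           # T x V x 3 tracked mesh sequence
            "audio": th.from_numpy(clip["audio"]).float(),         # 16 kHz waveform covering the T frames
        }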

One potential dataset to use for MeshTalk is the Multiface dataset which contains a subset (13 subjects) of the data used in the paper. The dataset includes tracked meshes and audio files.

Note that the geometries in multiface have a slightly different topology than in meshtalk. To convert geometries from multiface to meshtalk, run

python utils/multiface2meshtalk.py <multiface_mesh.bin> <output.obj>

on the .bin files containing the vertex positions of the multiface meshes. Note that the input must be the .bin files from the tracked_mesh directories in multiface, not the .obj files. The output is a .obj file in the same format as assets/face_template.obj.
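For example, to convert every .bin file in a tracked_mesh directory in one pass (the directory path and the output naming are placeholders):

for f in <your_tracked_mesh_dir>/*.bin; do
    python utils/multiface2meshtalk.py "$f" "${f%.bin}.obj"
done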

License

The code and dataset are released under the CC-NC 4.0 International license.

meshtalk's People

Contributors

alexanderrichard


meshtalk's Issues

new obj

I have a new obj file with 6172 points, derived from the default obj file.
Q1: What is the meaning of the files face_mean and face_std and the two smoothing txt files? Are they the average face and the deviation from it?
Q2: How do I make the face_mean, face_std, and smoothing txt files?
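If face_mean.npy and face_std.npy are indeed the per-vertex mean and standard deviation of the tracked training meshes (an assumption based only on the file names, not confirmed by the authors), they could be recomputed for a new topology roughly like this:

import numpy as np

# Assumption: one array stacking all tracked frames of the training data,
# shape (num_frames, num_vertices, 3); the file name is hypothetical.
verts = np.load("my_tracked_frames.npy")
np.save("face_mean.npy", verts.mean(axis=0))
np.save("face_std.npy", verts.std(axis=0))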

Different topology from multiface dataset?

I find that the number of vertices in your provided template object differs from what I downloaded from the multiface dataset. In particular, the details of the mouth are quite different. Could you please share more information about the experiments?

Train from scratch

Hi @alexanderrichard
Thank you for sharing your valuable work.
I have 2 questions:
1. Is the dataset you used similar to the VOCA dataset?
2. Can I train your model from scratch using "the code you provided" and the "VOCA dataset"?
Thank you so much.

Unable to open output mp4 file

Hi folks -- great work on this project!

I ran the following command:

python ./animate_face.py --model_dir ./pretrained_models/ --audio_file ./input_data/clever.wav --output ./output/clever.mp4

And, the input clever.wav is from here.

The animate_face.py script finishes and produces an output file. But when I open the output clever.mp4 in VLC, it fails to play.

Here's the output.mp4 file: https://user-images.githubusercontent.com/2020010/199366864-b86000c7-bcbf-47c0-896f-6df432f881c8.mp4

The file plays in the browser, but not in VLC on my Ubuntu machine. I was just wondering if there is any codec or add-on needed to play these files in VLC or other desktop video players.
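A generic workaround (not specific to this project) is to re-encode the file with ffmpeg into an H.264 / yuv420p stream, which most desktop players accept:

ffmpeg -i clever.mp4 -c:v libx264 -pix_fmt yuv420p -c:a copy clever_vlc.mp4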

How to train it on VOCA

Thank you for your awesome work!
I was wondering how to train it on VOCA. How do I get the mask weights for the upper face and the lower face?

Export to fbx / csv animation

As the title suggests, what would be the approach to converting the tensor geometry to an animation file instead of rendering a video of the animation?

Gumbel-Softmax trick used in training stage?

Hi, I've noticed that the MultiModelEncoder in the code outputs log-probabilities, derived by log_softmax, instead of gumbel_softmax, which is mentioned in the paper.

Meanwhile, the Gumbel-Softmax trick is used in the context_model, so I wonder why the trick doesn't appear in the encoder part. Considering only the inference code is open-sourced, does it appear in the training code? Or is there something subtle that I missed?
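For reference, a minimal PyTorch illustration of the two operations being compared (generic library usage, not code from this repository):

import torch as th
import torch.nn.functional as F

logits = th.randn(1, 16, 128, requires_grad=True)       # e.g. B x heads x classes
logprobs = F.log_softmax(logits, dim=-1)                 # deterministic log-probabilities
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)   # straight-through Gumbel-Softmax sample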

cpu render speed is too slow

Rendering a simple voice clip has been stuck here for about an hour:

[screenshot]

There is no further response. I am running on CPU; is rendering really that slow on CPU with pytorch3d?

Question about train/finetune context model

  • I am trying to finetune the context model with the multiface dataset.

  • I found that the original context model can close the eyes, but after finetuning the eyes only close halfway.

  • Below is how I train the context model. Following your suggestion, I try to train the model in an autoregressive way with teacher forcing. Is there anything wrong with my implementation?

    def train_context(self,audio_code:th.Tensor, expression_one_hot_label:th.Tensor):
        '''
        :param audio_code: B x T x audio_dim Tensor containing the audio embedding
               expression_one_hot_label:  B x T x heads x classes Tensor generate from encoder 
        '''
        assert audio_code.shape[0]==1
        T = audio_code.shape[1]
        teacher_forcing_label = th.zeros(1, T, self.heads, self.classes, device=audio_code.device)
        teacher_forcing_label[:,:,1:self.heads] = expression_one_hot_label[:,:,:self.heads-1]
        self._reset()
        logprob_list = []

        for t in range(T):
            start, end = max(0, t - self.receptive_field()), t+1
            context = teacher_forcing_label[:, start:end, :, :]
            audio = audio_code[:, start:end, :]
            logprob = self.forward(context, audio)['logprobs'][:,-1]
            logprob_list.append(logprob)

        logprobs = th.cat(logprob_list, dim=1).reshape(-1, self.classes)
        label = expression_one_hot_label.argmax(dim=1).reshape(-1)
        loss = th.nn.functional.nll_loss(logprobs, label)
        return loss

Audio features are different from your paper statement

Hi, I found that the audio preprocessing uses a simple transformation in your code (load_audio & audio_chunking).
But this differs from the statement in your paper, which says: "Our audio data is recorded at 16kHz.
For each tracked mesh, we compute the Mel spectrogram of a 600ms audio snippet starting 500ms before and ending 100ms after the respective visual frame. We extract 80-dimensional Mel spectral features every 10ms, using 1,024 frequency bins and a window size of 800 for the underlying Fourier transform."

I didn't find any Mel spectrogram calculation in your code. Why are they different?
Is the current version better than Mel spectral features?
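For reference, the Mel parameters quoted from the paper above would map to torchaudio roughly as follows (a sketch for illustration; as noted, the released code does not use this):

import torchaudio

# 80 Mel bins every 10 ms (hop of 160 samples at 16 kHz), 1,024-point FFT, window size 800
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000,
    n_fft=1024,
    win_length=800,
    hop_length=160,
    n_mels=80,
)
waveform, sample_rate = torchaudio.load("speech.wav")  # hypothetical 16 kHz mono file
features = mel(waveform)                               # shape: (channels, 80, num_frames)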

Error when running

An error is raised when executing animate_face.py; my Python version is 3.7.
Traceback (most recent call last):
File "animate_face.py", line 12, in
from utils.renderer import Renderer
File "/Users/yayunhe/project/Face/meshtalk/utils/renderer.py", line 11, in
from pytorch3d.io import load_obj
File "/Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/io/init.py", line 4, in
from .obj_io import load_obj, load_objs_as_meshes, save_obj
File "/Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/io/obj_io.py", line 16, in
from pytorch3d.renderer import TexturesAtlas, TexturesUV
File "/Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/renderer/init.py", line 3, in
from .blending import (
File "/Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/renderer/blending.py", line 9, in
from pytorch3d import _C
ImportError: dlopen(/Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-darwin.so, 2): Symbol not found: __Z16THPVariable_WrapN2at6TensorE
Referenced from: /Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-darwin.so
Expected in: /Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/torch/lib/libtorch_python.dylib
in /Users/yayunhe/anaconda3/envs/meshtalk/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-darwin.so

Below is the version information of my packages (Package / Version):


certifi 2021.10.8
ffmpeg 1.4
fvcore 0.1.5.post20211023
iopath 0.1.9
numpy 1.21.4
Pillow 8.4.0
pip 21.2.2
portalocker 2.3.2
pytorch3d 0.4.0
PyYAML 6.0
setuptools 58.0.4
tabulate 0.8.9
termcolor 1.1.0
torch 1.10.0
torchaudio 0.10.0
tqdm 4.62.3
typing-extensions 3.10.0.2
wheel 0.37.0
yacs 0.1.8

Using GLB or FBX format

Thank you for such a good technology!

Frankly speaking, I have two questions connected with the formats mentioned:

  1. Is it possible to have an animated model as the output? Preferably GLB or FBX, because textures can be embedded, so I could use it in AR, VR, etc.
  2. Even if not, is it possible to input GLB, FBX (or OBJ with a texture attached) at least? The reason is to get colorized output (even if MP4), not just the blue head.

Thank you for your answer, and sorry if this is mentioned somewhere and I couldn't find it!

Evaluation with different topology

I wonder how the comparison was made in your paper. For example, how to compare the lip error with VOCA when the output topologies are different?

how to use my obj file

Thanks for your great work. I have a question: if I want to use my own obj file, how can I adjust the code?

Build pytorch3d 0.4.0 failed with torch1.10

I tried to build pytorch3d 0.4.0 from source with torch 1.10, the same versions as in the readme, but it always fails.
The log is below:

/home/local/gcc-5.3.0/bin/gcc -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -DTHRUST_IGNORE_CUB_VERSION_CHECK -I/home/Projects/github_projects/pytorch3d/pytorch3d/csrc -I/home/software_packages/cub-1.10.0 -I/home/anaconda3/envs/torch1.10/lib/python3.7/site-packages/torch/include -I/home/anaconda3/envs/torch1.10/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/anaconda3/envs/torch1.10/lib/python3.7/site-packages/torch/include/TH -I/home/anaconda3/envs/torch1.10/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/anaconda3/envs/torch1.10/include/python3.7m -c /home/Projects/github_projects/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes_cpu.cpp -o build/temp.linux-x86_64-3.7/home/Projects/github_projects/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes_cpu.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  /home/Projects/github_projects/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes_cpu.cpp: In function ‘std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor> RasterizeMeshesNaiveCpu(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, std::tuple<int, int>, float, int, bool, bool, bool)’:
  /home/Projects/github_projects/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes_cpu.cpp:294:28: error: converting to ‘std::tuple<float, int, float, float, float, float>’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {const float&, int&, const float&, const float&, const float&, const float&}; <template-parameter-2-2> = void; _Elements = {float, int, float, float, float, float}]’
                 q[idx_top_k] = {
                              ^
  error: command '/home/local/gcc-5.3.0/bin/gcc' failed with exit status 1
  Building wheel for pytorch3d (setup.py) ... error
  ERROR: Failed building wheel for pytorch3d

Does pytorch3d 0.4.0 really support torch 1.10? I see the requirement is less than 1.7.1 in the pytorch3d 0.4.0 url and less than 1.9.1 in the pytorch3d main url.

My environment:

  • centos 7
  • gcc 5.3.0
  • cuda 10.2
  • cub 1.10
  • python 3.7 (conda environment)
  • torch1.10
  • pytorch3d 0.4.0

Context model - how to train?

Hello,

How to train the autoregressive model for inference? In the forward function, what would be the first expression_one_hot tensor? I understand subsequent inputs would be the labels output at the previous timestep.

def forward(self, expression_one_hot: th.Tensor, audio_code: th.Tensor):

    x = self.embedding(expression_one_hot)

    for layer in self.context_layers:
        x = layer(x, audio_code)
        x = F.leaky_relu(x, 0.2)

    logits = self.logits(x)
    logprobs = F.log_softmax(logits, dim=-1)
    probs = F.softmax(logprobs, dim=-1)
    labels = th.argmax(logprobs, dim=-1)

    return {"logprobs": logprobs, "probs": probs, "labels": labels}
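For what it's worth, a generic greedy rollout for a model with this forward signature could look like the sketch below. It assumes the outputs have shape B x T x heads x classes and that an all-zero one-hot is an acceptable "empty" context for the first step; neither assumption is confirmed by the released code.

import torch as th
import torch.nn.functional as F

def rollout(model, audio_code, heads=16, classes=128):
    # Hypothetical greedy autoregressive sampling: predict head h at step t,
    # then feed the prediction back into the context before the next prediction.
    B, T, _ = audio_code.shape
    one_hot = th.zeros(B, T, heads, classes, device=audio_code.device)
    for t in range(T):
        for h in range(heads):
            out = model(one_hot[:, :t + 1], audio_code[:, :t + 1])
            label = out["labels"][:, -1, h]                        # class index for head h at step t
            one_hot[:, t, h] = F.one_hot(label, classes).float()
    return one_hot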

Thanks

Do you have any uv texture mapping files?

Hi. I am very impressed with your wonderful research. Thank you so much for sharing the great results. I want to render a texture to the output generated by this model. Can I get a uv texture mapping file that matches the output?

Custom .obj file

I have read through all the previous issues and understand that the topology of any custom OBJ file needs to match "face_template.obj". Has anyone found a solution to this?

I notice that the vertices in "face_template.obj" are not in any specific order (to the human eye); is there any pattern that would be helpful to be aware of?

Are the forehead, mouth, neck, and eye masks relevant to this task?

Any suggestions or ideas are helpful! Thank you

Trying meshtalk with obj created on my own

When I tried using the obj file I created, it raises an error that says "RuntimeError: shape '[1, 1, 6172, 3]' is invalid for input of size 54072". How can I solve this error?

How to use a different obj model?

Incredible work! Thanks!
I have a question about using a different obj model.
I tried to use an obj model file created by DECA, but I get an error:

(meshtalk) ubuntu@ubuntu-X640-G30:/data/cx/GANs/meshtalk$ python animate_face.py --model_dir weights/pretrained_models --audio_file test.wav --output outputs --face_template myasset/mzd.obj
/home/ubuntu/.local/lib/python3.8/site-packages/torchaudio/backend/utils.py:53: UserWarning: "sox" backend is being deprecated. The default backend will be changed to "sox_io" backend in 0.8.0 and "sox" backend will be removed in 0.9.0. Please migrate to "sox_io" backend. Please refer to pytorch/audio#903 for the detail.
warnings.warn(
load assets...
load models...
Loaded: weights/pretrained_models/vertex_unet.pkl
Loaded: weights/pretrained_models/context_model.pkl
Loaded: weights/pretrained_models/encoder.pkl
animate face mesh...
/home/ubuntu/.local/lib/python3.8/site-packages/torch/functional.py:515: UserWarning: stft will require the return_complex parameter be explicitly specified in a future PyTorch release. Use return_complex=False to preserve the current behavior or return_complex=True to return a complex output. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:653.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore
/home/ubuntu/.local/lib/python3.8/site-packages/torch/functional.py:515: UserWarning: The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.fft or torch.fft.rfft. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:590.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore
Traceback (most recent call last):
File "animate_face.py", line 93, in
geom = template_verts.cuda().view(1, 1, 6172, 3).expand(-1, T, -1, -1).contiguous()
RuntimeError: shape '[1, 1, 6172, 3]' is invalid for input of size 15069

What should I do if I want to animate different obj files?

Can I change the OBJ model?

If I want to change an OBJ face, what are the requirements?
Or is there a template for the face you use, so that many faces could be created from that template?
I read other issues and learned that not all OBJ files can be used.
Does the number of vertices of the mesh need to be the same?
Does the face size need to be the same?

This is a cool project.

mtl and CUDA error

Does anyone know how to resolve this error message?

[screenshot of the error message]

Ran on a Mac M1 chip.

Why set different segment lengths in step 1 and step 2?

Two questions:

  1. Why is the segment length set to 32 in step 1 but to 64 in step 2?
  2. Did you do any extra work for the LSTM? When testing in real situations, the segment is often longer than 32 or 64, which shifts the test data distribution away from the training data.

Issue with Character Facial Orientation and Character Network Topology

Hi, thank you very much for your open-source project. Why is the character not facing forward in the output, and how can I make the character face forward in the generated video? Also, I'd like to ask how to modify the character model without changing the existing network topology. I apologize, as I'm not very familiar with character modeling and related techniques.

Question about failing to animate face

Hello, I have tried to use my own .obj file to train meshtalk, and I have finished training. But when I render the face, the output mp4 file is totally white.

I checked the vertices loaded in render.py; they are all right.
I also trained meshtalk using just zero tensors on the original project, and that works fine.

I was wondering whether you have any idea why the rendering fails?

Thanks.

ignore vertices

Hi, thanks for your excellent work!

How do you decide the indices of ignored vertices in multiface2meshtalk?

I also want to try training meshtalk on a specific facial topology (like the Apple neutral mesh). Do you have any suggestions?

Question about train Autoregressive Model

In your paper and code, why does the model only have access to information from the preceding categorical heads at the current time step, c_{t,<h}?
y = F.conv1d(context, self.masked_linear.weight * self.mask, bias=self.masked_linear.bias)
But I think there should be no sequential relationship between the different heads in the latent space at each time step.

how to make eye mask and mouth mask?

In the 'modality_crossing' loss part, the eye mask and mouth mask are needed. But how can I get these masks? Could you please supply some mask annotations?

How to calculate the mask

How do I calculate the mask in the decoupled loss function? In other words, how should I divide the mouth from the other areas?

Training parameters

Hello,

I am trying to train MeshTalk on the VOCA dataset; however, the loss value explodes if I use a learning rate of 1e-4 or higher, and keeps oscillating around 0.2 if I use a lower learning rate (which does not lead to realistic results). I was wondering which training parameters were used in the paper?

I am using the following parameters:
no. of frames, T = 128
optimizer SGD with lr=9e-5 (at the moment), momentum=0.9, nesterov=True
M_upper = 5 and M_lower = 5
batch_size = 16

Thanks for any help!

I want to convert "context_model.pkl" to "context_model.onnx".

I tried to run the following code to convert the .pkl file to an .onnx file:

import torch
from models.context_model import ContextModel

randinput = torch.randn(1, 126, 128, device='cuda:0')
T = randinput.shape[1]
one_hot = torch.randn(1, T, 16, 128, device='cuda:0')
model_path = "pretrained_models"
context_model = ContextModel(classes=128, heads=16, audio_dim=128)
context_model.load(model_path)
context_model.cuda().eval()
torch.onnx.export(context_model, (0, 0, one_hot, randinput), "context_model_onnx.onnx", verbose=True, opset_version=11,
                  input_names=['t', 'h', 'one_hot', 'audio'])

Where: In the "context_model.py", I replace the contents under “def forward(self, expression_one_hot: th.Tensor, audio_code: th.Tensor):” function with the contents under “def _forward_inference(self, t: int, h: int, context: th.Tensor, audio: th.Tensor):” function.

Then the error is reported as follows:

E:\leaf\meshtalk\models\context_model.py:75: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.historic_t < t:
E:\leaf\meshtalk\models\context_model.py:85: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if h > 0:
E:\leaf\meshtalk\models\context_model.py:100: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.kernel_size > 0 and self.historic_t < t:
E:\leaf\meshtalk\models\context_model.py:103: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if h.shape[-1] < self.receptive_field() - 1:

The reason I found on the Internet is that input variables cannot be used in if statements during tracing.
How to solve this problem?

Training on VOCA/BIWI

Hi,
I am trying to train/test MeshTalk on BIWI and VOCASET, but I found that the training loss and validation loss keep oscillating in stage 1. In stage 2, the CE loss decreases to 0.2 while the val loss is U-shaped around 0.5. I know it is somewhat overfitting, but when I test it on the training/testing set, it always gives "still" results (perhaps an effect of teacher forcing?). Can you give some advice on this? Thanks very much! And if someone has successfully trained this on BIWI/VOCASET, I would greatly appreciate any help.

[loss curve plot]

mesh faces missing for multiface

The mesh (.obj) provided by multiface has almost 2000 fewer faces than the meshtalk mesh (.obj). I wonder how to cope with this. Should I do some remeshing to connect the isolated vertices together?

Which data was used for the pre-trained model

Hi! The paper mentions the following:

We release a subset of 16 subjects of this dataset and our model using only these subjects as a baseline to compare against

Since multiface was released with only 13 identities, can you please confirm what was used for the released pre-trained model? (e.g. the 13 identities in multiface? Those plus 3 other identities? Or another set of 16 identities?)

Thank you!

training result

Thank you for sharing your code and pretrained models. :)
I trained MeshTalk on the Multiface dataset, the same as you did.
However, the training graph in step 2 (blue; step 1: orange) looks like underfitting, and the generated faces are weird.

[training curve screenshot]

2183941_SEN_and_you_think_you_have_language_problems.mp4

I attached my code in training/dataset.py.
I synchronize the center of the vertex frames and the audio frames.
[code screenshot]

I don't know how to modify it. Please help me.

asset files creation

Hi,
I ran custom audio expressions on your neutral mesh object and it ran well. I want to run the audio on my own custom model object files, which I have created for my person model. For this, how do I generate the asset files: face_mean.npy, face_std, forehead_mask, and the neck mask files?
Are these files generated from the object file, or am I supposed to resize the object file to the 6172-vertex dimension in order to use the existing asset files?
Thank you for your help in advance.
