
deep-head-pose's People

Contributors

counterfactualsimulation, natanielruiz


deep-head-pose's Issues

How to do batch processing?

Hi, is there a way to do batch processing, say 10 frames at a time?

Actually, I altered it to process just one cropped face at a time. I can now run it on the CPU this way, at about 0.7 seconds per face. Each face is about 100 to 200 pixels square.

So now, I'd like to forward propagate multiple frames, or faces, at a time. Any tips would be great, thanks!

Eric
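For reference, one straightforward approach is to stack the preprocessed crops into a single batch tensor and run one forward pass. Below is a minimal sketch, assuming model is a loaded Hopenet in eval mode and face_crops is a list of PIL face images (the transform mirrors the one in the repo's test scripts):

    # Sketch: forward several cropped faces through the network in one pass.
    import torch
    from torchvision import transforms

    transformations = transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])])

    batch = torch.stack([transformations(f) for f in face_crops])  # (N, 3, 224, 224)
    with torch.no_grad():
        yaw, pitch, roll = model(batch)  # each (N, 66): one row of bin logits per face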

Segmentation fault on GPU

@natanielruiz Hi, I ran your code on the GPU but I get:
Ready to test network.
1
Segmentation fault (core dumped)

I am using the following command:

python code/test_on_video_dlib.py --snapshot "/home/teai/abhilashsk/hpe/snapshot/hopenet_alpha1.pkl" --face_model "/home/teai/abhilashsk/hpe/model/mmod_human_face_detector.dat" --video "/home/teai/abhilashsk/hpe/Data/1-FemaleNoGlasses-Normal.mp4" --output_string "output" --n_frames 300 --fps 20

The code is also not using the GPU fully. I think some code changes are needed for it to work properly on the GPU; can you please let me know what has to be done?

Is anyone getting this kind of error?

dets = cnn_face_detector(frame, 1)
RuntimeError: Error while calling cudnnSetConvolution2dDescriptor((cudnnConvolutionDescriptor_t)conv_handle, padding_y, padding_x, stride_y, stride_x, 1, 1, CUDNN_CROSS_CORRELATION) in file /tmp/pip-build-BV88Oi/dlib/dlib/dnn/cudnn_dlibapi.cpp:876. code: 3, reason: CUDNN_STATUS_BAD_PARAM

About the definition of roll, pitch, and yaw

Hello, thank you for your great work. How are the positive and negative directions of roll, pitch, and yaw defined in the function draw_axis(img, yaw, pitch, roll, tdx=None, tdy=None, size=100)?

Is there any reason for using degrees rather than radians?

Hi Nataniel Ruiz! I am a student who has just begun studying ML, and I have a question about your great work.
First, I want to thank you for providing your good code and insight.
My question: in the code, you used degrees instead of radians, but I think radians would be easier to regress because the range is only -2π to 2π.
Is there any reason for this?

thank you

Runtime error when training Hopenet

Hi natanielruiz:

Firstly, I want to say thank you for your great work! I tested your pretrained model on my own dataset and it works great; the results are accurate and robust. Now I would like to fine-tune your network with my own dataset; however, I found I cannot do it.

I prepared the 300W_LP dataset and generated the file list based on the input format of your code. (By the way, maybe you could provide the file-list generation code in your repository, which would make it self-contained.)

Then, when I ran your train_hopenet.py code, I sometimes got results for 1 or 2 epochs; however, it always ends up giving me the following error message:

Loading data.
Ready to train network.
Epoch [1/5], Iter [100/7653] Losses: Yaw 4.5354, Pitch 4.0671, Roll 4.2844
/opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/THCUNN/ClassNLLCriterion.cu:57: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/THCUNN/generic/ClassNLLCriterion.cu line=87 error=59 : device-side assert triggered
Traceback (most recent call last):
  File "/home/foo/Academy/deep-head-pose/code/train_hopenet.py", line 166, in <module>
    alpha = args.alpha
  File "/home/foo/Ordnance/anaconda2/envs/Hopenet/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/foo/Ordnance/anaconda2/envs/Hopenet/lib/python2.7/site-packages/torch/nn/modules/loss.py", line 482, in forward
    self.ignore_index)
  File "/home/foo/Ordnance/anaconda2/envs/Hopenet/lib/python2.7/site-packages/torch/nn/functional.py", line 746, in cross_entropy
    return nll_loss(log_softmax(input), target, weight, size_average, ignore_index)
  File "/home/foo/Ordnance/anaconda2/envs/Hopenet/lib/python2.7/site-packages/torch/nn/functional.py", line 672, in nll_loss
    return _functions.thnn.NLLLoss.apply(input, target, weight, size_average, ignore_index)
  File "/home/foo/Ordnance/anaconda2/envs/Hopenet/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 47, in forward
    output, *ctx.additional_args)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/THCUNN/generic/ClassNLLCriterion.cu:87

I did some searching, and the most promising answer is at the following link:
https://discuss.pytorch.org/t/runtimeerror-cuda-runtime-error-59-device-side-assert-triggered-at-opt-conda-conda-bld-pytorch-1503970438496-work-torch-lib-thc-generic-thcstorage-c-32/9669/5

It seems like, in some cases, the target is out of bounds for the network's output classes. The following is my running environment:

Python 2.7.14 (with Anaconda)
Using conda virtual environment
pytorch 0.2.0 py27hc03bea1_4cu80 [cuda80] soumith
torchvision 0.1.9 py27hdb88a65_1 soumith

I would like to know whether you have met this kind of problem before, and whether you can give me some ideas on how to solve it. Thank you very much for your help!
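For what it's worth, the assert t >= 0 && t < n_classes fires when a binned label falls outside the valid range of the network's 66 output classes. A hypothetical sanity check (the bin edges follow the repo's 3-degree binning from -99 to +99; the clamp is a workaround of my own, not the author's fix):

    # Hypothetical check: ensure every binned angle lands in classes 0..65.
    import numpy as np

    bins = np.array(range(-99, 102, 3))         # 67 bin edges
    def bin_angle(angle_deg):
        idx = np.digitize(angle_deg, bins) - 1  # bin index of this angle
        return int(np.clip(idx, 0, 65))         # clamp into the valid class range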

Face detection + custom training + validation + performance validation

@natanielruiz Hi, thanks for the awesome work and for sharing. I just had a few queries:

  1. For face detection, should we always use the FRCNN, or is there any possibility of using some other detection technique?
  2. How can we validate the output parameters, i.e. the pitch, roll, and yaw values obtained from the algorithm?
  3. Are the pitch, roll, and yaw values generated relative to the center of the face detection?
  4. Can we use your algorithm for training on a custom dataset, or can the shared model be used on a generic dataset?
  5. For custom training, can you provide some reference steps to achieve this?
  6. I ran your code on the CPU and it is very slow; is there any provision planned for speeding it up on the CPU?

Thanks for the awesome work and sharing

How to parse the correct yaw/pitch/roll from the txt or bin files of the BIWI dataset

Hi,
@natanielruiz
Sorry to bother you, but here's a problem; could you please take a look?
The yaw/pitch/roll values don't match when I parse them for the BIWI dataset.
Take the following sample as an example:

image_path: BIWI-head/hpdb/01/frame_00501_rgb.png
annot_path: BIWI-head/hpdb/01/frame_00501_pose.txt
bin_path: BIWI-head/annot/01/frame_00501_pose.bin

From bin_path I get (according to the author, 3 head translations and 3 rotations, though the order is not certain):

[156.60000610351562, 94.61060333251953,893.541015625,41.50346755981445,-62.06394958496094, -12.801728248596191] 

From annot_path I get:

 0.45684 -0.404949 -0.792031 
-0.103806 0.860022 -0.499587 
0.883471 0.310449 0.350856 
156.6 94.6106 893.541

Using the following code you provided, I get ypr = (-52.37571099378074, -54.91993557104815, 41.55420301391979) in degrees.

    with open(txt_path, 'r') as f:
        lines = f.read().strip().split()
    lines = np.array(lines).astype(np.float32).reshape(-1, 3)
    R = lines[:3, :]  # first three rows form the 3x3 rotation matrix
    R = np.transpose(R)
    # Decompose the rotation matrix into Euler angles, in degrees
    roll = -np.arctan2(R[1][0], R[0][0]) * 180 / np.pi
    yaw = -np.arctan2(-R[2][0], np.sqrt(R[2][1] ** 2 + R[2][2] ** 2)) * 180 / np.pi
    pitch = np.arctan2(R[2][1], R[2][2]) * 180 / np.pi
    return yaw, pitch, roll
  1. The ypr from the bin file doesn't match the ypr from the _pose.txt file.
  2. Shouldn't pitch be a positive value when a person looks down?
    The plotted image: [image]

The definition of head pose: [image]

Thanks a lot.
Your reply would be greatly appreciated.
BR

*_shape.npy files

I wanted to retrain your model and have downloaded the 300W-LP dataset, but I am unable to find the .npy files needed. I see that the other datasets do not use them, so I tried to remove them the same way it is done for the other datasets, but that causes an error. Any insight on this?

Training the model on 300W_LP reports a CUDA runtime error

Hi natanielruiz, thanks a lot for open-sourcing your project. I used your model to test my video and it behaves very well.
Now I want to train your model on 300W_LP:

python code/train_hopenet.py --data_dir=300W_LP --filename_list=filename_list

When the training process has run through several thousand images, it crashes and reports the following error:

/THCUNN/ClassNLLCriterion.cu:101: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [16,0,0] Assertion t >= 0 && t < n_classes failed.

It crashes at train_hopenet.py:177, when accessing "yaw_predicted".

Any ideas? Thanks.

AlexNet

Hello,
First, I am very impressed by the work; the use of a regression loss on the softmax output is non-intuitive but neat and very effective.
I am interested in testing skinnier versions of your model.
In the paper you stated two results for AlexNet in Tables 4 & 5 (alpha=1); what is the difference between them that caused such different results?
On GitHub you have the AlexNet model for testing. Can you also provide a pre-trained snapshot (weights), please?

Also, I've noticed in the issues that you said ResNet34 also worked nicely. Do you have code and a snapshot ready for that?

Thanks a lot!!

About the BIWI dataset

Hi. Thank you for the great project.
About the BIWI dataset, I have some questions about the face detection.
I understand that you use your Dockerface for the face detection.
I tried replacing the preprocessing with MTCNN for face detection, but some frames are missed due to large poses.
Using the detection location from the previous frame can be a solution, but I found it can lead to a great performance drop. Can you tell me how you handle these situations when Dockerface fails?
The training code does not reflect this problem. Thank you!

Image preprocess

@natanielruiz


It seems that the image preprocessing (mainly the bounding-box padding) differs across the different .py files.
I wonder if there is any standard (or rule) for the image padding?
Thank you very much.


test_on_video_dlib.py

x_min -= 2 * bbox_width / 4
x_max += 2 * bbox_width / 4
y_min -= 3 * bbox_height / 4
y_max += bbox_height / 4

test_on_video.py && test_on_video_dockerface.py

x_min -= 50
x_max += 50
y_min -= 50
y_max += 30
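For comparison, the dlib version's padding can be written as one proportional rule. A hypothetical helper (the fractions come from the dlib snippet above; rounding with integer division is an assumption of mine):

    # Sketch of the proportional padding used in test_on_video_dlib.py.
    def pad_bbox(x_min, y_min, x_max, y_max):
        w, h = x_max - x_min, y_max - y_min
        return (x_min - w // 2,      # widen left by half the box width
                y_min - 3 * h // 4,  # extend upward to cover the forehead
                x_max + w // 2,      # widen right by half the box width
                y_max + h // 4)      # extend downward a quarter of the height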

Training and Testing Hopenet on custom dataset

@natanielruiz Hi Nataniel, I just have the following queries on training and testing with a custom dataset.

Training:
1. For a custom dataset, should I have annotation files in the format mentioned in Dockerface?
2. In the code folder I see train_hopenet.py, train_alexnet.py, and train_resnet.py; which file should I use for my training?

Testing
1. How can I evaluate whether the values generated for roll, yaw, and pitch are correct?
2. Can I obtain the duration of a head pose from the mentioned code?

Thanks

Test Score on AFLW2000

@natanielruiz I am trying to evaluate the following model:
https://drive.google.com/open?id=1m25PrSE7g9D2q2XJVMR6IA7RaCvWSzCR
on the AFLW2000 dataset (the 31 images with angles outside the range [-99, 99] are eliminated) with test_hopenet.py, and I am getting the following error scores:
Test error on the 1969 test images. Yaw: 9.1709, Pitch: 7.0361, Roll: 5.7762, MAE: 7.3277
However, the errors reported in the paper (for alpha=1) are: Yaw: 6.920, Pitch: 6.637, Roll: 5.674, MAE: 6.410.

Can you please upload the right model or give some instructions on how to reproduce the results? Thanks in advance.

About paper FAN

Hello. I want to know which FAN variant is meant in the paper.
Since there are four pre-trained FAN models in their GitHub, I just want to make sure I check the right one.
https://github.com/1adrianb/2D-and-3D-face-alignment

    2D-FAN - trained on 300W-LP and finetuned on iBUG training set.

    3D-FAN - trained on 300W-LP

    2D-to-3D-FAN - trained on 300W-LP

    3D-FAN-depth - trained on 300W-LP

I suppose your FAN means 3D-FAN. Is that correct? Thank you!

invalid load key, ' '.

I'm new to PyTorch and am having trouble loading a pre-trained model.
I'm trying to test test_on_video_dlib.py:
python code/test_on_video_dlib.py --snapshot 'snap/hopenet_alpha1.pkl' --face_model /usr/local/lib/python2.7/dist-packages/face_recognition_models/models/mmod_human_face_detector.dat --video vid.mp4 --output_string res --n_frames 100 --fps 25

UnpicklingError Traceback (most recent call last)
in ()
51
52 # Load snapshot
---> 53 saved_state_dict = torch.load(snapshot_path)
54 model.load_state_dict(saved_state_dict)
55

/usr/local/lib/python3.5/dist-packages/torch/serialization.py in load(f, map_location, pickle_module)
356 f = open(f, 'rb')
357 try:
--> 358 return _load(f, map_location, pickle_module)
359 finally:
360 if new_fd:

/usr/local/lib/python3.5/dist-packages/torch/serialization.py in _load(f, map_location, pickle_module)
530 f.seek(0)
531
--> 532 magic_number = pickle_module.load(f)
533 if magic_number != MAGIC_NUMBER:
534 raise RuntimeError("Invalid magic number; corrupt file?")

UnpicklingError: invalid load key, '

I used Python 2 and 3 and every one of the three provided models, and I always get the same result.
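A note in passing: an UnpicklingError like this often means the snapshot file itself is corrupt or was not downloaded correctly (for example, an HTML error page saved in place of the weights). A quick hypothetical check:

    # Peek at the first bytes of the snapshot; a saved HTML page typically
    # starts with b'<', which a valid torch snapshot will not.
    with open('snap/hopenet_alpha1.pkl', 'rb') as f:
        print(f.read(16))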

I get an error when executing the Python code

File "code/test_on_video_dlib.py", line 7, in
import torch
File "/usr/local/lib/python2.7/dist-packages/torch/init.py", line 53, in
from torch._C import *
ImportError: dlopen: cannot load any more object with static TLS

When training the model

Hi, natanielruiz,
I use your code to train the model:

python code/train_hopenet.py --data_dir /AFLW2000/AFLW2000 --filename_list AFLW2000/imglist.txt

but there is a bug:

[screenshot of the error]

How should I modify the code? I hope you can provide the latest code.

Run command field information?

python code/test_on_video_dockerface.py --snapshot ./hopenet_robust_alpha1.pkl --video ./video/1.mp4 --bboxes FACE_BOUNDING_BOX_ANNOTATIONS --output_string new_output --n_frames 2000 --fps 20

  1. How should the FACE_BOUNDING_BOX_ANNOTATIONS parameter be written?

  2. error info:
    from torch.utils.serialization import load_lua
    ImportError: No module named serialization

Which torch version is required?

Why is the combined loss better?

I want to get an answer to my question, but I can't find one in the article.
In my opinion, the answer may be that the combined loss is better suited to training the model; many comments point out that a regression loss is hard to train, while a classification loss is easier.
So, what do you think about it?

Draw curves

Hello, can you tell me what tools you used to draw the pose estimation cumulative error distribution curves? Thank you very much.

Flip, Blur?

Hello, I want to know the meaning of this code in datasets.py:
    # Flip? Mirror the image left-right with probability 0.5; a horizontal
    # flip negates the yaw and roll angles (pitch is unchanged).
    rnd = np.random.random_sample()
    if rnd < 0.5:
        yaw = -yaw
        roll = -roll
        img = img.transpose(Image.FLIP_LEFT_RIGHT)

    # Blur? Apply a blur filter to roughly 5% of the images as augmentation.
    rnd = np.random.random_sample()
    if rnd < 0.05:
        img = img.filter(ImageFilter.BLUR)

Thanks.

When I run test_on_video_dlib.py using hopenet_alpha1.pkl

error message:

Traceback (most recent call last):

File "/home/hhf/桌面/deep-head-pose-master/code/test_on_video_dlib.py", line 59, in
cnn_face_detector = dlib.cnn_face_detection_model_v1(args.face_model)

RuntimeError: An error occurred while trying to read the first object from the file /home/hhf/桌面/hopenet_alpha1.pkl.
ERROR: Error deserializing object of type int

I don't know the reason for this error.

Hi, natanielruiz. Could you tell me how to run the code correctly?

Hi, natanielruiz.
I'm a student who has just started to learn deep learning. I have installed Anaconda3, PyTorch, and so on. I tried to run your code:

python code/test_on_video_dockerface.py --snapshot PATH_OF_SNAPSHOT --video PATH_OF_VIDEO --bboxes FACE_BOUNDING_BOX_ANNOTATIONS --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO

and there are some errors:

Reloaded modules: datasets, utils, hopenet
Traceback (most recent call last):

  File "<ipython-input-9-5912a37fc872>", line 1, in <module>
    runfile('/home/deep/文档/deep-head-pose-master/code/test_on_video_dockerface.py', wdir='/home/deep/文档/deep-head-pose-master/code')

  File "/home/deep/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 710, in runfile
    execfile(filename, namespace)

  File "/home/deep/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 101, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "/home/deep/文档/deep-head-pose-master/code/test_on_video_dockerface.py", line 48, in <module>
    if not os.path.exists(args.video_path):

  File "/home/deep/anaconda3/lib/python3.6/genericpath.py", line 19, in exists
    os.stat(path)

TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

I don't know whether what I did is right or not. I want to achieve the effect you show in the video. Could you tell me how to run your code correctly? Thank you very much!

dlib HOG face detector - zero division error

Hi, has anyone tried using the HOG-based dlib face detector with the deep head pose model?

The dlib CNN model was even slower for me. Since I'm using just a CPU, the test was like 30 secs a frame... so maybe using a HOG model would be slightly faster.

However, when I make the appropriate swap between the dlib CNN and HOG detectors, I get a zero division error when I pass the image to the transform object. Note: I made sure to get the proper bounding-box format from the HOG detector's output, so I don't think that's the problem.

This is the error:
Traceback (most recent call last):
File "code/test_on_video_dlib.py", line 153, in
img = transformations(img)
File "/anaconda3/envs/deepheadpose-env/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 49, in call
File "/anaconda3/envs/deepheadpose-env/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 175, in call
File "/anaconda3/envs/deepheadpose-env/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/functional.py", line 203, in resize
ZeroDivisionError: division by zero
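For anyone hitting the same error: in that torchvision version, resize divides by the image width or height to preserve the aspect ratio, so a crop with zero width or height (e.g. a bounding box clipped entirely outside the frame) triggers exactly this exception. A hypothetical guard before the transform:

    # Assumed workaround: skip degenerate detections before transforming.
    if x_max - x_min <= 0 or y_max - y_min <= 0:
        continue  # nothing to crop for this detection
    img = transformations(img)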

how to calculate MSE loss?

Hi,

In calculating the MSE loss, a series of code lines is written. My question is about the third line below:
yaw, pitch, roll = model(image)
...
yaw_predicted = softmax(yaw)
...
yaw_predicted = torch.sum(yaw_predicted * idx_tensor, 1) * 3 - 99 # where did this formula come from? what is the logic behind it?
...
loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)

Thanks in advance
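For context, this is the expected-value decoding described in the paper: the softmax turns the 66 bin logits into probabilities, bin index i corresponds to the angle 3*i - 99 degrees, and the weighted sum is the expectation of those bin angles, giving a continuous prediction. A minimal sketch of the same computation, assuming yaw holds a batch of raw logits:

    # Expected-value decoding of the binned yaw prediction.
    import torch
    import torch.nn.functional as F

    idx_tensor = torch.arange(66, dtype=torch.float32)       # bin indices 0..65
    probs = F.softmax(yaw, dim=1)                            # (N, 66) bin probabilities
    yaw_deg = torch.sum(probs * idx_tensor, dim=1) * 3 - 99  # expectation mapped to degrees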

different loss in different task

You use MSELoss as the regression loss in train_hopenet.py, but SmoothL1Loss as the regression loss in train_resnet50_regression.py. Can you explain the reason for that?
Thanks for your excellent work !

The loss of the network

Dear author:
I am confused about the backward pass of the network's loss. Do the three separate losses backpropagate alternately? Hoping for your reply.

train error

I tried to train the model with:
python train_hopenet.py --gpu=1 --num_epochs=30 --batch_size=16 --lr=0.00001 --dataset=Pose_300W_LP_random_ds --data_dir=../dataSet/300W_LP --filename_list=../filePath --output_string=headPose --alpha=1
with the error:
/pytorch/torch/lib/THCUNN/ClassNLLCriterion.cu:101: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
CUDA error after cudaEventDestroy in future dtor: device-side assert triggered
Traceback (most recent call last):
  File "train_hopenet.py", line 177, in <module>
    loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)

Did somebody encounter this problem?
@natanielruiz

The loss does not decrease during training

Hi natanielruiz:

I have been trying to reproduce your paper's results in recent days; however, I found I cannot get the loss to decrease when I train your model on the 300W_LP dataset. I used the same parameters you provided in your paper:

alpha = 1, lr = 1e-5 and default parameters for Adam Optimizer.

I ran your network for 25 epochs and the yaw loss oscillates around 3000, which means the MSE loss is still far too large for the yaw angle.

Do you have any idea how to debug the network or solve this issue? Thank you very much for your help!

How does fc_finetune work?

Hello,
Thanks for your great work.
Currently I'm trying to reimplement Hopenet in tensorpack. Could you please kindly answer the following question:

  1. How does the fc_finetune layer work in Hopenet? Is it being ignored?
    # Vestigial layer from previous experiments
    self.fc_finetune = nn.Linear(512 * block.expansion + 3, 3)

Thanks a lot.

Different results from the paper (AFW dataset)

Hi, thank you for your great work.
I checked the performance of the pretrained models (300W-LP, alpha 1 and 300W-LP, alpha 2); the results on AFLW2000 are the same as in the paper.
But when I test the models on the AFW dataset, the results are very different.
I wrote my own code to compute the discrete predictions by rounding to the nearest 15 degrees; the yaw accuracy is only 21.15%, and the mean absolute error I measured is 53.3674.
So even if I made some mistake calculating the discrete predictions, the yaw MAE seems too large. I am wondering which step is missing to reproduce the result in the paper. (I have made sure the input format is the same as the one required in datasets.py.)

Test script for images

Hi! When will you upload the test script for images? I really need it.

About the dataset annotations

Thanks for your work, but I have some questions about the annotation .txt files for the datasets. The datasets I downloaded didn't include a .txt file of pose annotations. If I want to train my own net following your work, could you please give me some advice or a tutorial? Thank you very much again.

What is snapshot?

Apologies for the simple question, but what is the snapshot argument?

I think there are some typos in the code

Hi natanielruiz, I'm a beginner student of DL; thank you for open-sourcing this really great code for us.

Anyway, I think I found a typo in your code:

pose = utils.get_ypr_fro # Head pose from AFLW2000 datasetp.pi

I think it should be:
pose = utils.get_ypr_from_mat(mat_path)
pitch = pose[0] * 180 / np.pi

Thanks :)

Incorrect class number + deprecations

As far as I can tell, the network is trained to classify between 66 classes, but there are in fact 67:
bins = np.array(range(-99, 102, 3)) generates an array from -99 to 99 (inclusive) with 67 elements.

Provided the data is sanitised this is almost never an issue, but as it stands the network doesn't seem able to handle training data across the full range of -99 to 99 degrees (inclusive): it throws an error if something lands in class 66 (i.e. 99 degrees), as that is outside the number of classes (indexed 0-65).

Also, an unimportant note: transforms.Scale has been renamed to transforms.Resize in recent torchvision releases, and nn.Softmax now requires the argument dim=1.

I also wonder why there is no validation during training; as it is, it is hard to determine whether the network is simply overfitting.

That said, this is really cool work, and the results are very impressive. I've hacked together some fixes for these issues and fine-tuned the network for animal head-pose estimation with reasonable success as well.
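To make the off-by-one concrete, here is a small check (assuming angles are binned with np.digitize over these edges, which is consistent with the asserts reported in the training issues above):

    import numpy as np

    bins = np.array(range(-99, 102, 3))
    print(len(bins))                      # 67 edges
    print(np.digitize([99.0], bins) - 1)  # -> [66]: one past the last valid class, 65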
