
pytorch_neural_crf's Introduction

LSTM/BERT-CRF Model for Named Entity Recognition (or Sequence Labeling)

This repository implements an LSTM-CRF model for named entity recognition. The model is the same as the one in Lample et al. (2016), except that we do not have the last tanh layer after the BiLSTM. We achieve SOTA performance on both the CoNLL-2003 and OntoNotes 5.0 English datasets (check our benchmark with GloVe and ELMo, and our benchmark results with fine-tuned BERT).

Announcements

  • We implemented distributed training for faster training
  • We implemented a Faster CRF module which allows O(log N) inference and back-tracking!
  • Benchmark results obtained by fine-tuning BERT/RoBERTa:
Model                             | Dataset     | Precision | Recall | F1
BERT-base-cased + CRF (this repo) | CoNLL-2003  | 91.69     | 92.05  | 91.87
Roberta-base + CRF (this repo)    | CoNLL-2003  | 91.88     | 93.01  | 92.44
BERT-base-cased + CRF (this repo) | OntoNotes 5 | 89.57     | 89.45  | 89.51
Roberta-base + CRF (this repo)    | OntoNotes 5 | 90.12     | 91.25  | 90.68

More details

Requirements

  • Python >= 3.6 and PyTorch >= 1.6.0 (tested)
  • pip install transformers
  • pip install datasets
  • pip install accelerate (optional for distributed training)
  • pip install seqeval (optional, only used for evaluation during distributed training)

In the documentation below, we present two ways to run the code:

  1. Run the model by fine-tuning BERT/RoBERTa/etc. from the Transformers package.
  2. Run the model with plain word embeddings (and static ELMo/BERT representations loaded from external vectors).

Our default argument setup refers to the first option.

Usage with Fine-Tuning BERT/RoBERTa (etc.) Models from HuggingFace

  1. Simply set the embedder_type argument to the model name in HuggingFace. For example, to use roberta-large, just change the embedder type to roberta-large.

    python transformers_trainer.py --device=cuda:0 --dataset=YourData --model_folder=saved_models --embedder_type=roberta-base
  2. Distributed Training (If necessary)

    1. We use the HuggingFace accelerate package to enable distributed training. After you set the proper configuration of your distributed environment via accelerate config, you can run the following command for distributed training:
    accelerate launch transformers_trainer_ddp.py --batch_size=30 {YOUR_OTHER_ARGUMENTS}

    Note that this batch size is the batch size per GPU device.

  3. (Optional) Using other models in HuggingFace.

    1. Run the main file with modified argument embedder_type:

      python trainer.py --embedder_type=bert-large-cased

      The default value for embedder_type is roberta-base. Changing the name to something like bert-base-cased or roberta-large directly loads the corresponding model from HuggingFace. Note: if you use other models, remember to replace the tokenization mechanism in config/utils.py.

      Our default tokenizer is assumed to be a fast tokenizer. If your tokenizer does not support fast mode, try setting use_fast=False:

      tokenizer = AutoTokenizer.from_pretrained(conf.embedder_type, add_prefix_space=True, use_fast=False)
    2. Finally, if you would like to know more, read the details below:

      • Tokenization: For BERT, we use the first wordpiece to represent a complete word. Check config/transformers_util.py
      • Embedder: We show how to embed the input tokens to make word representation. Check model/embedder/transformers_embedder.py
    3. Using BERT/RoBERTa as contextualized word embeddings (static, feature-based approach): simply go to model/transformers_embedder.py and uncomment the following:

      self.model.requires_grad = False
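
      Setting requires_grad on the wrapper module may not freeze every weight; a more explicit pattern is to disable gradients on each parameter of the underlying HuggingFace model. A minimal sketch, assuming self.model is the transformers encoder as in the repo's embedder (the helper name is ours):

        # Sketch: freeze a HuggingFace encoder so it acts as a static feature extractor.
        def freeze_encoder(hf_model) -> None:
            for param in hf_model.parameters():
                param.requires_grad = False   # exclude encoder weights from optimizer updates
            hf_model.eval()                   # also disable dropout for deterministic features

        # inside the embedder's __init__ (illustrative):
        # freeze_encoder(self.model)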

Other Usages

Instructions for using word embeddings or external contextualized embeddings (ELMo/BERT) can be found here.

Training with your own data.

  1. Create a folder YourData under the data directory.
  2. Put the train.txt, dev.txt and test.txt files under this directory (make sure the format is compatible, i.e., the first column is the word and the last column is the tag; see the illustration after this list). If you have a different format, simply modify the reader in config/reader.py.
  3. Change the dataset argument to YourData when you run trainer.py.
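
For reference, the expected layout is CoNLL-style columns: one token per line, the word in the first column, the tag in the last column, and a blank line between sentences. An illustration with a made-up sentence (the reader converts these BIO tags to IOBES internally, as the training logs show):

    Barack   B-PER
    Obama    I-PER
    visited  O
    Beijing  B-LOC
    .        O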

Further Details and Extensions

  1. Benchmark Performance
  2. Benchmark on BERT/Roberta

Ongoing Plan

  • Support for ELMo/BERT as features
  • Interactive mode where we can simply import the model and decode a sentence
  • Make the code more modularized (separate the encoder and inference layers) and readable (by adding more comments)
  • Move the benchmark performance documentation into another markdown file
  • Integrate BERT as a module instead of just features
  • Clean up the code for better organization (e.g., imports)
  • Benchmark experiments for Transformer-based models
  • Support FP16 training/inference
  • Support distributed training using accelerate
  • Release some pre-trained NER models
  • Semi-CRF model support

pytorch_neural_crf's People

Contributors

allanj · furkan-celik · yuchenlin


pytorch_neural_crf's Issues

computation of partition function

I am trying to compute probabilities from the Viterbi score but could not get the values to make sense.

As a sanity check I tried pytorch-crf (https://pytorch-crf.readthedocs.io/en/stable/) and its Z values matched my naive implementation, but I could not get the same results with this implementation.
Is there any way to verify the computation of the unlabeled score?
I used the same unary and pairwise potential matrices.

I looked for tests in the repository but could not find any. Any suggestion for writing a test case would be really helpful.

Could you please help?
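
One generic way to sanity-check a CRF partition function on toy inputs is to compare the forward algorithm against brute-force enumeration over all tag sequences; the two must agree up to numerical precision. A minimal sketch, independent of this repo's classes and assuming plain (seq_len, num_tags) unary scores and a (num_tags, num_tags) transition matrix with no START/STOP handling:

    import itertools
    import torch

    def log_partition_bruteforce(unary: torch.Tensor, trans: torch.Tensor) -> torch.Tensor:
        # unary: (seq_len, num_tags); trans[i, j]: score of moving from tag i to tag j
        seq_len, num_tags = unary.shape
        scores = []
        for tags in itertools.product(range(num_tags), repeat=seq_len):
            s = unary[torch.arange(seq_len), torch.tensor(tags)].sum()
            s = s + sum(trans[tags[t], tags[t + 1]] for t in range(seq_len - 1))
            scores.append(s)
        return torch.logsumexp(torch.stack(scores), dim=0)

    def log_partition_forward(unary: torch.Tensor, trans: torch.Tensor) -> torch.Tensor:
        # standard forward algorithm in log space
        alpha = unary[0]
        for t in range(1, unary.size(0)):
            # alpha_new[j] = logsumexp_i(alpha[i] + trans[i, j]) + unary[t, j]
            alpha = torch.logsumexp(alpha.unsqueeze(1) + trans, dim=0) + unary[t]
        return torch.logsumexp(alpha, dim=0)

    unary, trans = torch.randn(4, 3), torch.randn(3, 3)
    assert torch.allclose(log_partition_bruteforce(unary, trans), log_partition_forward(unary, trans))

A similar check against this repo's CRF module would additionally need its START/STOP transitions folded into the toy scores.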

Q1: Can you tell me how to set the appropriate {YOUR_OTHER_ARGUMENTS} in this command: accelerate launch transformers_trainer_ddp.py --batch_size=30 {YOUR_OTHER_ARGUMENTS}.

Q2: when I run this command: python trainer.py --embedder_type=bert-large-cased,
an error occurred:
Traceback (most recent call last):
  File "trainer.py", line 12, in <module>
    from src.config import context_models, get_metric
ImportError: cannot import name 'context_models' from 'src.config' (/home/zhang/compatibility_analysis/pytorch_neural_crf/src/config/__init__.py)

Can you help me fix this issue?

About CRF layer

Hi Allan,
I also have some questions for CRF.

 1. I have trained my BERT-CRF code, but I found it is not better than the transformers BertForTokenClassification class on my dataset. Do you know why the CRF does not work in my situation?
 2. I have not read the CRF code carefully because it is a little hard to understand; can you give us some references/comments to understand it quickly and clearly?

Thank you for your help!

Evaluation question

Sorry for opening another issue again :)

output_spans.add(Span(start, end, output[i][2:]))

For this evaluation part, what I found earlier is that your evaluation uses some useful tricks.

For this example:

I like Beijing University
O O B-City E-University

The result will be (Beijing University, University), and it will be counted as correct, am I right?

Thank you for your reply!

Use custom pre-trained BERT models

Hi,
Suppose we have pretrained BERT on a custom dataset.
How do we use this BERT model instead of the default Hugging Face model?

Follow up question:
I understand that the repo has a feature to use ELMo/BERT's contextual embeddings in a static manner by pre-computing the embeddings for the whole dataset. Any suggestion on how to do this or where to look in the code?
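
Not from the original thread, but for reference: HuggingFace's from_pretrained accepts a local directory containing the config, weights, and tokenizer files, so one plausible route is to point the embedder at that path. A sketch (the path is hypothetical, and whether --embedder_type accepts arbitrary paths depends on how the repo validates that argument):

    from transformers import AutoModel, AutoTokenizer

    local_dir = "/path/to/your/custom-bert"       # hypothetical directory with config.json, model weights and tokenizer files
    tokenizer = AutoTokenizer.from_pretrained(local_dir)
    model = AutoModel.from_pretrained(local_dir)  # loads your custom weights instead of a hub checkpoint

If the repo passes embedder_type straight to from_pretrained, running with --embedder_type=/path/to/your/custom-bert may work as-is; otherwise the loading call in the embedder/config code needs a small change.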

AttributeError: 'str' object has no attribute 'size'

When I run the command
"python transformers_trainer.py --device=cuda:0 --dataset=conll2003_sample --model_folder=saved_models --embedder_type=bert-base-cased"
the error output is: AttributeError: 'str' object has no attribute 'size'
Is it caused by the version of transformers? The version of transformers I used is 4.0.0.

About orig_to_token_index padding problem

I have some questions about your dataset preprocessing code.

In the model/transformers_embedder.py file, we have the TransformersEmbedder class. The return of the forward function is (marked as code [1]):

return torch.gather(word_rep[:, :, :], 1, orig_to_token_index.unsqueeze(-1).expand(batch_size, max_sent_len, rep_size))

As we know, this gather call uses orig_to_token_index as the index. But within a batch, different sentences have different lengths. So I looked at your batch preprocessing code in data/transformers_dataset.py (marked as code [2]):
orig_to_tok_index = feature.orig_to_tok_index + [0] * padding_word_len
label_ids = feature.label_ids + [0] * padding_word_len

You use [0] * padding_word_len to pad orig_to_tok_index and label_ids. So when code [1] runs, it gathers index 0 at the padding positions, which returns the [CLS] embedding vector from BERT at those positions, and the model then predicts the [CLS] embedding as label index 0 (PAD's index in your code).

I think it is a little weird to use 0 padding in the orig_to_token_index list and then predict [CLS] as the PAD label; do we need to change it? Or have I misunderstood the logic?

Hope to receive your explanation.
Thank you very much!
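
For readers following along, a tiny standalone demonstration of the behavior described above (made-up shapes, not the repo's actual tensors): padded index positions set to 0 indeed gather the representation at subword position 0, i.e. the [CLS] vector.

    import torch

    batch_size, num_subwords, max_sent_len, rep_size = 1, 5, 3, 4
    word_rep = torch.arange(num_subwords * rep_size, dtype=torch.float).view(batch_size, num_subwords, rep_size)

    # a sentence with 2 real words (subword indices 1 and 2) and one padding slot padded with 0
    orig_to_tok_index = torch.tensor([[1, 2, 0]])
    gathered = torch.gather(word_rep, 1,
                            orig_to_tok_index.unsqueeze(-1).expand(batch_size, max_sent_len, rep_size))

    assert torch.equal(gathered[0, -1], word_rep[0, 0])   # the padded position received the [CLS]-slot vector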

crf score

tagTransScoresMiddle = torch.gather(currentTagScores[:, 1:, :], 2, tags[:, : sentLength - 1].view(batchSize, sentLength - 1, 1)).view(batchSize, -1) (linear_rf_inferencer.py)

Sorry for bothering you again.
Why does the second index start at 1 for currentTagScores? Is it because the first token (BERT embedding) is '[CLS]'?

THANK YOU.

get error when running in torch 1.8.1

return torch.gather(word_rep[:, 1:, :], 1, orig_to_token_index.unsqueeze(-1).expand(batch_size, max_sent_len, rep_size))
RuntimeError: gather_out_cuda(): Expected dtype int64 for index
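
The error says the index tensor reaching torch.gather is not int64. One plausible fix, assuming orig_to_token_index ends up with a smaller integer dtype somewhere in the data pipeline, is to cast it explicitly before the gather (names follow the quoted line above):

    orig_to_token_index = orig_to_token_index.long()   # torch.gather requires an int64 index tensor
    word_rep_per_word = torch.gather(word_rep[:, 1:, :], 1,
                                     orig_to_token_index.unsqueeze(-1).expand(batch_size, max_sent_len, rep_size))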

How to use CPU for ELMo?

I ran the command below in my CPU environment, as the README says:

python -m preprocess.get_elmo_vec data/conll2003/

but I get the errors below, since I do not have a GPU:

Traceback (most recent call last):
  File "/root/anaconda3/envs/pt_lstmcrf/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/root/anaconda3/envs/pt_lstmcrf/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tmp/pytorch_lstmcrf/preprocess/get_elmo_vec.py", line 99, in <module>
    get_vector()
  File "/home/tmp/pytorch_lstmcrf/preprocess/get_elmo_vec.py", line 75, in get_vector
    elmo = load_elmo(cuda_device)
  File "/home/tmp/pytorch_lstmcrf/preprocess/get_elmo_vec.py", line 38, in load_elmo
    return ElmoEmbedder(cuda_device=cuda_device)
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx

Would you share the steps to get ELMo word embeddings on a CPU?
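
Not an official answer, but allennlp's ElmoEmbedder treats cuda_device=-1 as CPU, so if the repo's load_elmo simply forwards the device id, passing -1 (or editing that call) should avoid the NVIDIA-driver assertion. A sketch of the idea:

    from allennlp.commands.elmo import ElmoEmbedder

    # cuda_device=-1 runs ELMo on the CPU (slower, but no GPU or driver required)
    elmo = ElmoEmbedder(cuda_device=-1)
    layers = elmo.embed_sentence(["I", "like", "Beijing"])   # (3, num_tokens, 1024) array of ELMo layers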

Summary of training/finetuning/prediction commands

Hi,
Really impressive work and a nice repository combining all the NER models.
I was trying to understand the codebase and what its capabilities are. To that end, I made a list of NER baselines and corresponding commands. Can the authors of this repo @allanj @furkan-celik @yuchenlin help me complete this list?

Model                   | Embeddings                                                         | Sample Command                    | Notes
BiLSTM                  | Random                                                             |                                   |
BiLSTM + CharCNN        | Random                                                             | NA                                |
BiLSTM + CharLSTM       | Random                                                             | python trainer.py --use_crf_rnn 0 |
BiLSTM + CharCNN + CRF  | Random                                                             | NA                                |
BiLSTM + CharLSTM + CRF | Random                                                             | python trainer.py --use_crf_rnn 1 |
BiLSTM + CharLSTM + CRF | FastText                                                           |                                   |
BiLSTM + CharLSTM + CRF | static embedding from ELMo                                         |                                   |
BiLSTM + CharLSTM + CRF | static embedding from BERT                                         |                                   |
BiLSTM + CharLSTM + CRF | contextual embedding from ELMo                                     |                                   |
BiLSTM + CharLSTM + CRF | contextual embedding from BERT                                     |                                   |
Default                 | bert-base-uncased                                                  | -                                 |
Default                 | bert-large-uncased                                                 | -                                 |
Finetuned               | bert-base-uncased                                                  | -                                 |
Finetuned               | bert-base-uncased concatenated with pretrained FastText embedding  |                                   |
Default                 | roberta-base                                                       | -                                 |
Finetuned               | roberta-base                                                       | -                                 |
Finetuned               | roberta-base concatenated with pretrained FastText embedding       |                                   |

I understand that some of these configurations might not be supported and that there might be additional configurations this repo supports, so feel free to add/modify the above table. I feel such a one-stop table would help the community and could be a contribution towards documentation.

-Nitesh

Errors when running the default model

I ran the model with GloVe embeddings without any modifications (python trainer.py) and I got this error:

Traceback (most recent call last):
  File "trainer.py", line 237, in <module>
    main()
  File "trainer.py", line 191, in main
    conf = Config(opt)
  File "/home/user/pytorch_lstmcrf/src/config/config.py", line 75, in __init__
    self.print_detail_f1 = args.print_detail_f1
AttributeError: 'Namespace' object has no attribute 'print_detail_f1'

If I set both lines 75 and 76 to True then I get this error. Is this behaviour intended with the default dataset included in the repo?

(pt_lstmcrf) ➜  pytorch_lstmcrf git:(master) ✗ python trainer.py 
device: cpu
seed: 42
dataset: conll2003_sample
embedding_file: data/glove.6B.100d.txt
embedding_dim: 100
optimizer: sgd
learning_rate: 0.01
l2: 1e-08
lr_decay: 0
batch_size: 10
num_epochs: 100
train_num: -1
dev_num: -1
test_num: -1
max_no_incre: 100
model_folder: english_model
hidden_dim: 200
dropout: 0.5
use_char_rnn: 1
static_context_emb: none
add_iobes_constraint: 0
reading the pretraing embedding: data/glove.6B.100d.txt
[Warning] pretrain embedding file not exists, using random embedding
[Data Info] Reading dataset from: 
data/conll2003_sample/train.txt
data/conll2003_sample/dev.txt
data/conll2003_sample/test.txt
[Data Info] Reading file: data/conll2003_sample/train.txt, labels will be converted to IOBES encoding
[Data Info] Modify src/data/ner_dataset.read_txt function if you have other requirements
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 79/79 [00:00<00:00, 470667.64it/s]
number of sentences: 5
[Data Info] Using the training set to build label index
#labels: 11
label 2idx: {'<PAD>': 0, 'S-ORG': 1, 'O': 2, 'S-MISC': 3, 'B-PER': 4, 'E-PER': 5, 'S-LOC': 6, 'B-ORG': 7, 'E-ORG': 8, '<START>': 9, '<STOP>': 10}
[Data Info] Reading file: data/conll2003_sample/dev.txt, labels will be converted to IOBES encoding
[Data Info] Modify src/data/ner_dataset.read_txt function if you have other requirements
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 51/51 [00:00<00:00, 446575.16it/s]
number of sentences: 3
Traceback (most recent call last):
  File "trainer.py", line 237, in <module>
    main()
  File "trainer.py", line 200, in main
    label2idx=train_dataset.label2idx, is_train=False)
  File "/home/Projekte/pytorch_lstmcrf/src/data/ner_dataset.py", line 43, in __init__
    check_all_labels_in_dict(insts=insts, label2idx=self.label2idx)
  File "/home/Projekte/pytorch_lstmcrf/src/data/data_utils.py", line 68, in check_all_labels_in_dict
    raise ValueError(f"The label {label} does not exist in label2idx dict. The label might not appear in the training set.")
ValueError: The label B-MISC does not exist in label2idx dict. The label might not appear in the training set.

Ubuntu 20.04

Can not run ner_predictor.py

Hi allanj

I trained an LSTM-CRF model and tried to predict by running ner_predictor.py, but it seems like this file needs to be updated.
I cannot find the simple_batching function or the Sentence class.

thank you

Macro F1 and Precision

Hi,

I could not find any mention of micro vs. macro metrics, but is it possible for you to implement macro F1 too? I tried to write it on my own but could not figure out the code flow and the dependencies between functions. However, as far as I understand by looking at the get_metric function, you are calculating micro F1. That is fine, but I believe macro F1 gives a more accurate picture, since token-based classification tasks can be tremendously imbalanced. In addition, if you could also exclude the "O" tag, or at least give a parameter to toggle it, that would be great, since the "O" tag is predominant in many token classification tasks.

Thank you for considering this issue
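
Since seqeval is already listed as an optional dependency, one lightweight way to get a macro (per-entity-type) view without touching get_metric is to collect the gold and predicted tag sequences and hand them to seqeval, which scores at the entity level and therefore never counts the "O" tag as a class. A sketch with made-up tag lists:

    from seqeval.metrics import classification_report, f1_score

    y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG"]]
    y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-PER"]]

    print(f1_score(y_true, y_pred, average="micro"))   # entity-level micro F1
    print(f1_score(y_true, y_pred, average="macro"))   # unweighted mean of per-type F1
    print(classification_report(y_true, y_pred))       # per-type precision/recall/F1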

question about parameter

currentTagScores = torch.gather(all_scores, 3, tags.view(batchSize, sentLength, 1, 1).expand(batchSize, sentLength, self.label_size, 1)).view(batchSize, -1, self.label_size)
(code from linear_crf_inference.py)

I cannot figure out why the dim argument is 3 in the gather call. all_scores looks like a 3d tensor, so the max dim should be 2.
thank you

Segmentation fault

Hi,
I was running trainer.py with a GPU.
Then I got the following error:

device: cpu
seed: 42
digit2zero: True
dataset: conll2003
embedding_file: data/glove.6B.100d.txt
embedding_dim: 100
optimizer: sgd
learning_rate: 0.01
momentum: 0.0
l2: 1e-08
lr_decay: 0
batch_size: 10
num_epochs: 100
train_num: -1
dev_num: -1
test_num: -1
model_folder: english_model
hidden_dim: 200
dropout: 0.5
use_char_rnn: 1
context_emb: none
reading the pretraing embedding: data/glove.6B.100d.txt
100%|██████████| 400000/400000 [00:40<00:00, 9893.97it/s]
Reading file: data/conll2003/train.txt
100%|██████████| 217662/217662 [00:01<00:00, 161670.34it/s]
number of sentences: 14041
Reading file: data/conll2003/dev.txt
100%|██████████| 54612/54612 [00:00<00:00, 159794.60it/s]
number of sentences: 3250
Reading file: data/conll2003/test.txt
100%|██████████| 49888/49888 [00:00<00:00, 186748.58it/s]
number of sentences: 3453
#labels: 20
label 2idx: {'<PAD>': 0, 'S-ORG': 1, 'O': 2, 'S-MISC': 3, 'B-PER': 4, 'E-PER': 5, 'S-LOC': 6, 'B-ORG': 7, 'E-ORG': 8, 'I-PER': 9, 'S-PER': 10, 'B-MISC': 11, 'I-MISC': 12, 'E-MISC': 13, 'I-ORG': 14, 'B-LOC': 15, 'E-LOC': 16, 'I-LOC': 17, '<START>': 18, '<STOP>': 19}
Building the embedding table for vocabulary...
[Info] Use the pretrained word embedding to initialize: 25305 x 100
num chars: 77
num words: 25305
[Info] Building character-level LSTM
[Model Info] Input size to LSTM: 150
[Model Info] LSTM Hidden Size: 200
[Model Info] Final Hidden Size: 200
Using SGD: lr is: 0.01, L2 regularization is: 1e-08
number of instances: 14041
[Shuffled] Shuffle the training instance ids
[Info] The model will be saved to: english_model.tar.gz
learning rate is set to: 0.01
Segmentation fault (core dumped)

load data error

Hi Allan,
I need your help.
I ran this:
python transformers_trainer.py --device=cuda:0 --dataset=conll2003_sample --model_folder=saved_models --embedder_type=roberta-base
and then got:
09/15/2022 19:48:30 - INFO - main - device: cuda:0
09/15/2022 19:48:30 - INFO - main - seed: 42
09/15/2022 19:48:30 - INFO - main - dataset: conll2003_sample
09/15/2022 19:48:30 - INFO - main - optimizer: adamw
09/15/2022 19:48:30 - INFO - main - learning_rate: 2e-05
09/15/2022 19:48:30 - INFO - main - momentum: 0.0
09/15/2022 19:48:30 - INFO - main - l2: 1e-08
09/15/2022 19:48:30 - INFO - main - lr_decay: 0
09/15/2022 19:48:30 - INFO - main - batch_size: 30
09/15/2022 19:48:30 - INFO - main - num_epochs: 100
09/15/2022 19:48:30 - INFO - main - train_num: -1
09/15/2022 19:48:30 - INFO - main - dev_num: -1
09/15/2022 19:48:30 - INFO - main - test_num: -1
09/15/2022 19:48:30 - INFO - main - max_no_incre: 80
09/15/2022 19:48:30 - INFO - main - max_grad_norm: 1.0
09/15/2022 19:48:30 - INFO - main - fp16: 0
09/15/2022 19:48:30 - INFO - main - model_folder: saved_models
09/15/2022 19:48:30 - INFO - main - hidden_dim: 0
09/15/2022 19:48:30 - INFO - main - dropout: 0.5
09/15/2022 19:48:30 - INFO - main - embedder_type: roberta-base
09/15/2022 19:48:30 - INFO - main - add_iobes_constraint: 0
09/15/2022 19:48:30 - INFO - main - print_detail_f1: 0
09/15/2022 19:48:30 - INFO - main - earlystop_atr: micro
09/15/2022 19:48:30 - INFO - main - mode: train
09/15/2022 19:48:30 - INFO - main - test_file: data/conll2003_sample/test.txt
09/15/2022 19:48:30 - INFO - main - [Data Info] Tokenizing the instances using 'roberta-base' tokenizer
Ignored unknown kwargs option trim_offsets
09/15/2022 19:48:35 - INFO - main - [Data Info] Reading dataset from:
data/conll2003_sample/train.txt
data/conll2003_sample/dev.txt
data/conll2003_sample/test.txt
09/15/2022 19:48:35 - INFO - src.data.transformers_dataset - [Data Info] Reading file: data/conll2003_sample/train.txt, labels will be converted to IOBES encoding
09/15/2022 19:48:35 - INFO - src.data.transformers_dataset - [Data Info] Modify src/data/transformers_dataset.read_txt function if you have other requirements
100%|████████████████████████████████████████████████████| 79/79 [00:00<00:00, 147990.18it/s]
09/15/2022 19:48:35 - INFO - src.data.transformers_dataset - number of sentences: 5
09/15/2022 19:48:35 - INFO - src.data.transformers_dataset - [Data Info] Using the training
set to build label index
09/15/2022 19:48:35 - INFO - src.data.data_utils - #labels: 11
09/15/2022 19:48:35 - INFO - src.data.data_utils - label 2idx: {'<PAD>': 0, 'S-ORG': 1, 'O': 2, 'S-MISC': 3, 'B-PER': 4, 'E-PER': 5, 'S-LOC': 6, 'B-ORG': 7, 'E-ORG': 8, '<START>': 9, '<STOP>': 10}
09/15/2022 19:48:35 - INFO - src.data.transformers_dataset - [Data Info] We are not limiting the max length in tokenizer. You should be aware of that
Traceback (most recent call last):
  File "D:\Anaconda3\envs\py37torch17\lib\site-packages\transformers\tokenization_utils_base.py", line 245, in __getattr__
    return self.data[item]
KeyError: 'word_ids'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "transformers_trainer.py", line 260, in <module>
    main()
  File "transformers_trainer.py", line 223, in main
    train_dataset = TransformersNERDataset(conf.train_file, tokenizer, number=conf.train_num,
    is_train=True)
  File "D:\pyProj\pytorch_neural_crf-master\src\data\transformers_dataset.py", line 95, in __init__
    self.insts_ids = convert_instances_to_feature_tensors(insts, tokenizer, label2idx)
  File "D:\pyProj\pytorch_neural_crf-master\src\data\transformers_dataset.py", line 38, in convert_instances_to_feature_tensors
    subword_idx2word_idx = res.word_ids(batch_index=0)
  File "D:\Anaconda3\envs\py37torch17\lib\site-packages\transformers\tokenization_utils_base.py", line 247, in __getattr__
    raise AttributeError
AttributeError

I think something cannot run in pytorch_neural_crf-master\src\data\transformers_dataset.py.
I have tried several times but cannot solve this; can you help me? Thank you.
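
For context (not from the original thread): BatchEncoding.word_ids() is only available when the tokenizer is a fast (Rust-backed) tokenizer, so this AttributeError usually means a slow tokenizer was loaded. A quick check, assuming the tokenizer is built with AutoTokenizer as in the README:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True, use_fast=True)
    print(tokenizer.is_fast)        # must be True for res.word_ids() to exist

    res = tokenizer(["I", "like", "Beijing"], is_split_into_words=True)
    print(res.word_ids())           # maps each subword position back to its word index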

Is there a way to turn off CRF and use a dense layer as output decoder?

Hi,

Nice project!

I am trying to run an experiment that compares the performance when we turn off the CRF layer and use a dense layer as the output decoder.
    # input size: size of LSTM hidden states; output size: label size
    linear = nn.Linear(lstm_scores.size()[2], label_size)
    out = linear(lstm_scores)
    out = F.log_softmax(out, dim=1)

    # pick the index of the largest score for each word
    decodeIdx = torch.argmax(inputs, dim=2)
    return decodeIdx

But the precision, recall and F1 scores are really low, around 1.5 or so.
Any suggestion about how to implement this?
Thanks in advance!
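
A likely culprit in the snippet above (an observation, not the repo's answer): the softmax and argmax act on the wrong dimension, and the argmax is taken over inputs rather than the projected scores. For a (batch, seq_len, label_size) tensor both should operate on the last dimension. A corrected sketch with made-up shapes:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    batch_size, seq_len, hidden_dim, label_size = 2, 7, 200, 11
    lstm_scores = torch.randn(batch_size, seq_len, hidden_dim)   # stand-in for the BiLSTM output

    linear = nn.Linear(hidden_dim, label_size)
    out = F.log_softmax(linear(lstm_scores), dim=-1)   # normalize over the label dimension
    decode_idx = torch.argmax(out, dim=-1)             # (batch, seq_len) predicted label ids

    # The training loss must also match this per-token decoder (e.g. cross-entropy over
    # non-padding tokens); keeping the CRF objective while decoding token-by-token is
    # another possible source of mismatch.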

FileNotFound Error even the folder has been created

Hi
I have created the model_files and results folders as the instructions required.
But I still get this error:

FileNotFoundError: [Errno 2] No such file or directory: 'model_files/lstm_200_crf_conll2003_-1_dep_none_elmo_sgd_lr_0.01.m'

Could you please help me with that? Thanks

My working environment:
Anaconda, PyTorch 1.1.0
