
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently includes IWSLT pretrained models.

Home Page: https://youtube.com/c/TheAIEpiphany

License: MIT License



The Original Transformer (PyTorch) 💻 = 🌈

This repo contains a PyTorch implementation of the original transformer paper (:link: Vaswani et al.).
It's aimed at making it easy to start playing and learning about transformers.

Table of Contents

  • What are transformers
  • Understanding transformers
  • Machine translation
  • Setup
  • Usage
  • Hardware requirements
  • Video learning material
  • Acknowledgements
  • Citation

What are transformers

Transformers were originally proposed by Vaswani et al. in a seminal paper called Attention Is All You Need.

You've probably heard of transformers one way or another: GPT-3 and BERT, to name a few well-known ones 🦄. The main idea is that the authors showed you don't have to use recurrent or convolutional layers, and that a simple architecture coupled with attention is super powerful. It brought much better long-range dependency modeling, and the architecture itself is highly parallelizable (:computer::computer::computer:), which leads to better compute efficiency!

Here is what their beautifully simple architecture looks like:

Understanding transformers

This repo is meant to be a learning resource for understanding transformers, as the original transformer by itself is no longer SOTA.

For that purpose the code is (hopefully) well commented, and I've included playground.py, where I've visualized a couple of concepts which are hard to explain with words but super simple once visualized. So here we go!

Positional Encodings

Can you parse this one in the blink of an eye?

Neither can I. Running the visualize_positional_encodings() function from playground.py we get this:

Depending on the position of your source/target token you "pick one row of this image" and you add it to its embedding vector; that's it. These encodings could also be learned, but it's just fancier to do it like this, obviously! 🤓
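
If you prefer code to images, here is a minimal sketch of how the sinusoidal encodings from the paper can be computed (an illustrative, standalone snippet, not a copy of the repo's implementation):

import torch

def sinusoidal_positional_encodings(max_seq_len, model_dimension):
    # One row per position, one column per embedding dimension
    positions = torch.arange(max_seq_len, dtype=torch.float).unsqueeze(1)
    frequencies = torch.pow(10000., -torch.arange(0, model_dimension, 2, dtype=torch.float) / model_dimension)
    encodings = torch.zeros(max_seq_len, model_dimension)
    encodings[:, 0::2] = torch.sin(positions * frequencies)  # even dimensions use sine
    encodings[:, 1::2] = torch.cos(positions * frequencies)  # odd dimensions use cosine
    return encodings  # add row `pos` to the embedding of the token at position `pos`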

Custom Learning Rate Schedule

Similarly, can you parse this one in O(1)?

Nope? So I thought. Here it is visualized:

It's super easy to understand now. Was this part crucial for the transformer's success? I doubt it. But it's cool and makes things more complicated. 🤓 (.set_sarcasm(True))

Note: model dimension is basically the size of the embedding vector; the baseline transformer used 512, the big one 1024.
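
In code, the schedule from the paper boils down to a couple of lines (a minimal sketch of the formula, not the repo's exact optimizer wrapper; warmup_steps=4000 is the paper's value):

def custom_learning_rate(step, model_dimension=512, warmup_steps=4000):
    # Linear warmup followed by inverse square-root decay
    step = max(step, 1)  # avoid division by zero at step 0
    return model_dimension ** (-0.5) * min(step ** (-0.5), step * warmup_steps ** (-1.5))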

Label Smoothing

The first time you hear of label smoothing it sounds tough, but it's not. You usually set your target vocabulary distribution to a one-hot, meaning 1 position out of 30k (or whatever your vocab size is) is set to 1.0 probability and everything else to 0.

With label smoothing, instead of placing 1.0 on that particular position you place, say, 0.9, and you evenly distribute the rest of the "probability mass" over the other positions (that's visualized as a different shade of purple on the image above, in a fictional vocab of size 4 - hence 4 columns).

Note: Pad token's distribution is set to all zeros as we don't want our model to predict those!
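
Here is a rough sketch of how such a smoothed target distribution could be built (illustrative only, not necessarily how this repo implements it):

import torch

def smooth_targets(target_ids, vocab_size, pad_token_id, smoothing=0.1):
    # Put most of the probability mass on the true token and spread the rest evenly
    confidence = 1.0 - smoothing
    rest = smoothing / (vocab_size - 2)  # exclude the true token and the pad token
    distributions = torch.full((target_ids.size(0), vocab_size), rest)
    distributions.scatter_(1, target_ids.unsqueeze(1), confidence)
    distributions[:, pad_token_id] = 0.
    distributions[target_ids == pad_token_id] = 0.  # all zeros for pad positions, as noted above
    return distributions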

Aside from this repo (well duh) I would highly recommend you go ahead and read this amazing blog by Jay Alammar!

Machine translation

The transformer was originally trained for the NMT (neural machine translation) task on the WMT-14 dataset for:

  • English to German translation task (achieved 28.4 BLEU score)
  • English to French translation task (achieved 41.8 BLEU score)

What I did (for now) is train my models on the much smaller IWSLT dataset for the English-German language pair, as I speak those languages, so it's easier to debug and play around.

I'll also train my models on WMT-14 soon; take a look at the todos section.


Anyways! Let's see what this repo can practically do for you! Well it can translate!

Some short translations from my German to English IWSLT model:

Input: Ich bin ein guter Mensch, denke ich. ("gold": I am a good person I think)
Output: ['<s>', 'I', 'think', 'I', "'m", 'a', 'good', 'person', '.', '</s>']
or in human-readable format: I think I'm a good person.

Which is actually pretty good! Maybe even better IMO than Google Translate's "gold" translation.


There are of course failure cases like this:

Input: Hey Alter, wie geht es dir? (How is it going dude?)
Output: ['<s>', 'Hey', ',', 'age', 'how', 'are', 'you', '?', '</s>']
or in human-readable format: Hey, age, how are you?

Which is actually also not completely bad! Because:

  • First of all, the model was trained on IWSLT (TED-like conversations)
  • "Alter" is a colloquial expression for old buddy/dude/mate, but its literal meaning is indeed age.

Similarly for the English to German model.

Setup

So we talked about what transformers are, and what they can do for you (among other things).
Let's get this thing running! Follow the next steps:

  1. git clone https://github.com/gordicaleksa/pytorch-original-transformer
  2. Open your Anaconda console and navigate into the project directory: cd path_to_repo
  3. Run conda env create from project directory (this will create a brand new conda environment).
  4. Run activate pytorch-transformer (for running scripts from your console or set the interpreter in your IDE)

That's it! It should work out-of-the-box, executing the environment.yml file which deals with dependencies.
It may take a while as I'm automatically downloading SpaCy's statistical models for English and German.


The PyTorch pip package comes bundled with some version of CUDA/cuDNN, but it is highly recommended that you install a system-wide CUDA beforehand, mostly because of the GPU drivers. I also recommend using the Miniconda installer as a way to get conda on your system. Follow through points 1 and 2 of this setup and use the most up-to-date versions of Miniconda and CUDA/cuDNN for your system.

Usage

Option 1: Jupyter Notebook

Just run jupyter notebook from your Anaconda console and it will open a session in your default browser.
Open The Annotated Transformer ++.ipynb and you're ready to play!


Note: if you get DLL load failed while importing win32api: The specified module could not be found,
just do pip uninstall pywin32 and then either pip install pywin32 or conda install pywin32 - that should fix it!

Option 2: Use your IDE of choice

You just need to link the Python environment you created in the setup section.

Training

To run the training, start training_script.py; there are a couple of settings you will want to specify:

  • --batch_size - this is important to set to the maximum value that won't give you a CUDA out-of-memory error
  • --dataset_name - Pick between IWSLT and WMT14 (WMT14 is not advisable until I add multi-GPU support)
  • --language_direction - Pick between E2G and G2E

So an example run (from the console) would look like this:
python training_script.py --batch_size 1500 --dataset_name IWSLT --language_direction G2E

The code is well commented so you can (hopefully) understand how the training itself works.

The script will:

  • Dump checkpoint *.pth models into models/checkpoints/
  • Dump the final *.pth model into models/binaries/
  • Download IWSLT/WMT-14 (the first time you run it and place it under data/)
  • Dump tensorboard data into runs/, just run tensorboard --logdir=runs from your Anaconda
  • Periodically write some training metadata to the console

Note: data loading is slow in torchtext, so I've implemented a custom wrapper which adds a caching mechanism and makes things ~30x faster! (it'll be slow the first time you run stuff)

Inference (Translating)

The second part is all about playing with the models and seeing how they translate!
To get some translations, start translation_script.py; there are a couple of settings you'll want to set:

  • --source_sentence - depending on the model you specify, this should be either an English or a German sentence
  • --model_name - one of the pretrained model names: iwslt_e2g, iwslt_g2e or your model(*)
  • --dataset_name - keep this in sync with the model, IWSLT if the model was trained on IWSLT
  • --language_direction - keep in sync, E2G if the model was trained to translate from English to German

(*) Note: after you train your model it'll get dumped into models/binaries; see what its name is and specify it via the --model_name parameter if you want to play with it for translation. If you specify one of the pretrained models, it'll automatically get downloaded the first time you run the translation script.

I'll link the IWSLT pretrained models here as well: English to German and German to English.
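
An example run (from the console) might look like this (the flags are the ones documented above; the sentence is just an illustration):
python translation_script.py --source_sentence "Ich bin ein guter Mensch, denke ich." --model_name iwslt_g2e --dataset_name IWSLT --language_direction G2E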

That's it! You can also visualize the attention; check out this section for more info.

Evaluating NMT models

I tracked 3 curves while training:

  • training loss (KL divergence, batchmean)
  • validation loss (KL divergence, batchmean)
  • BLEU-4

BLEU is an n-gram based metric for quantitatively evaluating the quality of machine translation models.
I used the BLEU-4 metric provided by the awesome nltk Python module.
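
For reference, computing BLEU-4 with nltk looks roughly like this (a toy sketch with dummy token lists, not the repo's evaluation loop):

from nltk.translate.bleu_score import corpus_bleu

references = [[['i', 'think', 'i', "'m", 'a', 'good', 'person', '.']]]  # one list of reference translations per sentence
candidates = [['i', 'think', 'i', "'m", 'a', 'good', 'person', '.']]    # the model's (tokenized) translations
print(corpus_bleu(references, candidates))  # default weights (0.25, 0.25, 0.25, 0.25), i.e. BLEU-4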

Current results (models were trained for 20 epochs; DE stands for Deutsch, i.e. German in German 🤓):

Model                           BLEU score   Dataset
Baseline transformer (EN-DE)    27.8         IWSLT val
Baseline transformer (DE-EN)    33.2         IWSLT val
Baseline transformer (EN-DE)    x            WMT-14 val
Baseline transformer (DE-EN)    x            WMT-14 val

I got these using greedy decoding, so it's a pessimistic estimate; I'll add beam decoding soon.

Important note: initialization matters a lot for the transformer! I initially thought that other implementations' use of Xavier initialization was just another one of those arbitrary heuristics and that PyTorch's default init would do - I was wrong:

You can see 3 runs here: the 2 lower ones used PyTorch's default initialization (one used mean reduction for the KL divergence loss and the better one used batchmean), whereas the upper one used Xavier uniform initialization!
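
If you want to replicate that, applying Xavier uniform initialization in PyTorch is a one-liner per weight matrix (a sketch, assuming a generic nn.Module called model):

import torch.nn as nn

def xavier_init(model):
    for p in model.parameters():
        if p.dim() > 1:  # only weight matrices; skip biases and LayerNorm parameters
            nn.init.xavier_uniform_(p)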


Idea: you could potentially also periodically dump translations for a reference batch of source sentences.
That would give you some qualitative insight into how the transformer is doing, although I didn't do that.
A similar thing is done in fields where it's hard to quantitatively evaluate your model, like GANs and NST.

Tracking using Tensorboard

The above plot is a snippet from my Azure ML run but when I run stuff locally I use Tensorboard.

Just run tensorboard --logdir=runs from your Anaconda console and you can track your metrics during the training.

Visualizing attention

You can use the translation_script.py and set the --visualize_attention to True to additionally understand what your model was "paying attention to" in the source and target sentences.

Here are the attentions I get for the input sentence Ich bin ein guter Mensch, denke ich.

These belong to layer 6 of the encoder. You can see all of the 8 multi-head attention heads.

And this one belongs to layer 6 of the decoder, specifically to its self-attention MHA (multi-head attention) module.
You can notice an interesting triangular pattern which comes from the fact that target tokens can't look ahead!

The 3rd type of MHA module is the source attending one and it looks similar to the plot you saw for the encoder.
Feel free to play with it at your own pace!

Note: there are obviously some bias problems with this model but I won't get into that analysis here

Hardware requirements

You really need decent hardware if you wish to train the transformer on the WMT-14 dataset.

The authors took:

  • 12h on 8 P100 GPUs to train the baseline model and 3.5 days to train the big one.

If my calculations are right that amounts to ~19 epochs (100k steps, each step had ~25000 tokens and WMT-14 has ~130M src/trg tokens) for the baseline and 3x that for the big one (300k steps).
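
A quick sanity check of that estimate, using the numbers from the sentence above:

steps = 100_000             # baseline model training steps
tokens_per_step = 25_000    # ~25000 tokens per step
wmt14_tokens = 130_000_000  # ~130M src/trg tokens in WMT-14
print(steps * tokens_per_step / wmt14_tokens)  # ~19.2 epochs (3x that for the big model's 300k steps)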

On the other hand it's much more feasible to train the model on the IWSLT dataset. It took me:

  • 13.2 min/epoch (1500 token batch) on my RTX 2080 machine (8 GBs of VRAM)
  • ~34 min/epoch (1500 token batch) on Azure ML's K80s (24 GBs of VRAM)

I could have pushed K80s to 3500+ tokens/batch but had some CUDA out of memory problems.

Todos:

Finally there are a couple more todos which I'll hopefully add really soon:

  • Multi-GPU/multi-node training support (so that you can train a model on WMT-14 for 19 epochs)
  • Beam decoding (turns out it's not that easy to implement this one!)
  • BPE and shared source-target vocab (I'm using SpaCy now)

The repo already has everything it needs; these are just bonus points. I've tested everything from environment setup to automatic model download, etc.

Video learning material

If you're having difficulties understanding the code I did an in-depth overview of the paper in this video:

A deep dive into the attention is all you need paper

I have some more videos which could further help you understand transformers:

Acknowledgements

I found these resources useful (while developing this one):

I found some inspiration for the model design in The Annotated Transformer, but I found it hard to understand and it had some bugs. It was mainly written with researchers in mind. Hopefully this repo opens up the understanding of transformers to the common folk as well! 🤓

Citation

If you find this code useful, please cite the following:

@misc{Gordić2020PyTorchOriginalTransformer,
  author = {Gordić, Aleksa},
  title = {pytorch-original-transformer},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/gordicaleksa/pytorch-original-transformer}},
}

Connect with me

If you'd love to have some more AI-related content in your life 🤓, consider:

Licence

License: MIT


pytorch-original-transformer's Issues

torchtext.data import not working in the latest versions of pytorch.

Data manipulation related imports

from torchtext.data import Dataset, BucketIterator, Field, Example
from torchtext.data.utils import interleave_keys
from torchtext import datasets
from torchtext.data import Example

These imports are not working because the torchtext.data classes such as Dataset, BucketIterator, Field, and Example have been removed in the latest versions of torchtext.

Would it be possible to migrate the pytorch-original-transformer code to the new versions of PyTorch/torchtext?
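
For reference, newer torchtext releases (around 0.9-0.11) kept the old API under torchtext.legacy before removing it entirely, so an untested compatibility shim might look like this:

try:
    from torchtext.data import Dataset, BucketIterator, Field, Example
    from torchtext import datasets
except ImportError:
    # torchtext >= 0.9 moved the old classes under torchtext.legacy (removed again in later releases)
    from torchtext.legacy.data import Dataset, BucketIterator, Field, Example
    from torchtext.legacy import datasets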

sharing weight matrix between the two embedding layers and the pre-softmax linear transformation

Hi, thanks for your repo: helps a lot!
In the paper, the weight matrix is shared between the two embedding layers and the pre-softmax linear transformation.
"In our model, we share the same weight matrix between the two embedding layers and the pre-softmax
linear transformation, similar to [30]. " (Page 5, Chapter 3.4 Embeddings and Softmax)
Would it be correct to modify the following rows in transformer_model.py to something like this:
rows 32-33 -> self.src_embedding = self.trg_embedding = Embedding(src_vocab_size, model_dimension)
row 50 -> self.decoder_generator = DecoderGenerator(self.src_embedding.embeddings_table.weight)
row 221 -> def __init__(self, shared_embedding_weights):
row 224 -> self.linear = nn.Linear(shared_embedding_weights.size()[1], shared_embedding_weights.size()[0], bias=False)
del self.linear.weight
self.shared_embedding_weights = shared_embedding_weights
row 232 -> self.linear.weight = self.shared_embedding_weights
row 233 -> return self.log_softmax(self.linear(trg_representations_batch) * math.sqrt(self.shared_embedding_weights.size()[1]))
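
In general, weight tying in PyTorch boils down to pointing the pre-softmax projection at the embedding table. A standalone sketch of the idea from section 3.4 of the paper (not a verified patch against transformer_model.py):

import math
import torch.nn as nn

class TiedEmbeddingAndGenerator(nn.Module):
    def __init__(self, vocab_size, model_dimension):
        super().__init__()
        self.model_dimension = model_dimension
        self.embedding = nn.Embedding(vocab_size, model_dimension)
        self.pre_softmax = nn.Linear(model_dimension, vocab_size, bias=False)
        self.pre_softmax.weight = self.embedding.weight  # share the same weight matrix

    def embed(self, token_ids):
        # The paper scales embeddings by sqrt(model_dimension)
        return self.embedding(token_ids) * math.sqrt(self.model_dimension)

    def generate(self, decoder_representations):
        return nn.functional.log_softmax(self.pre_softmax(decoder_representations), dim=-1)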

An environment problem

Thanks for your work.
I ran into a problem while creating the conda environment; the error is below. Please tell me how to solve this problem.

$ conda env create

Channels:
 - defaults
 - pytorch
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: failed
Channels:
 - defaults
 - pytorch
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: failed

LibMambaUnsatisfiableError: Encountered problems while solving:
  - package pytorch-1.5.0-cpu_py37hd91cbb3_0 requires python >=3.7,<3.8.0a0, but none of the providers can be installed

Could not solve for environment specs
The following packages are incompatible
├─ python 3.8.3  is requested and can be installed;
└─ pytorch 1.5.0  is not installable because it requires
   └─ python >=3.7,<3.8.0a0 , which conflicts with any installable versions previously reported.

Issue regarding "9.1 Download pretrained transformers automatically"

While running translate_a_single_sentence(translation_config) I encountered an error in which the file en-de.tgz is not recognized as a gzip file. What should I do?
The error snippet is reported below:
downloading en-de.tgz
C:\Users..\pytorch-original-transformer\data\iwslt\en-de.tgz: 97.4kB [00:00, 1.60MB/s]

BadGzipFile Traceback (most recent call last)
~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in gzopen(cls, name, mode, fileobj, compresslevel, **kwargs)
1669 try:
-> 1670 t = cls.taropen(name, mode, fileobj, **kwargs)
1671 except OSError:

~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in taropen(cls, name, mode, fileobj, **kwargs)
1646 raise ValueError("mode must be 'r', 'a', 'w' or 'x'")
-> 1647 return cls(name, mode, fileobj, **kwargs)
1648

~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)
1509 self.firstmember = None
-> 1510 self.firstmember = self.next()
1511

~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in next(self)
2310 try:
-> 2311 tarinfo = self.tarinfo.fromtarfile(self)
2312 except EOFHeaderError as e:

~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in fromtarfile(cls, tarfile)
1101 """
-> 1102 buf = tarfile.fileobj.read(BLOCKSIZE)
1103 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)

~\anaconda3\envs\pytorch-transformer\lib\gzip.py in read(self, size)
291 raise OSError(errno.EBADF, "read() on write-only GzipFile object")
--> 292 return self._buffer.read(size)
293

~\anaconda3\envs\pytorch-transformer\lib\_compression.py in readinto(self, b)
67 with memoryview(b) as view, view.cast("B") as byte_view:
---> 68 data = self.read(len(byte_view))
69 byte_view[:len(data)] = data

~\anaconda3\envs\pytorch-transformer\lib\gzip.py in read(self, size)
478 self._init_read()
--> 479 if not self._read_gzip_header():
480 self._size = self._pos

~\anaconda3\envs\pytorch-transformer\lib\gzip.py in _read_gzip_header(self)
426 if magic != b'\037\213':
--> 427 raise BadGzipFile('Not a gzipped file (%r)' % magic)
428

BadGzipFile: Not a gzipped file (b'<!')

During handling of the above exception, another exception occurred:

ReadError Traceback (most recent call last)
in
85
86 # Translate the given source sentence
---> 87 translate_a_single_sentence(translation_config)

in translate_a_single_sentence(translation_config)
5 print(2)
6 # Step 1: Prepare the field processor (tokenizer, numericalizer)
----> 7 _, _, src_field_processor, trg_field_processor = get_datasets_and_vocabs(
8 translation_config['dataset_path'],
9 translation_config['language_direction'],

in get_datasets_and_vocabs(dataset_path, language_direction, use_iwslt, use_caching_mechanism)
41 dataset_split_fn = datasets.IWSLT.splits if use_iwslt else datasets.WMT14.splits
42
---> 43 train_dataset, val_dataset, test_dataset = dataset_split_fn(
44 exts=(src_ext, trg_ext),
45 fields=fields,

~\anaconda3\envs\pytorch-transformer\lib\site-packages\torchtext\datasets\translation.py in splits(cls, exts, fields, root, train, validation, test, **kwargs)
142 cls.urls = [cls.base_url.format(exts[0][1:], exts[1][1:], cls.dirname)]
143 check = os.path.join(root, cls.name, cls.dirname)
--> 144 path = cls.download(root, check=check)
145
146 train = '.'.join([train, cls.dirname])

~\anaconda3\envs\pytorch-transformer\lib\site-packages\torchtext\data\dataset.py in download(cls, root, check)
189 # tarfile cannot handle bare .gz files
190 elif ext == '.tgz' or ext == '.gz' and ext_inner == '.tar':
--> 191 with tarfile.open(zpath, 'r:gz') as tar:
192 dirs = [member for member in tar.getmembers()]
193 tar.extractall(path=path, members=dirs)

~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)
1615 else:
1616 raise CompressionError("unknown compression type %r" % comptype)
-> 1617 return func(name, filemode, fileobj, **kwargs)
1618
1619 elif "|" in mode:

~\anaconda3\envs\pytorch-transformer\lib\tarfile.py in gzopen(cls, name, mode, fileobj, compresslevel, **kwargs)
1672 fileobj.close()
1673 if mode == 'r':
-> 1674 raise ReadError("not a gzip file")
1675 raise
1676 except:

ReadError: not a gzip file

Thank you very much!

Error when running "python training_script.py --batch_size 100 --dataset_name IWSLT --language_direction G2E"

Not sure what is going on here, but the best I can tell is that there is a gzip file that seems to be missing.

Thank You
Tom

Traceback (most recent call last):
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 1670, in gzopen
t = cls.taropen(name, mode, fileobj, **kwargs)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 1647, in taropen
return cls(name, mode, fileobj, **kwargs)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 1510, in __init__
self.firstmember = self.next()
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 2311, in next
tarinfo = self.tarinfo.fromtarfile(self)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 1102, in fromtarfile
buf = tarfile.fileobj.read(BLOCKSIZE)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/gzip.py", line 292, in read
return self._buffer.read(size)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/gzip.py", line 479, in read
if not self._read_gzip_header():
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/gzip.py", line 427, in _read_gzip_header
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b'<!')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "training_script.py", line 192, in
train_transformer(training_config)
File "training_script.py", line 103, in train_transformer
train_token_ids_loader, val_token_ids_loader, src_field_processor, trg_field_processor = get_data_loaders(
File "/home/tom/Downloads/pytorch-original-transformer/utils/data_utils.py", line 223, in get_data_loaders
train_dataset, val_dataset, src_field_processor, trg_field_processor = get_datasets_and_vocabs(dataset_path, language_direction, dataset_name == DatasetType.IWSLT.name)
File "/home/tom/Downloads/pytorch-original-transformer/utils/data_utils.py", line 151, in get_datasets_and_vocabs
train_dataset, val_dataset, test_dataset = dataset_split_fn(
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/site-packages/torchtext/datasets/translation.py", line 144, in splits
path = cls.download(root, check=check)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/site-packages/torchtext/data/dataset.py", line 191, in download
with tarfile.open(zpath, 'r:gz') as tar:
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 1617, in open
return func(name, filemode, fileobj, **kwargs)
File "/home/tom/anaconda3/envs/pytorch-transformer/lib/python3.8/tarfile.py", line 1674, in gzopen
raise ReadError("not a gzip file")
tarfile.ReadError: not a gzip file

Frequency in the positional encodings

What does the frequency represent in positional encoding ?
Why do we need to multiply it with the positional values?

frequencies = torch.pow(10000., -torch.arange(0, model_dimension, 2, dtype=torch.float) / model_dimension)
