Wn
a Python library for wordnets
Available Wordnets | Documentation | FAQ | Migrating from NLTK | Roadmap


Wn is a Python library for exploring information in wordnets.

Installation

Install it from PyPI using pip:

pip install wn

Or install using conda from the conda-forge channel (conda-forge/wn-feedstock):

conda install -c conda-forge wn

Getting Started

First, download some data:

python -m wn download oewn:2023  # the Open English WordNet 2023

Now start exploring:

>>> import wn
>>> en = wn.Wordnet('oewn:2023')        # Create Wordnet object to query
>>> ss = en.synsets('win', pos='v')[0]  # Get the first synset for 'win'
>>> ss.definition()                     # Get the synset's definition
'be the winner in a contest or competition; be victorious'

Features

Available Wordnets

Any WN-LMF-formatted wordnet can be added to Wn's database from a local file or remote URL, but Wn also maintains an index (see wn/index.toml) of available projects, similar to a package manager for software, to aid in the discovery and downloading of new wordnets. The projects in this index are listed below.

English Wordnets

There are several English wordnets available. In general it is recommended to use the latest Open English Wordnet, but if you have stricter compatibility needs, e.g., for experiment replicability, you may try the OMW English Wordnet based on WordNet 3.0 (compatible with the Princeton WordNet 3.0 and with the NLTK), or OpenWordnet-EN (for use with the Portuguese wordnet OpenWordnet-PT).

| Name | Specifier | # Synsets | Notes |
| --- | --- | --- | --- |
| Open English WordNet | oewn:2023 | 120135 | Recommended |
| | oewn:2022 | 120068 | |
| | oewn:2021 | 120039 | |
| | ewn:2020 | 120053 | |
| | ewn:2019 | 117791 | |
| OMW English Wordnet based on WordNet 3.0 | omw-en:1.4 | 117659 | Included with omw:1.4 |
| OMW English Wordnet based on WordNet 3.1 | omw-en31:1.4 | 117791 | |
| OpenWordnet-EN | own-en:1.0.0 | 117659 | Included with own:1.0.0 |

Other Wordnets and Collections

These are standalone non-English wordnets and collections. The wordnets of each collection are listed further down.

| Name | Specifier | # Synsets | Language |
| --- | --- | --- | --- |
| Open Multilingual Wordnet | omw:1.4 | n/a | multiple [mul] |
| Open German WordNet | odenet:1.4 | 36268 | German [de] |
| | odenet:1.3 | 36159 | |
| Open Wordnets for Portuguese and English | own:1.0.0 | n/a | multiple [mul] |
| KurdNet | kurdnet:1.0 | 2144 | Kurdish [ckb] |

Open Multilingual Wordnet (OMW) Collection

The Open Multilingual Wordnet collection (omw:1.4) installs the following lexicons (from here) which can also be downloaded and installed independently:

| Name | Specifier | # Synsets | Language |
| --- | --- | --- | --- |
| Albanet | omw-sq:1.4 | 4675 | Albanian [sq] |
| Arabic WordNet (AWN v2) | omw-arb:1.4 | 9916 | Arabic [arb] |
| BulTreeBank Wordnet (BTB-WN) | omw-bg:1.4 | 4959 | Bulgarian [bg] |
| Chinese Open Wordnet | omw-cmn:1.4 | 42312 | Mandarin (Simplified) [cmn-Hans] |
| Croatian Wordnet | omw-hr:1.4 | 23120 | Croatian [hr] |
| DanNet | omw-da:1.4 | 4476 | Danish [da] |
| FinnWordNet | omw-fi:1.4 | 116763 | Finnish [fi] |
| Greek Wordnet | omw-el:1.4 | 18049 | Greek [el] |
| Hebrew Wordnet | omw-he:1.4 | 5448 | Hebrew [he] |
| IceWordNet | omw-is:1.4 | 4951 | Icelandic [is] |
| Italian Wordnet | omw-iwn:1.4 | 15563 | Italian [it] |
| Japanese Wordnet | omw-ja:1.4 | 57184 | Japanese [ja] |
| Lithuanian WordNet | omw-lt:1.4 | 9462 | Lithuanian [lt] |
| Multilingual Central Repository | omw-ca:1.4 | 45826 | Catalan [ca] |
| Multilingual Central Repository | omw-eu:1.4 | 29413 | Basque [eu] |
| Multilingual Central Repository | omw-gl:1.4 | 19312 | Galician [gl] |
| Multilingual Central Repository | omw-es:1.4 | 38512 | Spanish [es] |
| MultiWordNet | omw-it:1.4 | 35001 | Italian [it] |
| Norwegian Wordnet | omw-nb:1.4 | 4455 | Norwegian (Bokmål) [nb] |
| Norwegian Wordnet | omw-nn:1.4 | 3671 | Norwegian (Nynorsk) [nn] |
| OMW English Wordnet based on WordNet 3.0 | omw-en:1.4 | 117659 | English [en] |
| Open Dutch WordNet | omw-nl:1.4 | 30177 | Dutch [nl] |
| OpenWN-PT | omw-pt:1.4 | 43895 | Portuguese [pt] |
| plWordNet | omw-pl:1.4 | 33826 | Polish [pl] |
| Romanian Wordnet | omw-ro:1.4 | 56026 | Romanian [ro] |
| Slovak WordNet | omw-sk:1.4 | 18507 | Slovak [sk] |
| sloWNet | omw-sl:1.4 | 42583 | Slovenian [sl] |
| Swedish (SALDO) | omw-sv:1.4 | 6796 | Swedish [sv] |
| Thai Wordnet | omw-th:1.4 | 73350 | Thai [th] |
| WOLF (Wordnet Libre du Français) | omw-fr:1.4 | 59091 | French [fr] |
| Wordnet Bahasa | omw-id:1.4 | 38085 | Indonesian [id] |
| Wordnet Bahasa | omw-zsm:1.4 | 36911 | Malaysian [zsm] |

Open Wordnet (OWN) Collection

The Open Wordnets for Portuguese and English collection (own:1.0.0) installs the following lexicons (from here) which can also be downloaded and installed independently:

| Name | Specifier | # Synsets | Language |
| --- | --- | --- | --- |
| OpenWordnet-PT | own-pt:1.0.0 | 52670 | Portuguese [pt] |
| OpenWordnet-EN | own-en:1.0.0 | 117659 | English [en] |

Collaborative Interlingual Index

While not a wordnet, the Collaborative Interlingual Index (CILI) represents the interlingual backbone of many wordnets. Wn will function without CILI loaded, including for interlingual queries, but adding it to the database makes available the full list of concepts, their status (active, deprecated, etc.), and their definitions.

| Name | Specifier | # Concepts |
| --- | --- | --- |
| Collaborative Interlingual Index | cili:1.0 | 117659 |

Changes to the Index

ewn → oewn

The 2021 version of the Open English WordNet (oewn:2021) has changed its lexicon ID from ewn to oewn, so the index is updated accordingly. The previous versions are still available as ewn:2019 and ewn:2020.

pwn → omw-en, omw-en31

The wordnet formerly called the Princeton WordNet (pwn:3.0, pwn:3.1) is now called the OMW English Wordnet based on WordNet 3.0 (omw-en) and the OMW English Wordnet based on WordNet 3.1 (omw-en31). This is more accurate, as it is an OMW-produced derivative of the original WordNet data, and it also avoids license or trademark issues.

*wn → omw-* for OMW wordnets

All OMW wordnets have changed their ID scheme from *wn to omw-*, and the version no longer includes +omw (e.g., bulwn:1.3+omw is now omw-bg:1.4).


wn's Issues

Support xz compression (lzma)

lzma has better compression ratios than gzip but requires more memory. Since transmitting XML over the wire can take a while, better compression would be good.
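For illustration, both codecs are available in Python's standard library, so supporting either suffix is a small change. A minimal sketch (the decompress helper here is hypothetical, not part of Wn's API):

```python
import gzip
import lzma

def decompress(data: bytes, suffix: str) -> bytes:
    """Decompress a downloaded payload based on its file suffix."""
    if suffix == ".xz":
        return lzma.decompress(data)
    if suffix == ".gz":
        return gzip.decompress(data)
    return data  # assume plain XML

# Round-trip with the xz codec:
payload = lzma.compress(b"<LexicalResource/>")
assert decompress(payload, ".xz") == b"<LexicalResource/>"
```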

Add default license to project

Currently each version of an indexed project has its own license. While it's possible for a project to relicense with a new version, I think it's very unlikely, so it would be more convenient to have a default license defined at the project level. E.g.,

config.add_project('ewn', 'Open English Wordnet', 'en', license='...')
config.add_version('ewn', '2019', url='...')
config.add_version('ewn', '2020', url='...')

Then wn.config.get_project_info() would use the version license if present and the default project license otherwise.

Pre-release on PyPI

There is increasing interest in this module, so it would be nice to publish it so it's easier to install.

I propose to replace the wn project on PyPI with this repository. The old versions (through 0.0.23) of the existing project would remain on PyPI.

I have permissions to push to https://pypi.org/project/wn/ but doing so would overwrite the project that's there with a completely different codebase. The existing code for that project is at https://github.com/nltk/wordnet/ (for which I also have write permissions) and it appears to have some usage (see the list of dependents), but I think development has stalled.

Add FAQ document

This library does things differently than previous ones. There are bound to be some questions. Some might be:

  • Why is downloading/building slower than the NLTK?
  • Where are the Lemma objects? What are these Word and Sense objects?
  • Why can't I get a Synset via the dog.n.01 kind of IDs?
  • Why is the database so big?
  • Why don't all wordnets share the same synsets?

We can provide a FAQ document that answers questions like these.

Support empty synsets

When getting a synset for an ILI that has no synset in some lexicon, we should return some kind of empty synset instead of nothing. This would be the case both when an explicit ILI is given and when traversing synset relations.

Traverse relations from other lexicons

With the OMW it is implicit that all the loaded wordnets share relations; even synsets are shared. But in this project, every synset belongs only to the lexicon that defines it. For projects that depend on the Princeton WordNet (or another wordnet) for their structure, there is currently no implemented way to explore the structure starting from the dependent project's synsets, but this can be done by hopping to synsets that share an ILI. There are potential issues, such as when a relation goes to a synset with no corresponding ILI-linked synset in the desired language/lexicon.

Load project index from a TOML file

Using TOML means another dependency, but it's a pretty standard one these days, like setuptools.

This issue (or a similar solution) is required by #43. Below illustrates my proposed schema (and assuming #51 is resolved):

[ewn]
label = "Open English Wordnet"
language = "en"
license = "https://creativecommons.org/licenses/by/4.0/"

[ewn.versions.2019]
url = "https://en-word.net/static/english-wordnet-2019.xml.gz"

[ewn.versions.2020]
url = "https://en-word.net/static/english-wordnet-2020.xml.gz"

It would be nice to just have ewn.2020, etc., for each version, but then version names are competing for the project namespace (meaning a version couldn't be called "label", "language", or "license", but we might want to expand that list in the future). Putting it in its own "versions" namespace lets it be anything.

Implement Morphy for English

"Morphy" is a very simple lemmatizer that is commonly included with English wordnet lookup. It should be included in this repository for completeness.

The definition of Morphy's behavior is here: https://wordnet.princeton.edu/documentation/morphy7wn

The NLTK has a Python implementation here, but I think we could follow the link above to recreate the algorithm from scratch. It would then be useful to look for differences with the NLTK. For instance, the NLTK has a check_exceptions parameter that just checks if the form is in the list of exceptions (see Exception Lists in the Princeton link above), but we don't keep such lists.
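As a rough illustration of Morphy's rules of detachment (verb rules only, exception lists omitted as described above; morphy_verb is a hypothetical name, not Wn's API):

```python
# (suffix, replacement) pairs from the morphy(7WN) verb rules
DETACHMENTS = [
    ("ies", "y"), ("es", "e"), ("es", ""), ("ed", "e"),
    ("ed", ""), ("ing", "e"), ("ing", ""), ("s", ""),
]

def morphy_verb(form: str, lexicon: set) -> list:
    """Return candidate lemmas of *form* that exist in *lexicon*."""
    candidates = [form] if form in lexicon else []
    for suffix, repl in DETACHMENTS:
        if form.endswith(suffix):
            lemma = form[: len(form) - len(suffix)] + repl
            if lemma in lexicon and lemma not in candidates:
                candidates.append(lemma)
    return candidates

print(morphy_verb("taking", {"take"}))  # ['take'] via the ing -> e rule
```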

Wordnet guide

This project could use a guide for the structure of wordnets. While this information is documented elsewhere, it would be nice to rephrase it around the interface that this module provides, such as the distinct synset structures per language and the purpose of having separate Synsets, Senses, and LexicalEntries (Words).

Add docstrings to the WordNet class

The following need docstrings:

  • wn.WordNet
    • WordNet.lgcode
    • WordNet.lexicons
    • WordNet.expanded_lexicons
    • WordNet.word
    • WordNet.words
    • WordNet.synset
    • WordNet.synsets
    • WordNet.sense
    • WordNet.senses

The docstrings of the following functions should be migrated to the corresponding methods in the Wordnet class and these should defer to those methods.

  • wn.word
  • wn.words
  • wn.sense
  • wn.senses
  • wn.synset
  • wn.synsets

E.g., for wn.words(), we might have:

def words(form: str = None,
          pos: str = None,
          lgcode: str = None,
          lexicon: str = None) -> List[Word]:
    """Return the list of matching words.

    This will create a :class:`WordNet` object using the *lgcode* and
    *lexicon* arguments. The remaining arguments are passed to the
    :meth:`WordNet.words` method.

    """

Installation and setup guide

There should be a guide that goes beyond the README in providing instructions to install Wn, to initialize the database and download/install wordnets, and to alter the default configuration (adding new projects or versions, etc.).

NLTK wordnet -> wn migration guide

If this is to supplant the NLTK's module, it needs a clear migration guide. At least there should be a table describing similar operations:

| Operation | nltk.corpus.wordnet | wn |
| --- | --- | --- |
| Lookup Synsets by Word form | wn.synsets("chat") | wn.synsets("chat") |
| | wn.synsets("chat", pos="v") | wn.synsets("chat", pos="v") |
| Lookup Synsets by POS | wn.all_synsets(pos="v") | wn.synsets(pos="v") |

Although it might make more sense to have separate tables for monolingual and multilingual operations.

Install LMF packages and collections

In addition to the XML files, it would be convenient if users could add "LMF packages" or "collections":

  • An LMF package is a directory containing exactly one WN-LMF XML file and (optionally) three files with metainformation:

    • README(.md|.txt|.rst)
    • LICENSE(.md|.txt|.rst)
    • citation.bib
  • An LMF collection is a directory containing one or more LMF packages. It can also take the optional metainfo files, but these pertain to the collection rather than to any individual LMF file/package. A collection directory should not contain XML files directly.

Packages and collections may be archived as a tarball, but only at the outermost directory (a collection should not contain tarred packages).

Add NLTK compatibility shim

To help users migrating from the NLTK, it might be useful to have a module that replicates the API of the NLTK, with the same argument names, default values, return values, etc.

Also see #18

Only add unique syntactic behaviors

The subcategorization frames of <SyntacticBehaviour> elements are highly redundant (there are only 39 unique frames among the thousands of instances). In a future version of LMF, these may be listed separately and referred to from elsewhere in the document. To ease this transition, only store the unique frames and reuse their identifiers in the database.
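A minimal sketch of the proposed deduplication, in plain Python over a hypothetical list of frame strings (not Wn's actual storage code):

```python
def dedupe_frames(frames: list) -> tuple:
    """Return (unique_frames, ids) where ids[i] indexes frames[i] in unique_frames."""
    unique, index, ids = [], {}, []
    for frame in frames:
        if frame not in index:
            index[frame] = len(unique)
            unique.append(frame)
        ids.append(index[frame])
    return unique, ids

# Three instances collapse to two stored frames:
unique, ids = dedupe_frames(["Somebody %s", "Something %s", "Somebody %s"])
assert unique == ["Somebody %s", "Something %s"] and ids == [0, 1, 0]
```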

Model and build ILIs as distinct from lexicon

Anticipating a future (the future is now) where we have a versioned release of CILI, I would like to build the ILI database from that release and not pieced together from individual wordnets. This is where the ILI definition data would come from, and when reading an LMF, the ILIs associated with synsets would only be checked for validation (e.g., that all declared ILIs exist in the ILI table, otherwise throw a warning and ignore them).

This would require the release data to be published in some way. See globalwordnet/cili#4

Decouple resource index from software version

The download URLs of wordnet projects are stored in the wn._config.py source, but if a link goes down and we want to point to another URL, users would need to update the code to get the fix. It therefore makes sense to have Wn download the index or config file that points to these resources, so that the links can later be refreshed without updating the code.

LMF 1.1

When the LMF 1.1 format has stabilized it should be loadable by the wn.lmf module. The module should continue to support LMF 1.0.

Changes include:

  • <LexiconExtension> under <LexicalResource>
    • <Extends>
    • <Requires>
    • <ExternalLexicalEntry>
    • <ExternalSense>
    • <ExternalSynset>
    • <ExternalLemma> ?
    • <ExternalForm> ?
  • <Requires> under <Lexicon>
  • logo attribute on <Lexicon>
  • <Pronunciation> under <Lemma> and <Form>
  • subcat attribute on <Sense>
  • members attribute on <Synset>
  • lexfile attribute on <Synset>
  • id attribute on <SyntacticBehaviour>
  • Additional sense relation types (are these final?)
    • simple_aspect_ip
    • secondary_aspect_ip
    • feminine_form
    • has_feminine_form
    • masculine_form
    • has_masculine_form
    • young_form
    • has_young_form
    • diminutive
    • has_diminutive
    • augmentative
    • has_augmentative
    • anto_gradable
    • anto_simple
    • anto_converse
  • Additional synset relation types (are these final?)
    • feminine_form
    • has_feminine_form
    • masculine_form
    • has_masculine_form
    • young_form
    • has_young_form
    • diminutive
    • has_diminutive
    • augmentative
    • has_augmentative
    • anto_gradable
    • anto_simple
    • anto_converse
    • ir_synonym

implement lowest_common_hypernyms()

Also called least_common_subsumer(). I think this is only defined for hypernyms + instance_hypernyms, but a generalized solution could possibly cover other hierarchical relations like meronymy.
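One way to sketch this over a toy hypernym mapping (plain dicts, not the Wn API): collect each synset's ancestors with their shortest hypernym distance, then keep the shared ancestors at minimal combined distance. This is one reasonable definition of "lowest"; the real method may differ.

```python
from collections import deque

def ancestors(graph: dict, start: str) -> dict:
    """Map each ancestor (including start) to its shortest hypernym distance."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for hyper in graph.get(node, []):
            if hyper not in dist:
                dist[hyper] = dist[node] + 1
                queue.append(hyper)
    return dist

def lowest_common_hypernyms(graph: dict, a: str, b: str) -> list:
    da, db = ancestors(graph, a), ancestors(graph, b)
    common = set(da) & set(db)
    if not common:
        return []
    best = min(da[s] + db[s] for s in common)
    return sorted(s for s in common if da[s] + db[s] == best)

graph = {"dog": ["canine"], "canine": ["carnivore"],
         "cat": ["feline"], "feline": ["carnivore"]}
assert lowest_common_hypernyms(graph, "dog", "cat") == ["carnivore"]
```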

Add depth functions

"Depth" is defined in terms of the distance from a synset to a root via hypernyms. Since there can be multiple hypernym paths to a root, there are separate notions of a max-depth and min-depth.
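The two notions fall out of enumerating all hypernym path lengths; a sketch over a toy hypernym mapping (plain dicts, not the Wn API), where a synset with no hypernyms is a root at depth 0:

```python
def depths(graph: dict, synset: str) -> list:
    """Return the lengths of all hypernym paths from *synset* to a root."""
    hypers = graph.get(synset, [])
    if not hypers:
        return [0]
    return [d + 1 for h in hypers for d in depths(graph, h)]

graph = {
    "puppy": ["dog"],
    "dog": ["canine", "domestic_animal"],  # two paths to a root
    "canine": ["animal"],
    "domestic_animal": [],
}
print(max(depths(graph, "puppy")), min(depths(graph, "puppy")))  # 3 2
```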

Form-normalization strategies

There needs to be a strategy for morphological normalization, for instance the use of Morphy in English-language lookup. Some ideas:

  • a form-lookup table separate from the actual database
    • irregular forms
    • transliterations
    • case-folded
    • diacritics removed
  • a normalization function that tries to get the form stored in the actual database

lgcode is not filtering as intended

It seems to do nothing when the lgcode is not used by any lexicon.

>>> wn.synsets('inu', lgcode='ja')  # this is the correct one
[Synset('wnja-02084071-n'), Synset('wnja-10641755-n')]
>>> wn.synsets('inu', lgcode='jp')  # this is not actually a lgcode
[Synset('wnja-02084071-n'), Synset('wnja-10641755-n')]
>>> wn.synsets('inu', lgcode='jpn')  # nor is this
[Synset('wnja-02084071-n'), Synset('wnja-10641755-n')]
>>> wn.synsets('inu', lgcode='fr')  # this is a real one that doesn't have the word
[]

Translate synsets across lexicons via ili

Since synsets are not shared across lexicons, there needs to be a simple way to traverse the ILI to get to equivalent synsets in some other lexicon. The implementation is simple, but I'm not sure what to call it. Synset.translate()?
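Whatever the method ends up being called, the ILI hop itself is simple; a sketch over toy data (the synset ids and ILI values below are made up, and the real API would query the database rather than dicts):

```python
def translate(synset_id: str, source_ilis: dict, target_ilis: dict) -> list:
    """Return target-lexicon synset ids sharing the source synset's ILI."""
    ili = source_ilis[synset_id]
    return [sid for sid, i in target_ilis.items() if i == ili]

en = {"en-dog-n": "i12345"}
ja = {"ja-0001-n": "i12345", "ja-0002-n": "i67890"}
assert translate("en-dog-n", en, ja) == ["ja-0001-n"]
```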

Contributor documentation

Setup a CONTRIBUTING.md file with things like:

  • How to get help
  • How to file issues
  • How to build and test
  • How to build documentation
  • Versioning scheme (semver)
  • Branching scheme (GitHub Flow)
  • Changelog scheme (https://keepachangelog.com/, mostly)

Order synsets by sense enumeration

The synsets in the LMF file for some word don't always appear in the order given by their sense list on the lexical entry, but the current search orders by <Synset> appearance. The senses are only used when searching for synsets by word form, so maybe two kinds of queries need to be written.

Add command-line usage

Some tasks, such as adding a new wordnet, are trivially done via the command line:

$ python -c 'import wn; wn.download("ewn:2020")'

But this isn't very user friendly:

  • The -c option to Python may be less well-known than -m
  • There's no argparse help for incorrect commands
  • Running a query like wn.synsets(...) won't print anything unless the user does print(wn.synsets(...))
  • Users must be careful about mixing quotes inside the string

Adding a __main__.py file with a basic command-line interface could help here. I'm not yet sold on making a wn command, as that's the name of the Princeton wordnet utility.

The command-line interface would be convenient with subcommands:

$ python -m wn download ewn:2020
$ python -m wn add ../odenet/odenet/wordnet/deWordNet.xml
$ python -m wn words --lgcode=en cat
$ python -m wn words --lgcode=en cat --translate=ja

Recreate simulate_root functionality from NLTK

At first I hoped to avoid things like simulate_root, but I think that some people may depend on that functionality so it should probably go in.

The fake root synset will be an empty synset without an ILI, but pos is required, so it might make sense to just choose the pos of the synset needing the fake root (e.g., the one requesting the hypernym paths).


Database possibly not being initialized on Windows

It has been reported that Wn does not seem to initialize the database if the file is missing. See below:

>>> import wn
>>> for lexicon in wn.lexicons():
...     print (lexicon.id, lexicon.version, lexicon.label)

Running the above code without having a database led to the following error (username redacted):

  File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python39\lib\site-packages\wn\_db.py", line 109, in _connect
    conn = sqlite3.connect(dbpath)
sqlite3.OperationalError: unable to open database file

I am unable to reproduce this on Linux, but there might be some kind of permissions issue that prevents the database from being created.

Lookup synsets via ILI

The ILI ids are indexed on synsets, so it would be trivial to look them up this way. I guess this would be a parameter on WordNet.synsets().

List installed lexicons

A user may wish to view which projects and lexicons are currently installed. There should be a public API function to accomplish this.

Function to find root synsets

Finding root synsets is not simple. We could just look for all synsets that don't have any hypernym or instance hypernym relations, which would be fast if done on the database, but this does not cover lexicons whose relations come from another resource. It is simple to do this in the regular API:

[ss for ss in wordnet.synsets() if not ss.hypernyms()]

... but this is very slow due to all the separate DB hits. So it seems we need to write the expanded-relation traversal logic in the DB side of things.
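A sketch of the single-query approach over a toy SQLite schema (the table and column names here are hypothetical, not Wn's actual schema), finding synsets that are never the source of a hypernym relation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE synsets (id TEXT PRIMARY KEY);
    CREATE TABLE relations (source TEXT, target TEXT, type TEXT);
    INSERT INTO synsets VALUES ('animal'), ('dog'), ('entity');
    INSERT INTO relations VALUES ('dog', 'animal', 'hypernym'),
                                 ('animal', 'entity', 'hypernym');
""")
roots = sorted(row[0] for row in conn.execute("""
    SELECT id FROM synsets
     WHERE id NOT IN (SELECT source FROM relations
                       WHERE type IN ('hypernym', 'instance_hypernym'))
"""))
assert roots == ["entity"]
```

Extending this to expanded (cross-lexicon) relations is the hard part the issue describes, since those hypernym rows live in another lexicon.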

Index Princeton WordNet files

The Princeton WordNet 3.0 (and maybe 3.1) is still used as the base for many other projects. It would therefore be good to distribute a WN-LMF version of these lexicons.

Add docstrings to the Word class

The wn.Word class and its methods and attributes do not have any docstrings. They need at least a 1-line summary, but examples are also nice.

  • wn.Word
    • lemma
    • forms
    • senses
    • synsets
    • derived
    • translate
    • lexicon

Feature to modify wordnets in the database

Updated:

This issue is for tracking the feature for modifying wordnets in the database through Wn. Currently the feature has low priority and won't be implemented unless there's a need.

Anyone who wants this feature please read the following:

If you have a use case where the lack of modifiable wordnets in Wn is holding you back, please:

  1. Explain your situation in a comment
  2. Indicate if you are a wordnet author/lexicographer or have some other role

Original issue text:

For example, add, modify or delete words, senses, synsets or relations, ...

wn.download(url) no longer works

The recent changes to allow WN-LMF packages and collections seem to have broken wn.download() with a URL argument. The reason is that wn.add() (used by wn.download()) expects a plaintext XML file to have a .xml suffix, whereas before it didn't check the suffix, and the downloaded files' names are a hexadecimal hash. One solution is to put .xml at the end of the filename (assuming it was originally). Another is to make the hashed path into a directory and download the file into the directory with its original filename (if available), thus making it a de facto package.

Add docstrings to the Sense class

The wn.Sense class and its methods and attributes do not have any docstrings. They need at least a 1-line summary, but examples are also nice.

  • wn.Sense
    • word
    • synset
    • get_related
    • get_related_synsets
    • translate
    • closure
    • relation_paths
    • lexicon

Add path similarity

The path-similarity of two synsets, in the NLTK, is 1 / (shortest-path-distance + 1). If there is no path, it is None (or should it be infinity?).

This depends on #24.
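A sketch of the formula over a toy undirected graph (plain dicts, not the Wn API), treating hypernym links as bidirectional edges for the distance computation:

```python
from collections import deque

def shortest_path_distance(edges: dict, a: str, b: str):
    """BFS over an undirected graph given as {node: [neighbors]}."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return dist[node]
        for nbr in edges.get(node, []):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return None  # no path

def path_similarity(edges: dict, a: str, b: str):
    d = shortest_path_distance(edges, a, b)
    return None if d is None else 1 / (d + 1)

edges = {"dog": ["canine"], "canine": ["dog", "cat"], "cat": ["canine"]}
assert path_similarity(edges, "dog", "cat") == 1 / 3  # distance 2
```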

Add docstrings to the Synset class

The wn.Synset class and its methods and attributes do not have any docstrings. They need at least a 1-line summary, but examples are also nice.

  • wn.Synset
    • empty
    • definition
    • examples
    • senses
    • words
    • lemmas
    • get_related
    • hypernym_paths
    • min_depth
    • max_depth
    • shortest_path
    • common_hypernyms
    • lowest_common_hypernyms
    • holonyms
    • meronyms
    • hypernyms
    • hyponyms
    • translate
    • closure
    • relation_paths
    • lexicon

Use int for ILI ids internally

ILIs in WN-LMF are strings like i123 and have some special values like the empty string for a synset without an ILI and "in" for a proposed ILI. The special values don't go into an ILI table like the actual values, and the actual values are integers if you ignore the "i" prefix, so they can be used directly in the database as rowids. If this is safe to do, then it could save some lookups and some space in the database.
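A sketch of the proposed mapping (ili_rowid is a hypothetical helper): concrete ILIs like i123 become integer rowids, while the special values stay out of the ILI table.

```python
def ili_rowid(ili: str):
    """Return an int rowid for a concrete ILI string, else None."""
    if ili and ili != "in" and ili[0] == "i" and ili[1:].isdigit():
        return int(ili[1:])
    return None  # no ILI ('') or proposed ILI ('in'): not in the ILI table

assert ili_rowid("i123") == 123
assert ili_rowid("") is None and ili_rowid("in") is None
```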

Add function to compute the shortest hypernym path

In theory there could be a shortest path of any relation, but it seems to be most pertinent to hypernyms.

The NLTK has Synset.shortest_path_distance(), although the function to actually get the shortest path is non-public.

Add information content (IC)

Three of the similarity measures require information content to work. The IC that is shipped with the NLTK's wordnet data is based on synset offsets, so those will need to be mapped somehow to something that this module uses.

Shortcut to list word forms

The NLTK's synset.lemma_names() is convenient for getting just the word forms for each entry (lemma) in the synset. I think we can just say Synset.lemmas() to do the same as [w.lemma() for w in synset.words()]. If someone actually wanted all word forms, then the [form for w in synset.words() for form in w.forms()] comprehension works well enough, but if there's a need there could be a Synset.word_forms() method.
