
python-qinfer's Introduction

Welcome to QInfer


QInfer is a library for quantum parameter estimation using Bayesian sequential Monte Carlo. It works with Python 2.7, 3.3, 3.4, and 3.5.

Installing QInfer

We recommend using QInfer with the Anaconda distribution. Download and install Anaconda for your platform, choosing either Python 2.7 or 3.5. We suggest Python 3.5, but QInfer works with either.

If you are using Anaconda, go ahead now and install as many of the dependencies as you can from its repository; if you are using "regular" Python, skip this step. Replace python=3.5 with your version (typically either 2.7 or 3.5).

$ conda install python=3.5 numpy scipy matplotlib scikit-learn

If you are instead using "regular" Python rather than Anaconda, and you are on Linux, you will need the Python development package:

$ sudo apt-get install python-dev

The package may be named python3.5-dev or similar, depending on your package manager and which version of Python you are using.

The latest release of QInfer can now be installed from PyPI with pip:

$ pip install qinfer

Alternatively, QInfer can be installed using pip and Git. Ensure that you have Git installed. On Windows, we suggest the official Git downloads. Once Anaconda and Git are installed, simply run pip to install QInfer:

$ pip install git+https://github.com/QInfer/python-qinfer.git

Lastly, QInfer can be installed manually by downloading from GitHub, then running the provided installer:

$ git clone https://github.com/QInfer/python-qinfer.git
$ cd python-qinfer
$ pip install -r requirements.txt
$ python setup.py install
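
Whichever method you use, a quick way to confirm the installation is to import the package and ask pip for its metadata (a minimal smoke test; the import should exit silently if everything is in place):

$ python -c "import qinfer"
$ pip show qinfer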

More Information

Full documentation for QInfer is available on ReadTheDocs, or may be built locally by running the documentation build script in doc/:

$ cd /path/to/qinfer/doc/
$ make html

On Windows:

C:\> cd C:\path\to\qinfer\
C:\path\to\qinfer\> make.bat html

The generated documentation can be viewed by opening doc/_build/html/index.html.

python-qinfer's Issues

Old Distributions

There are a couple of distributions in qinfer/distributions.py that I think have been superseded by new ones in qinfer/tomography/distributions.py and should be deprecated:

  • HaarUniform
  • GinibreUniform
  • HilbertSchmidtUniform

Update installation instructions to point to Anaconda, git+https

As pointed out recently, the installation instructions are rather out of date and should be updated. In particular, using git clone with a git+ssh URL requires that a user have their SSH public keys registered with GitHub, so we should instead list an HTTPS URL for cloning. We could also minimize the number of required package installs (SciPy, Matplotlib, etc.) by recommending that users install a scientific Python distribution such as Anaconda; we could perhaps also mention Enthought Canopy as an unsupported but potentially useful alternative.

Bug in `qinfer.utils.in_ellipsoid` ?

Hello,
I've been playing with QInfer for its minimum-volume enclosing ellipsoid algorithm. There seems to be a bug in the in_ellipsoid function: none of the points used to compute the ellipsoid are reported as being inside it.

Here's a snippet with the data I used. I have no idea whether it makes sense, but using A instead of its inverse in the distance computation seems to give a better-fitting distance (although some points would still be outside the ellipsoid).

import numpy as np
import qinfer

# 7 points in 8 dimensions
points = np.array([[9.61671088e+01, 3.09532270e+00, 3.59059099e+00, 3.52696730e+00,
        4.68305129e+00, 4.92967759e+01, 1.52589430e+00, 1.77004559e+00],
       [1.73990050e+02, 2.98332674e+00, 4.39375052e+00, 2.91588476e+00,
        4.07209379e+00, 3.55407545e+01, 1.06029683e+00, 1.56157208e+00],
       [2.37193968e+02, 9.17352808e-01, 1.35086160e+00, 7.16079767e-01,
        1.02764937e+00, 2.94583624e+01, 2.70237115e-01, 3.97941707e-01],
       [1.47826437e+02, 1.08761428e+00, 1.75354413e+00, 1.32319382e+00,
        2.81922693e+00, 3.98934693e+01, 4.33887070e-01, 6.99549591e-01],
       [6.32894555e+02, 2.45578343e+00, 3.75805515e+00, 8.68867797e+00,
        1.14160928e+01, 1.35298152e+01, 3.32262960e-01, 5.08457917e-01],
       [2.01686471e+02, 1.55523380e+00, 1.76140513e+00, 1.37662800e+00,
        1.55630744e+00, 3.27865510e+01, 5.09907521e-01, 5.77503991e-01],
       [1.86267513e+02, 2.65450049e+00, 3.72240851e+00, 3.18004405e+00,
        5.36798721e+00, 3.41711622e+01, 9.07073667e-01, 1.27199025e+00]])
A, c = qinfer.utils.mvee(points, 0.001)
print("in_ellipsoid?", [qinfer.utils.in_ellipsoid(p, A, c) for p in points])
print("distance", [
    np.einsum('j,jl,l', c-p, np.linalg.inv(A), c-p)
    for p in points
])
print("distance fixed?", [
    np.einsum('j,jl,l', c-p, A, c-p)
    for p in points
])

Output:

in_ellipsoid? [False, False, False, False, False, False, False]
distance [13612175772.118567, 5243680294.202738, 1343921122.9179094, 7623619437.438548, 34964549033.975334, 3221636117.3242393, 4282303012.8117304]
distance fixed? [1.140625, 0.6796875, 0.71484375, 1.26025390625, 0.3359375, 0.94921875, 0.931640625]

Is this project dead?

I haven't gotten feedback on this project in 2 months. I can understand if this project is no longer in development, but I feel that qinfer does not yet have the web utilities needed to survive archiving.

It would be a good idea to pip freeze the project in a virtual environment and use requires.io to track its dependencies. This would mean that even if one of our libraries were superseded by a major release, we wouldn't have any broken code. The repo would, however, get a notice and a happy little badge stating that the code is out of date.

Create new branch for Generalized Outcomes

As most QInfer developers know, @ihincks and I have been working on extending the features and ideas of QInfer to a more general class of probability distributions, as discussed in #66. My fork's branch is now at what I believe is a reasonable stage to begin the process of code analysis and review for eventual incorporation into the main branch.

The core functionality of SMCUpdater's particle filter generalises to arbitrary-domain probability distributions quite naturally. The difficulty in modifying the QInfer code for these new domains has mostly been in the variety of metrics that involve taking expectation values over outcomes, such as smc.SMCUpdater.bayes_risk, smc.SMCUpdater.expected_information_gain, and abstract_model.DifferentiableModel.fisher_information. We have attempted to solve these issues primarily with MCMC integration techniques; I will provide a document outlining these procedures at a later time. All tests are currently passing, the branch is merged with and up to date with master, all code is commented, and I am in the process of using this branch in my own work for data analysis.

I believe at this stage it would be good to create a new branch in this repository to pull my fork into, so that we can commence review and implement the necessary changes to bring it into line in a supervised manner.

P.S. @cgranade, sorry for taking so long to get around to this. There was always one more thing to do...
-Thomas

Plotting Upgrade

It might be worthwhile looking into doing our plots through seaborn; they have some nice stuff.

See this for example, which is only four lines of code.

Truth value of an array with more than one element is ambiguous.

Hi all, I'm getting an error with qinfer that I don't know what to do with. Sometimes this gets thrown, sometimes not. I was wondering if someone could have a look at it. I am using the TomographyModel with my own heuristic.

Traceback (most recent call last):
  File "qinfer_test.py", line 69, in <module>
    tally[detect(true_state)] += 1
  File "qinfer_test.py", line 58, in detect
    updater.update(data, experiment)
  File "/home/jayce/.local/lib/python3.5/site-packages/qinfer/smc.py", line 425, in update
    weights, norm = self.hypothetical_update(outcome, expparams, return_normalization=True)
  File "/home/jayce/.local/lib/python3.5/site-packages/qinfer/smc.py", line 365, in hypothetical_update
    L = self.model.likelihood(outcomes, locs, expparams).transpose([0, 2, 1])
  File "/home/jayce/.local/lib/python3.5/site-packages/qinfer/tomography/models.py", line 217, in likelihood
    return FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, 1 - pr1)
  File "/home/jayce/.local/lib/python3.5/site-packages/qinfer/abstract_model.py", line 675, in pr0_to_likelihood_array
    for idx in range(safe_shape(outcomes))
  File "/home/jayce/.local/lib/python3.5/site-packages/qinfer/abstract_model.py", line 675, in <listcomp>
    for idx in range(safe_shape(outcomes))
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

plot_rebit_posterior raises TypeError

In plot_rebit_posterior, the default setting is to call plot_cov_ellipse, which uses np.linalg.eigh to find eigenvectors of the covariance. This can return vecs with complex entries, which then fail on line 146 of plotting_tools.py:

theta = np.degrees(np.arctan2(*vecs[:,0][::-1]))

Should it just take the real part, as in:

theta = np.degrees(np.arctan2(*vecs.real[:,0][::-1]))

Parallel serial threshold incorrect

I would just submit a PR, but am a bit confused about which branch to do it on at this point. Anyway, around line 180 of parallel.py, it should be index 0, I think:

if modelparams.shape[0] <= self._serial_threshold:

(Also fix the theshold typo while we're at it.)

Resamplers Have No Underlying Abstract Class

LiuWestResampler and ClusteringResampler currently do not share an underlying abstract base class. This will raise issues with future modifications, such as changing the n_particles being resampled. A simple ABC should probably be implemented in the future.
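
A minimal sketch of what such an ABC might look like (hypothetical: the class name Resampler and the __call__ signature are assumptions modeled on how resamplers are invoked in smc.py, and Python 2 support would need something like future.utils.with_metaclass):

from abc import ABCMeta, abstractmethod

class Resampler(metaclass=ABCMeta):
    # Hypothetical abstract base class; not part of QInfer's current API.

    @abstractmethod
    def __call__(self, model, particle_weights, particle_locations,
                 n_particles=None):
        """Return (new_weights, new_locations); n_particles, if given,
        overrides the number of particles to return."""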

Update readme re:python3

Your Python 3 stuff in the readme is out of date now that the python3_support branch has been merged :)

Remove n_outcomes restriction

Just want to put this here, not necessarily because it is urgent, but because it has come up as a problem for a few people.

Models have a method called n_outcomes which specifies the number of outcomes for an array of expparams. Sometimes models do not have a finite number of outcomes. For example, measurement of NV centers involves drawing Poisson random variates, which have an infinite but discrete set of outcomes, and measurements in NMR are continuous-valued.

Correct me if I am wrong, but searching all source files as of 2c86c94 for the string n_outcomes, the only place where it is used rather than defined is the updater code in smc.py.

Liu-West Resampler returns wrong shape weights when changing n_particles

On line 358 of resamplers.py, the new weights array to be returned is calculated as np.ones((w.shape[0],)) / w.shape[0], where w is the weight array of the original (pre-resampling) particle approximation. This fails, however, if the resampler is called with an explicit n_particles argument.
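
A sketch of the shape mismatch and the obvious fix (illustrative only; w and n_particles here are stand-ins for the variables in resamplers.py):

import numpy as np

w = np.ones((1000,)) / 1000   # original (pre-resampling) weights
n_particles = 2000            # explicit argument passed to the resampler

# Current behaviour sizes the new uniform weights by the *old* count:
broken = np.ones((w.shape[0],)) / w.shape[0]    # shape (1000,)

# Sizing by the requested count keeps the returned arrays consistent:
fixed = np.ones((n_particles,)) / n_particles   # shape (2000,)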

SMCUpdaterBCRB Should Have Properties for current_bim and adaptive_bim

The relevant variables in SMCUpdaterBCRB are difficult to find, and it is hard to understand how to use them without digging through the source code, so long as they are only instance variables. Making them properties would allow for documentation and better readability; a sketch is given after the snippet below.

    # Before we update, we need to commit the new Bayesian information
    # matrix corresponding to the measurement we just made.
    self.current_bim += self.prior_bayes_information(expparams)[:, :, 0]

    # If we're tracking the information content accessible to adaptive
    # algorithms, then we must use the current posterior as the prior
    # for the next step, then add that accordingly.
    if self._track_adaptive:
        self.adaptive_bim += self.posterior_bayes_information(expparams)[:, :, 0]

    # We now can update as normal.
    SMCUpdater.update(self, outcome, expparams)
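
A minimal sketch of how such properties might look (hypothetical; it assumes the matrices are stored in private attributes such as _current_bim and _adaptive_bim, which may not match the actual field names):

from qinfer import SMCUpdater

class SMCUpdaterBCRB(SMCUpdater):
    # Sketch only: exposes the Bayesian information matrices as documented,
    # read-only properties instead of bare instance variables.

    @property
    def current_bim(self):
        """Bayesian information matrix accumulated from all data so far."""
        return self._current_bim

    @property
    def adaptive_bim(self):
        """Bayesian information matrix accessible to adaptive algorithms;
        only tracked when _track_adaptive is True."""
        return self._adaptive_bim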

If you think this is a problem, I can fix it and submit a PR.

-Thomas

Issues plotting covariance estimation_results['posterior'].plot_covariance()

When trying to plot the covariance, I ran into the following issue:

~\AppData\Local\Continuum\Anaconda3\envs\qsharp-samples\lib\site-packages\qinfer\smc.py in plot_covariance(self, corr, param_slice, tick_labels, tick_params)
   1159         cov = self.est_covariance_mtx(corr=corr)[param_slice, param_slice]
   1160 
-> 1161         retval = mpls.hinton(cov)
   1162         plt.xticks(*tick_labels, **(tick_params if tick_params is not None else {}))
   1163         plt.yticks(*tick_labels, **(tick_params if tick_params is not None else {}))

~\AppData\Local\Continuum\Anaconda3\envs\qsharp-samples\lib\site-packages\mpltools\special\hinton.py in hinton(inarray, max_value, use_default_ticks)
     54 
     55     ax = plt.gca()
---> 56     ax.set_axis_bgcolor('gray')
     57     # make sure we're working with a numpy array, not a numpy matrix
     58     inarray = np.asarray(inarray)

AttributeError: 'AxesSubplot' object has no attribute 'set_axis_bgcolor'

https://github.com/QInfer/python-qinfer/blob/e90cc57d50f1b48148dbd0c671eff6246dda6c31/src/qinfer/smc.py

Recent versions of matplotlib have replaced set_axis_bgcolor with set_facecolor; mpltools still uses the old attribute.
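
Until mpltools is updated, one stopgap is to restore the removed method as an alias before plotting (a workaround sketch, not an official fix):

from matplotlib.axes import Axes

# Newer matplotlib removed set_axis_bgcolor in favour of set_facecolor;
# re-adding the old name lets mpltools' hinton() run unmodified.
if not hasattr(Axes, 'set_axis_bgcolor'):
    Axes.set_axis_bgcolor = Axes.set_facecolor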

LaTeX documentation build failure.

When using Sphinx to build the PDF User's Guide, the compilation fails due to the API documentation for the new in_credible_region:

Underfull \hbox (badness 10000) in paragraph at lines 6178--6180
[]\T1/pcr/b/n/10 level \T1/ptm/m/n/10 ([][]\T1/pcr/m/sl/10 float[][]\T1/ptm/m/n
/10 ) -- The de-sired cred-i-b-lity level (see

! LaTeX Error: Too deeply nested.

See the LaTeX manual or LaTeX Companion for explanation.
Type  H <return>  for immediate help.
 ...

l.6186 \item {} \begin{description}

Change the parent class of SMCUpdater

A bunch of the functionality of SMCUpdater is not directly related to the SMC algorithm itself. For example, some methods (e.g. sample(), est_mean()) could be moved into a new ParticleDistribution class (remember that SMCUpdater inherits from Distribution).

I bring this up because I am considering different updaters which wouldn't naturally inherit from SMCUpdater, but would benefit from large chunks of the code.

TomographyModel expparams_dtype broken in Python 2

The Python 3 port at one point required unicode_literals to be turned on for tomography/models.py, landing us right in the fallout from numpy/numpy#2407. As a result, TomographyModel.expparams_dtype is not understood by np.empty, np.array and friends:

TypeError                                 Traceback (most recent call last)
<ipython-input-82-aeaa035a166b> in <module>()
      1 updater = qi.SMCUpdater(model, 20000, prior)
----> 2 expparams = np.empty((1,), dtype=model.expparams_dtype)

TypeError: data type not understood

Until we can fix this, a workaround is to manually create expparam arrays with field names specified as str.
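
Concretely, the workaround looks something like this (a sketch; the field name 't' is hypothetical and should match your model's actual expparams_dtype):

from __future__ import unicode_literals
import numpy as np

# Wrapping each field name in str() yields bytes under Python 2, dodging
# the unicode field names that numpy rejects (numpy/numpy#2407).
dtype = [(str('t'), float)]
expparams = np.empty((1,), dtype=dtype)
expparams['t'] = 12.0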

Warning: Could not import IPython parallel

I am getting the following warning when I type from qinfer import simple_est_prec.

/home/shehab/anaconda2/envs/py36/lib/python3.6/site-packages/IPython/parallel.py:13: ShimWarning: The `IPython.parallel` package has been deprecated since IPython 4.0. You should import from ipyparallel instead.
  "You should import from ipyparallel instead.", ShimWarning)
/home/shehab/anaconda2/envs/py36/lib/python3.6/site-packages/qinfer/parallel.py:52: UserWarning: Could not import IPython parallel. Parallelization support will be disabled.
  "Could not import IPython parallel. "

Travis CI DOCTEST=1 build broken

As encountered in the CI builds for #106, upstream changes in QuTiP have prevented doctest builds from running correctly. I think it makes sense to switch the doctest environment to install the released QuTiP version instead of the one from their Git master, now that the released version has everything we need for tomography support.

Region estimation troubles

This applies to the two methods region_est_hull and region_est_ellipsoid in SMCUpdater.

I get a generic Qhull error when I try to use them.

If you check the documentation for Delaunay triangulation, you can see there is some kind of switch for ndim > 4 in terms of the arguments passed to the underlying Qhull library.

IPython.parallel deprecated

I just updated my Anaconda 2.7 and installed qinfer, and got the following warnings:

.../IPython/parallel.py:13: ShimWarning: The `IPython.parallel` package has been deprecated. You should import from ipyparallel instead.
  "You should import from ipyparallel instead.", ShimWarning)
.../python-qinfer/src/qinfer/parallel.py:52: UserWarning: Could not import IPython parallel. Parallelization support will be disabled.
  "Could not import IPython parallel. "

Doesn't seem to be a big deal, but since I didn't see anything about it in the other issues I thought I'd bring it up.

Canonicalize after update_timestep?

We may want to consider making it the default behaviour (when _canonicalize is True) for SMCUpdater.update to call canonicalize on the modelparams after a non-trivial application of update_timestep.

Feature: model parameter reparameterizations

Sometimes the natural parameterization of your model and the parameterization that is best for sampling do not agree. There are a number of reasons this might be the case, off the top of my head:

  • distributions are more similar to a Gaussian (which Liu-West likes) in the unnatural parameterization (a beta distribution does not look Gaussian in many regimes, but reparameterized through a logistic sigmoid it does)
  • distributions are less correlated in unnatural parameterization
  • model parameters are easier to bound in unnatural parametrization
  • canonicalize and are_models_valid can be made easier to implement
  • timestep updates are simpler to implement

MCMC libraries such as Stan have built-in functionality to reparameterize bounded parameters and objects such as simplices.

I was originally thinking this would make sense as a DerivedModel. After some more careful consideration, I think it makes much more sense as an abstract class ModelReparameterization with methods to_natural and from_natural. Instances of this class would be given to an SMCUpdaterReparameterized, subclassed from SMCUpdater; a sketch is given below.
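
A sketch of the proposed interface, together with the logistic example from the list above (hypothetical; class and method names follow the proposal, and nothing here exists in QInfer yet):

from abc import ABCMeta, abstractmethod
import numpy as np

class ModelReparameterization(metaclass=ABCMeta):
    @abstractmethod
    def to_natural(self, modelparams):
        """Map sampling-space parameters to the model's natural space."""

    @abstractmethod
    def from_natural(self, modelparams):
        """Map natural-space parameters back to the sampling space."""

class LogisticReparameterization(ModelReparameterization):
    # Parameters bounded in (0, 1) become unbounded in sampling space,
    # where they tend to look more Gaussian to Liu-West resampling.
    def to_natural(self, modelparams):
        return 1 / (1 + np.exp(-modelparams))

    def from_natural(self, modelparams):
        return np.log(modelparams / (1 - modelparams))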

Let me know what you think.

Travis CI, Coveralls, and Unit Testing

I was thinking we could improve the quality of this library in general by putting it on Travis CI. In an effort to understand some of the code here, I took it upon myself to write unit tests for it, and put it on Travis to get build data and on Coveralls to get code-coverage data. This will also help us support Python 3. Furthermore, if we want to contribute some of this code to qutip, we should have the right tools in place to assure code quality.

Can I contribute to this project by merging in my build environment? Is this something that should be considered as the future of this project?

Feature: duecredit integration

Since we now have a paper up on arXiv and Zenodo-issued DOIs, we may want to use duecredit to embed metadata for those citations. As a part of that, it might be worth filing a PR with duecredit itself to add injections for other dependencies. If we do decide to add duecredit annotations, we could possibly use their tags functionality to separate theory and implementation citations. Thoughts?

Domains Errors

Currently, domains will fail when min/max values are left at their default of None because bounds have not been set. This causes various methods to fail, either from attempting to cast None to int or from comparing against None. This may be simply fixed by defaulting the bounds to ±np.inf.
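
A sketch of the proposed fix (illustrative; the attribute names are placeholders for those in QInfer's domain classes):

import numpy as np

# Defaulting unset bounds to +/- infinity instead of None keeps
# comparisons well defined:
min_value = -np.inf   # instead of None
max_value = np.inf    # instead of None

def in_bounds(x):
    # No longer fails with a TypeError on a None comparison.
    return min_value <= x <= max_value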

Importing from qinfer.tests

I want to do this:

from qinfer.tests import test_model

But qinfer.tests is not exposed in the package's __init__. This particular function (test_model) is meant to be used by users on custom models. Should it be moved, imported specially, or something else?

Liu-West resampling: Infinite error in covariance estimation

When covariance matrices are dominated by uncertainty in a low-dimensional subspace, scipy.linalg.sqrtm can sometimes report infinite error even if n_ess is large. I suspect that this can be solved by replacing sqrtm with an eigendecomposition that exploits the Hermiticity of the covariance matrix, but this should be tested for accuracy.
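
A sketch of the suggested replacement (untested for accuracy, as noted above; assumes cov is a Hermitian positive-semidefinite covariance matrix):

import numpy as np

def sqrtm_psd(cov):
    # Eigendecomposition-based square root exploiting Hermiticity; the clip
    # guards against tiny negative eigenvalues introduced by round-off.
    vals, vecs = np.linalg.eigh(cov)
    vals = np.clip(vals, 0, None)
    return np.dot(vecs * np.sqrt(vals), vecs.conj().T)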

Resample forces particle weights to uniform

In the resample function in smc.py, new particles and weights are requested from the resampler. However, several lines down, the new weights are overwritten with uniform weights. This behaviour seems undesirable, as the author of a new resampler would expect the returned weights to become the new updater weights; the Liu-West resampler, for example, should simply return uniform weights itself.

    self.particle_weights, self.particle_locations = \
        self.resampler(self.model, self.particle_weights, self.particle_locations)

    # Possibly canonicalize, if we've been asked to do so.
    if self._canonicalize:
        self.particle_locations[:, :] = self.model.canonicalize(self.particle_locations)

    # Reset the weights to uniform.
    self.particle_weights[:] = (1/self.n_particles)

I will submit a small patch removing the above line.

Make underlying_model a property of abstract_models.Model whose value is self

One problem that I have run into several times is trying to access some property of a model hidden under a couple of layers of derived/parallel models, but not knowing exactly how many layers deep I need to go, because I am trying to keep my code general. This could be solved, I think, by adding the base case described in this issue's title: each model points to the model underneath it until the chain reaches the bottom.
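
A sketch of the proposed base case and the loop it enables (hypothetical; it assumes derived models already expose an underlying_model property pointing one layer down):

class Model(object):
    # Sketch: a base model is its own underlying model, so the chain of
    # derived/parallel models always terminates.
    @property
    def underlying_model(self):
        return self

def base_model(model):
    # Unwrap however many derived/parallel layers there happen to be.
    while model.underlying_model is not model:
        model = model.underlying_model
    return model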

Convert examples to IPython Notebooks

The example code can be converted to use IPython Notebooks, so that we can document, embed plots, etc. This would also let us remove docopt from the repo, which would be nice.

Expose common functionality in qinfer/__init__.py.

Currently, it takes several different import statements to pull in a reasonable subset of QInfer. The division into smc, abstract_model, etc. is useful in development and in disambiguating, but is rather useless to the user. Listing common functionality in qinfer/__init__.py should help with this, so that you can do something like the following:

>>> import qinfer as qi
>>> model = qi.SimplePrecessionModel()
>>> prior = qi.UniformDistribution([0, 1])
>>> updater = qi.SMCUpdater(model, 1000, prior)

Amplitude estimation of Ramsey fringe

Hi,
I am trying to modify the simple_precession_estimation tutorial at
https://github.com/QInfer/qinfer-examples/blob/master/simple_precession_estimation.ipynb
to estimate the amplitude of a Ramsey fringe. The model is set up to estimate two parameters, the amplitude and offset of the signal (signal = a * np.cos(true_omega * ts + phase) + offset), with omega and phase as fixed parameters.

I encountered two problems:
a. I used the are_models_valid method of Model

def are_models_valid(self, modelparams):
    # Require -a + offset > 0 and a + offset < 1 elementwise.
    a = modelparams[:, 0]
    offset = modelparams[:, 1]
    return np.logical_and(-a + offset > 0, a + offset < 1)

to limit the range of the parameters, but it does not seem to have any effect and the algorithm throws errors. I ended up limiting the parameter range by using a post-selected distribution as the prior:

prior = PostselectedDistribution(
    UniformDistribution([
        [0.30, 0.5],
        [0.30, 0.5]
    ]),
    model,
    maxiters=10000
)

I'm not sure if this is the proper way to use the package from the point of view of the package designers.

b. With these modifications I could get the algorithm running. When I run a bootstrap on the model, the inferred amplitudes show a bias towards smaller amplitudes when the "true" amplitude is close to the maximum value of 0.5. Is that the expected behavior? I expected the Bayesian method to remove this kind of bias.

The full code can be found in this notebook https://github.com/Justin318/test_qinfer_share/blob/master/simple_est_amplitude.ipynb

Thanks for your time.

ProductDistribution needs to accept more than two factors.

Currently, ProductDistribution allows for combining exactly two distributions to make an uncorrelated joint distribution. This is inconvenient, however, for priors representing uncorrelated samples of more than two random variables. As a workaround, @ihincks has pointed out that reduce can be used, but this has the undesired side effect of making a nest of Distribution objects that must all be traversed on each call to the outer object's sample() method.
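
For reference, the reduce workaround looks like this (a sketch of the pattern described above; note the nested objects it produces):

from functools import reduce
import qinfer as qi

factors = [qi.UniformDistribution([0, 1]) for _ in range(4)]

# Works, but builds a nested chain of ProductDistribution objects, each of
# which is traversed on every call to the outer object's sample():
prior = reduce(qi.ProductDistribution, factors)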

Feature: return status

I am running SMC (with separate SMCUpdaters) on lots of independent data sets in a loop. Some of them run into problems (n_ess smaller than 10). It would be nice if the updater carried an attribute that flagged this sort of behaviour, so that one doesn't have to manually wade through stdout to see which runs produced warnings.

This could be fancy (new updater status class or something), or just a bool, no_problems=True. I'm okay with either.

Custom loss functions for perf_test and perf_test_multiple

Currently the 'loss' field collected by perf_test and perf_test_multiple is always taken to be the quadratic loss, but in many cases it's worthwhile to consider other loss functions (such as those given by Model.distance or by canonicalization). Perhaps we should add another kwarg to allow customizing how losses are recorded?
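
One possible shape for this (hypothetical; the loss_fn kwarg does not exist, and perf_test's actual signature may differ):

import numpy as np

def quadratic_loss(true_params, est_params):
    # The current default behaviour, written out explicitly.
    return np.sum((true_params - est_params) ** 2, axis=-1)

# Hypothetical usage:
#   perf = perf_test(model, n_particles, prior, n_exp, heuristic_class,
#                    loss_fn=quadratic_loss)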
