ing-bank / popmon

Monitor the stability of a Pandas or Spark dataframe ⚙︎

Home Page: https://popmon.readthedocs.io/

License: MIT License

Languages: Python 88.69%, Jupyter Notebook 8.27%, HTML 1.92%, JavaScript 0.69%, CSS 0.35%, Batchfile 0.06%, Makefile 0.02%
Topics: covariate-shift, data-analysis, data-distributions, data-profiling, data-science, dataset-shifts, drift-detection, hacktoberfest, ing-bank, ipython, jupyter, mlops, monitoring, pandas, population-monitoring, python, spark, statistical-process-control, statistical-tests, statistics

popmon's People

Contributors

actions-user, alexflex47, dependabot-preview[bot], mbaak, pradyot-09, pre-commit-ci[bot], rurlus, ruudkassing, sbrugman, scarozza, tomcis


popmon's Issues

html report doesn't display any graphs

The report from popmon.df_stability_report doesn't render properly in the browser, probably due to an HTML error. The information in the StabilityReport.datastore attribute appears to be correct, but the HTML file produced by StabilityReport.to_file displays as below:

(screenshot of the broken report)

The code used to generate the report is examples/flight_delays.py, unmodified. This happens with Python 3.8 and 3.10, and popmon==1.4.4.

Plotting 2D histograms

popmon is able to generate 2D histograms for feature interactions; however, it currently cannot include the corresponding plots in the report. The relevant code to do so is here.

Can't run the example

I installed popmon in a fresh environment (Python 3.7) and ran the examples, but generating reports fails with the following error:

$ conda create -n pop python=3.7
$ conda activate pop
$ pip install popmon
TypeError: isinstance() arg 2 must be a type or tuple of types

Update example reports in README

The example HTML reports in the README are outdated and should be updated. Preferably, these are automatically updated on each release.

List the available comparisons/statistics in the documentation

From the readme/docs, it's not directly obvious which comparisons and statistics are already implemented in popmon (and which ones one could contribute).


What to do

  • The Readme should contain a shortlist containing a couple of the most well-known from both categories, and a link to the docs.
  • The documentation should contain a page with the complete list. This page should link to how to contribute a new comparison/metric.

Metrics pipeline (erroneously) overwrites reference type to "self"

When calling the pm_stability_metrics popmon attribute on a Pandas dataframe, we noticed that a TypeError is raised for SelfReferenceMetricsPipeline:

TypeError("__init__() got an unexpected keyword argument 'ref_hists_key'")

This is surprising, because we set reference_type="external", so we expect popmon to create an ExternalReferenceMetricsPipeline object.

After some investigation, it turns out that stability_metrics() in popmon/pipeline/metrics.py sets kwargs["ref_hists_key"] = "ref_hists" on lines 54-58, because the reference type is set to "external". Then the metrics pipeline is created with both the settings and the kwargs. However, create_metrics_pipeline() in popmon/pipeline/metrics_pipelines.py uses a reference_type kwarg with default "self". As a result, the pipeline class is selected using the kwarg reference_type instead of settings.reference_type. Therefore, when creating the stability metrics report from a Pandas dataframe, the reference type is always "self", with no way to control it.

I guess this is a leftover kwarg that was not migrated to the new Settings class during popmon's recent settings migration. Could you please fix this issue, as it's preventing us from upgrading to the more recent stable versions (1.4.x)?

add license header to all source files

It's good practice to use a license header in all files (not per project) to ensure that it's clear what is covered by which license.

The ci/cd pipeline can be used to check for these headers.

DataProfiler - A Scalable Data Profiling Library

Howdy!

I'm reaching out as a maintainer of the DataProfiler library.

I think it might be useful to your project so I'm reaching out! Would love to collaborate and see how we can help popmon.

We effectively wrote a library to improve upon the objectives of pandas-profiling with some neat added functionality:

  • Auto-detect & load CSV, AVRO, Parquet, JSON, text, and URL data: data = Data("your_filepath_or_url.csv")
  • Profile data, calculating statistics and doing entity detection (for PII): profile = Profiler(data)
  • Merge profiles, enabling distributed profile generation: profile3 = profile1 + profile2
  • Compare profiles: profile_diff = profile1.diff(profile2)
  • Generate reports: readable_report = profile.report(report_options={"output_format": "compact"})
import json
from dataprofiler import Data, Profiler

data = Data("your_file.csv") # Auto-Detect & Load: CSV, AVRO, Parquet, JSON, Text, URL

print(data.data.head(5)) # Access data directly via a compatible Pandas DataFrame

profile = Profiler(data) # Calculate Statistics, Entity Recognition, etc

readable_report = profile.report(report_options={"output_format": "compact"})

print(json.dumps(readable_report, indent=4))

Error when stitching histograms

Discussed in #142

Originally posted by jeaninejuliettes September 29, 2021
Hello,

I'm receiving an error when using stitch_histograms and I'm not sure what I'm doing wrong; I hope someone can help me. The error I get is: ValueError: Request to insert delta hists but time_bin_idx not set. Please do.

The steps I take:

I start by creating a histogrammar object from the original dataframe:

hists = df.pm_make_histograms()
bin_specs = popmon.get_bin_specs(hists)

later on I receive a new batch of data, which I add to my existing histograms

new_hists = [new_df.pm_make_histograms(bin_specs=bin_specs)]
hists_2 = popmon.stitch_histograms(hists_basis=hists, hists_delta=new_hists, time_axis="batch")

so far so good, but when I try to repeat these steps with yet another new batch of data, I receive the error

new_hists_2 = [new_df_2.pm_make_histograms(bin_specs=bin_specs)]
hists_3 = popmon.stitch_histograms(hists_basis=hists_2, hists_delta=new_hists_2, time_axis="batch")

Is it not possible to stitch histograms more than once? If it is, I've found a somewhat cumbersome way to decide on a good value for time_bin_idx. It works so far, but I expect it to fail (or not work as expected) with other data. The way I define the time_bin_idx value is:
int(np.ceil(max(hists_2[next(iter(hists_2))].bin_centers()) + 1))

Hopefully you can point me in the right direction. Thanks!

Text occluded in categorical histograms

When the values in a categorical column are long, they are cut off in the plot, making it difficult to interpret. The plots should be fixed to include the text values (up to a certain length).

Improve user-friendliness for incremental datasets

Incremental datasets can rely on a batch ID and stitching of histograms. Compared to working with time slices, the incremental workflow is much less user-friendly:

  • The labeling of histograms is awkward (the bin center is 1.5, 2.5 etc.)
  • The documentation for entry-level usage (i.e. a single column with a "batch_id" column) is lacking.
  • The pm_stability_report docstrings/arguments are tailored towards time-slices.

The user experience for this way of using popmon should be improved.

Integration with Grafana

Access to the datastore means that it's possible to integrate popmon into almost any workflow. For example, one could store the histogram data in a PostgreSQL database, load it in Grafana, and benefit from Grafana's visualisation and alert-handling features (e.g. sending an email or Slack message upon alert).
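As a sketch of such a workflow (using sqlite as a stand-in for PostgreSQL; the table and column names below are illustrative, not popmon's exact datastore schema):

```python
import sqlite3

import pandas as pd

# Hypothetical per-feature profile metrics, shaped like the DataFrames
# found in the report's datastore (illustrative columns only)
profiles = pd.DataFrame(
    {
        "time_bin": ["2023-05-01", "2023-05-08"],
        "mean": [0.12, 0.41],
        "std": [1.02, 1.10],
    }
)

# Store the metrics in a SQL database that Grafana can query;
# swap the sqlite connection for PostgreSQL in a real setup
con = sqlite3.connect(":memory:")
profiles.to_sql("feature_profiles", con, if_exists="replace", index=False)

# Grafana would then run queries like this to drive panels and alerts
rows = con.execute("SELECT time_bin, mean FROM feature_profiles").fetchall()
print(rows)
con.close()
```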

Rolling reference comparisons

A wide variety of references is provided by popmon out-of-the-box. A reference may be static (a fixed training set, or the current dataset itself for exploratory data analysis) or dynamic (sliding or growing as more data becomes available). The reference is compared against batches, and they can be sequential (batched) or sliding (rolling).

Popmon should enable all combinations, and currently lacks external reference + rolling comparison.

Reference           | Compare to | Implemented
------------------- | ---------- | ---------------
Self-reference      | Static     | Self (batched)
External reference  | Static     | Batched
Rolling reference   | Rolling    | Rolling/sliding
Expanding reference | Expanding  | Rolling/sliding
External reference  | Static     | Rolling/sliding

Thanks to @LorenaPoenaru!

Alternative profile plots for categoricals

There are categorical profile statistics, e.g. "most probable value", included in the report. The current histogram visualisation is unintuitive due to the unordered nature of the data. Popmon should have the option to display these more adequately.


Display histograms over time for EDA

When using popmon for exploratory data analysis of datasets with a time component it would be useful to inspect how variables change over time directly in the histograms.

The simplest example: for categorical variables with low cardinality we could show a table with counts.
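That simplest example can be sketched with plain pandas (the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical dataset: a time axis plus a low-cardinality categorical
df = pd.DataFrame(
    {
        "dt": pd.to_datetime(["2022-01-03", "2022-01-04", "2022-01-10", "2022-01-11"]),
        "color": ["red", "blue", "red", "red"],
    }
)

# Category counts per weekly time slice: a "histogram over time" as a table
counts = pd.crosstab(df["dt"].dt.to_period("W"), df["color"])
print(counts)
```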

Related to #122

Histogram juxtaposition

Right now the report shows only two numerical histograms per feature. This was for historical reasons: generating more was too slow. That should no longer be a bottleneck, and we have since raised the default to a much higher number.

Proposed solution

Show two numerical histograms next to each other, each with a drop-down menu to select the time slice, so that one can compare different time slices with each other.

Reject unsupported column types

Running popmon on a DataFrame with columns containing mutable sequences, tuples or sets generates cryptic errors. popmon should instead return a clear error message.
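A minimal pre-check along these lines could run before histogramming (a hypothetical helper, not part of popmon's API):

```python
import pandas as pd

def check_column_types(df: pd.DataFrame) -> None:
    """Raise a clear error for columns holding sequences or sets,
    which cannot be histogrammed (hypothetical helper)."""
    bad = [
        col
        for col in df.columns
        if df[col].map(lambda v: isinstance(v, (list, set, dict, tuple))).any()
    ]
    if bad:
        raise TypeError(f"Unsupported value types in columns: {bad}")

df = pd.DataFrame({"ok": [1, 2], "nested": [[1], [2]]})
try:
    check_column_types(df)
except TypeError as exc:
    print(exc)  # names the offending column instead of a cryptic failure
```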

Apply Ruff to popmon

Run ruff and resolve any issues found

The popmon ruff configuration works with nbqa for notebooks, and has selected rules for tests, sphinx config and examples.

Some checks are disabled, but would benefit from manual refactoring.

There is also a set of rules, for violations that occur more often, that can be refactored mechanically. Here autofix would be a great improvement over manual refactoring:

 # Prefer autofix
"PD011", # [1] .to_numpy() instead of values
"PD003", # [1]`.isna` is preferred to `.isnull`; functionality is equivalent
"PT018", # [2] Assertion should be broken down into multiple parts
"RET504", # [3] Unnecessary variable assignment before `return` statement
"RET506", # Unnecessary `else` after `raise` statement
"PTH123", # [4] `open("foo")` should be replaced by `Path("foo").open()`
"PTH120", # [4] similar to above
"RET505", # Unnecessary `else` after `return` statement
"SIM102", # (when comments are in the statement) Use a single `if` statement instead of nested `if` statements
"SIM114", # (when comments are in the statement) Combine `if` branches using logical `or` operator
"G004", # Logging statement uses f-string

A couple of them are already in progress. Related autofix PRs:

cc @charliermarsh in case you are interested. Thanks a lot for all your work on ruff :) I'm considering adding the remaining autofixes too, if that is appreciated.

Example screenshots for easier value demonstration to people

It might be a good idea to have small screenshots from at least the Profile, Histograms and Traffic Lights reports.

It turns out that popmon covers a lot of surface area (data profiling, alerting, anomaly detection, incremental data stability/comparison) and supports both pandas and Spark. Unfortunately, the README as it currently stands looks targeted mainly at data scientists, while the tool is also useful for tracking data coming from backend systems or ingested from external sources.

Interactive plots

The generated reports currently contain static images, which is problematic because (1) the base64-encoded images make the reports large (multiple MBs) and (2) there is no interactivity, so for instance it's not possible to read the maximum value from a diagram. Both can be tackled by adopting another plotting library (such as Plotly or Altair) that renders the plots client-side. The challenge lies in serving the report and data together efficiently.

Incorrect license headers

#29 addressed license headers; unfortunately, those are not the correct license headers.

The proper MIT license header is:

MIT License

Copyright (c) [year] [fullname]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Missing reference on README

It would be nice to credit the people who have worked on this project, considering that the whole project was committed using your accounts. A significant portion of this project was written by me, and I believe some other ex-colleagues contributed as well.

code coverage of 100%

The risk of breaking functionality when introducing new features could be reduced by ensuring that each line of code is covered by the tests and that this is enforced at test time. Other repos, such as this one, use this approach as well.

For that, we can add pytest-cov to the test dependencies and increase the test coverage until the check passes (see this answer).
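A possible configuration, assuming pytest settings live in pyproject.toml (the flags are the standard pytest-cov options; the 100% threshold matches the goal above):

```toml
[tool.pytest.ini_options]
addopts = "--cov=popmon --cov-report=term-missing --cov-fail-under=100"
```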

Refactor: remove code duplication

Parts of the API in the code contain substantial amounts of duplicated code. Examples are arguments and docstrings that are defined at multiple levels or in nested functions. An example is the regenerate function.

Pylint can be used to detect these issues in part:

-   repo: https://github.com/pycqa/pylint
    rev: pylint-2.6.0
    hooks:
    -   id: pylint

Refactoring the code improves readability and maintainability.

Arguments can be passed via **kwargs, perhaps we would like to use docrep to propagate the docstrings.
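A lightweight stdlib alternative to docrep is to forward arguments via **kwargs and copy the shared docstring explicitly. A sketch (the function names echo the ones above, but the bodies are placeholders):

```python
def stability_report(df, features=None, time_axis=None, **kwargs):
    """Generate a stability report.

    :param features: columns to include (default: all)
    :param time_axis: column used for time binning
    """
    # ... build the report here ...

def regenerate(report, **kwargs):
    # Forward all arguments; reuse the canonical parameter docs
    # instead of duplicating them
    return stability_report(report, **kwargs)

regenerate.__doc__ = stability_report.__doc__
print(regenerate.__doc__.splitlines()[0])
```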

Pydantic is not pinned in pyproject.toml or requirements.txt

Pydantic has recently been updated to v2.0 (2023-06-30), but it is not pinned in pyproject.toml or requirements.txt. Could you please either update it or pin the version?

File "..../lib/python3.10/site-packages/popmon/config.py", line 24, in <module>
    from pydantic import BaseModel, BaseSettings
  File "..../lib/python3.10/site-packages/pydantic/__init__.py", line 206, in __getattr__
    return _getattr_migration(attr_name)
  File "..../lib/python3.10/site-packages/pydantic/_migration.py", line 279, in wrapper
    raise PydanticImportError(
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0/migration/#basesettings-has-moved-to-pydantic-settings for more details.
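Until popmon migrates to pydantic v2 (where BaseSettings moved to the separate pydantic-settings package), a pin like the following in requirements.txt would avoid the import error (the exact lower bound is illustrative):

```
pydantic>=1.8,<2
```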

Histogram error on large floats

Running pm_stability_report on float columns with large values triggers (in some cases) an AssertionError.

For example, running the following code:

import pandas as pd
import numpy as np
import popmon

np.random.seed(1)
n = 1000
start_date = pd.to_datetime("2022-01-01")
example = pd.DataFrame({
    "dt": [start_date + pd.DateOffset(i//100) for i in range(n)], 
    "a": (np.random.rand(n) - 0.5) * 10**4
})
example.loc[len(example)//2, 'a'] *= 10**4
example.pm_stability_report(time_axis="dt", time_width="1w")

gives the following output:

% python popmon_bug.py
.../.virtualenvs/random/lib/python3.7/site-packages/histogrammar/dfinterface/make_histograms.py:172: UserWarning: time-axis "dt" already found in binning specifications. not overwriting.
  f'time-axis "{time_axis}" already found in binning specifications. not overwriting.'
2022-08-12 14:14:19,649 INFO [histogram_filler_base]: Filling 1 specified histograms. auto-binning.
100%|████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 463.15it/s]
2022-08-12 14:14:19,652 INFO [hist_splitter]: Splitting histograms "hists" as "split_hists"
2022-08-12 14:14:19,654 INFO [hist_comparer]: Comparing "split_hists" with rolling sum of 1 previous histogram(s).
2022-08-12 14:14:19,666 INFO [hist_profiler]: Profiling histograms "split_hists" as "profiles"
2022-08-12 14:14:19,692 INFO [hist_comparer]: Comparing "split_hists" with reference "split_hists"
2022-08-12 14:14:19,702 INFO [pull_calculator]: Comparing "comparisons" with median/mad of reference "comparisons"
2022-08-12 14:14:19,713 INFO [pull_calculator]: Comparing "profiles" with median/mad of reference "profiles"
2022-08-12 14:14:19,749 INFO [apply_func]: Computing significance of (rolling) trend in means of features
2022-08-12 14:14:19,752 INFO [compute_tl_bounds]: Calculating static bounds for "profiles"
2022-08-12 14:14:19,795 INFO [compute_tl_bounds]: Calculating static bounds for "comparisons"
2022-08-12 14:14:19,806 INFO [compute_tl_bounds]: Calculating traffic light alerts for "profiles"
2022-08-12 14:14:19,819 INFO [compute_tl_bounds]: Calculating traffic light alerts for "comparisons"
2022-08-12 14:14:19,825 INFO [apply_func]: Generating traffic light alerts summary.
2022-08-12 14:14:19,828 INFO [alerts_summary]: Combining alerts into artificial variable "_AGGREGATE_"
2022-08-12 14:14:19,831 INFO [report_pipelines]: Generating report "html_report".
2022-08-12 14:14:19,831 INFO [overview_section]: Generating section "Overview". skip empty plots: True
100%|████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 276.10it/s]
2022-08-12 14:14:19,842 INFO [histogram_section]: Generating section "Histograms".
  0%|                                                                         | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "popmon_bug.py", line 13, in <module>
    example.pm_stability_report(time_axis="dt", time_width="1w")
  File ".../python3.7/site-packages/popmon/pipeline/report.py", line 196, in df_stability_report
    reference=reference_hists,
  File ".../python3.7/site-packages/popmon/pipeline/report.py", line 71, in stability_report
    result = pipeline.transform(datastore)
  File ".../python3.7/site-packages/popmon/base/pipeline.py", line 69, in transform
    datastore = module.transform(datastore)
  File ".../python3.7/site-packages/popmon/pipeline/report_pipelines.py", line 250, in transform
    return super().transform(datastore)
  File ".../python3.7/site-packages/popmon/base/pipeline.py", line 69, in transform
    datastore = module.transform(datastore)
  File ".../python3.7/site-packages/popmon/base/module.py", line 50, in _transform
    outputs = func(self, *list(inputs.values()))
  File ".../python3.7/site-packages/popmon/visualization/histogram_section.py", line 141, in transform
    plots = parallel(_plot_histograms, args)
  File ".../python3.7/site-packages/popmon/utils.py", line 52, in parallel
    func(*args) if mode == "args" else func(**args) for args in args_list
  File ".../python3.7/site-packages/popmon/utils.py", line 52, in <listcomp>
    func(*args) if mode == "args" else func(**args) for args in args_list
  File ".../python3.7/site-packages/popmon/visualization/histogram_section.py", line 247, in _plot_histograms
    hists, feature, hist_names, y_label, is_num, is_ts
  File ".../python3.7/site-packages/popmon/visualization/utils.py", line 297, in plot_histogram_overlay
    len(bin_edges), len(bin_values), x_label
AssertionError: bin edges (+ upper edge) and bin values have inconsistent lengths: 43 vs 41. a

Dataset and analysis summary in report overview

The reports contain an Overview section that could provide general information on the dataset and analysis, such as the number of time bins, the binning configuration, the number of features, the date generated, etc. This information is currently missing from the report. Showing it in a table would be a welcome addition to the reports.

[Question] Hourly data pipelines

Hi

I have more of a question about using the library, as all the examples build histograms that are wider than the defined time_width.

My setup is a complex project with many factors that can influence the metrics I'm interested in keeping an eye on.
I have a data pipeline that processes the data on an hourly basis, which means the data I have access to always covers one hour. I was thinking of building separate historical histograms for each hour, as I want to compare apples to apples and eliminate additional noise: I have a lot of seasonality within a day, week, month, etc.

An example project could be users on a website and keeping track of their page views and generated revenue, and I want to early detect major shifts in page views.

In this use case, as I understand it, reference_type should be "external" and time_width would be 1h, and the reports/metrics would always cover just one hour. But then how would stitch_histograms work? The replace functionality would not work, right? And if I wanted to control the size of the stitched histograms, I would need to cap it in a different way 🤔
Does this hourly setup make sense in popmon?

Best

Enable custom comparisons

Users who would like to extend the default comparisons currently need to modify the code in multiple places. We would like to simplify this process. As a test case, we could try to incorporate the G-test or PSI (Population Stability Index). Ideally, the user would be able to pass a (sequence of) functions to the default pipeline.
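As an illustration of the kind of function a user might want to plug in, here is a standalone PSI over two binned distributions (a sketch; popmon's eventual registration API may differ):

```python
import numpy as np

def psi(ref_counts, cur_counts, eps=1e-6):
    """Population Stability Index between two binned distributions.

    ref_counts/cur_counts are bin counts from a reference and a current
    histogram over the same bin edges; eps guards against empty bins.
    """
    p = np.asarray(ref_counts, dtype=float)
    q = np.asarray(cur_counts, dtype=float)
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum((p - q) * np.log(p / q)))

print(psi([100, 200, 300], [300, 200, 100]))  # nonzero for shifted distributions
```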

Change verbosity level

popmon by default provides logging and progress bar (tqdm) output while computing the report. There currently is no user-friendly way to change the verbosity level altogether.
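In the meantime, a partial workaround via the standard logging module (this assumes popmon's loggers follow the usual logging.getLogger(__name__) convention under the "popmon" namespace; it does not silence the tqdm progress bars):

```python
import logging

# Raise the threshold so popmon's INFO messages are suppressed
logging.getLogger("popmon").setLevel(logging.WARNING)
```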

Include diptest to profiles

The diptest (a test for unimodality) could be included as a profile (https://pypi.org/project/diptest/).

With the registry API, this can be achieved by filling in the following template:

@Profile.register(key=["diptest_val", "p_value"], description=["test for unimodality", "p-value for diptest"])
def diptest(histogram):
    # your magic here
    return coeff, pval

Since this requires an extra package, it could be made optional and picked up when the package is installed in the environment.
In setup.py it could be registered under 'extras', allowing you to do pip install popmon[diptest].

Error: cannot import name 'Report' from 'popmon.config'

Code:

import popmon
from popmon import resources
from popmon.config import Report

Got error:
ImportError Traceback (most recent call last)
/tmp/ipykernel_707/1841834346.py in
3 import popmon
4 from popmon import resources
----> 5 from popmon.config import Report, Setting

ImportError: cannot import name 'Report' from 'popmon.config' (/home/user/.local/lib/python3.7/site-packages/popmon/config.py)

Group comparisons per reference in report

The comparisons section in the report lists all (active) comparisons. While reading a report, it would be more meaningful to focus on one reference at a time. For instance, comparisons against the previous time slot have a different meaning than comparisons against a fixed reference.

Proposed solution

Group comparisons per reference using subsections or tabs in the report

missing tutorial datasets

Hi,
awesome tool!

The advanced tutorial datasets are not in the test_data dir, but still in the notebooks dir, as far as I can see. Hence the advanced tutorial notebooks don't run out of the box, at least for me.
I don't have permission to push to a develop branch.

Changes to be committed:
(use "git reset HEAD ..." to unstage)

renamed:    popmon/notebooks/flight_delays.csv.gz -> popmon/test_data/flight_delays.csv.gz
renamed:    popmon/notebooks/flight_delays_reference.csv.gz -> popmon/test_data/flight_delays_reference.csv.gz

Cheers - Alex

CDN option for Js imports

Give the user the option to load all JS imports from CDN-hosted servers, rather than having the JS included in the HTML report.

Error when I try to stitch histograms

I try to extract two histograms from two different datasets and then stitch them together.
When I do, I get this ValueError: Input histograms are not all similar

features = ["datetime:prog_revenue"]
may10_hists = may10_df.pm_make_histograms(
    features=features, time_axis="datetime", time_width="1h", time_offset="2023-05-10"
)
may9_hists = may9_df.pm_make_histograms(
    features=features, time_axis="datetime", time_width="1h", time_offset="2023-05-09"
)

hist_add = popmon.stitch_histograms(
    hists_basis=may9_hists, hists_delta=may10_hists, mode="add"
)

Before I encounter this error, I also get this warning at the stitching step.

Input SparselyBin histograms have inconsistent origin attributes: [1.6835904e+18, 1.6836768e+18]

Can someone help me resolve this issue? I'm unsure how to go about fixing it.

overview of alerts per feature

An overview table should be added to the reports, so that users can quickly discover which features have the highest number of alerts.


PyPI sdist failure

The file requirements.txt isn't included in the PyPI sdist, resulting in:

Traceback (most recent call last):
  File "setup.py", line 12, in <module>
    with open("requirements.txt") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
