
lolo's Introduction

Lolo


Lolo is a random forest-centered machine learning library in Scala.

The core of Lolo is bagging simple base learners, like decision trees, to produce models that can generate robust uncertainty estimates.

Lolo supports:

  • continuous and categorical features
  • regression, classification, and multi-task trees
  • bagged learners to produce ensemble models, e.g. random forests
  • linear and ridge regression
  • regression leaf models, e.g. ridge regression trained on the leaf data
  • random rotation ensembles
  • recalibrated bootstrap prediction interval estimates
  • bias-corrected jackknife-after-bootstrap and infinitesimal jackknife confidence interval estimates
  • bias models trained on out-of-bag residuals
  • feature importances computed via variance reduction or Shapley values (which are additive and per-prediction)
  • model-based feature importance
  • distance correlation
  • hyperparameter optimization via grid or random search
  • parallel training via Scala parallel collections
  • validation metrics for accuracy and uncertainty quantification
  • visualization of predicted-vs-actual validations
  • deterministic training via random seeds

Usage

Lolo is available on Maven Central and can be used by adding the following dependency block to your pom file:

<dependency>
    <groupId>io.citrine</groupId>
    <artifactId>lolo</artifactId>
    <version>6.0.0</version>
</dependency>
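
If you build with sbt rather than Maven, a roughly equivalent dependency line would be the following (a sketch assuming the plain artifactId shown above; use %% instead of % if a Scala-version-suffixed artifact is published for your Scala version):

// build.sbt
libraryDependencies += "io.citrine" % "lolo" % "6.0.0"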

Lolo provides higher-level wrappers for common learner combinations. For example, you can train and apply a random forest regressor with:

import io.citrine.lolo.learners.RandomForestRegressor

// features (a sequence of input vectors) and labels (a sequence of doubles) are assumed to be defined
val trainingData: Seq[TrainingRow[Double]] = TrainingRow.build(features.zip(labels))
val model = RandomForestRegressor().train(trainingData).model
val predictions: Seq[Double] = model.transform(testInputs).expected

Performance

Lolo prioritizes functionality over performance, but it is still quite fast. In its random forest use case, the complexity scales as:

Time complexity   Training rows   Features   Trees
train             O(n log n)      O(n)       O(n)
loss              O(n log n)      O(n)       O(n)
expected          O(log n)        O(1)       O(n)
uncertainty       O(n)            O(1)       O(n)

On an Ivy Bridge test platform, the (1024 row, 1024 tree, 8 feature) performance test took 1.4 sec to train and 2.3 ms per prediction with uncertainty.

Contributing

We welcome bug reports, feature requests, and pull requests. Pull requests should follow the feature branch workflow: branch off of main and open PRs into main.

Production releases are triggered by tags; the sbt-ci-release plugin uses the tag as the lolo version. lolopy versions, on the other hand, are still read from setup.py, so a version bump in setup.py is needed for each lolopy release. Failing to bump the lolopy version number results in a skipped lolopy release rather than a build failure.

Code Formatting

  • Consistent formatting is enforced by scalafmt.
  • To check whether scalafmt is satisfied, run sbt scalafmtCheckAll from the command line. This reports any files that need to be reformatted; pull requests are gated on this check passing.
  • To check formatting automatically before pushing to an upstream repository, you can use a git hook. Install the pre-commit framework by following the instructions here, then enable the hooks in .pre-commit-config.yaml by running pre-commit install --hook-type pre-push from the root directory. This runs scalafmtCheckAll before pushing to a remote repo.
  • To format code, run sbt scalafmtAll from the command line or configure your IDE to format files on save.

Authors

See Contributors

Related projects

  • randomForestCI is an R-based implementation of jackknife variance estimates by S. Wager

lolo's People

Contributors

gregor-robinson, indyaah, jamie-heller, latture, maxhutch, mrupp-citrine, mvenetos97, pacdaemon, rpiotrow, sfriedowitz, sparadiso, teekennedy, wardlt


lolo's Issues

How to save the fitted RandomForestRegressor model?

How can I save a lolopy model?

I tried to train a model like this:

from lolopy.learners import RandomForestRegressor
model = RandomForestRegressor()
model.fit(X, Y)

After that, I attempted to save the model using:

joblib.dump(model, "./model.pkl")

But it didn't work, failing with the following error:

AttributeError: 'RandomForestRegressor' object has no attribute 'gateway'

Thank you for sharing your great program.

Reduce the minimum required sample size

Hello,

I noticed that the minimum sample size is limited to at least 8. This can be easily worked around with the np.tile command, repeating the original data until the minimum sample size is reached. I think it would be nice to have this built into the lolopy RF function.

(I'm fairly new to GitHub and I'm sorry if any information is missing here or if this is not how GitHub works. Please feel free to contact me if you need more information.)

Example work-around:

import numpy as np
from lolopy.learners import RandomForestRegressor

# x: training data (2D array), y: training labels (1D array)
dtr = RandomForestRegressor()
if y.shape[0] < 8:
    # repeat the original data until the minimum sample size of 8 is reached
    x = np.tile(x, (8, 1))
    y = np.tile(y, 8)
dtr.fit(x, y)

Fail on invalid subsetStrategy

Currently io.citrine.lolo.learners.RandomForest (and io.citrine.lolo.learners.ExtraRandomTrees, which emulates the RF interface) defaults to automatic subset strategy selection when the parameter subsetStrategy is an invalid string. This is an opportunity for an unobservable error. I propose hardening this interface with a small modification: throw an exception when the parameter doesn't match. An informative exception should also be thrown if the type doesn't match.

Encapsulating the available options in a SubsetStrategy class would also present a more foolproof interface, and would address the type laxity that @mrupp-citrine pointed out, but I don't think that's worth the trouble.
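
For illustration, a rough sketch of what that encapsulation could look like (the option names here are placeholders, not necessarily lolo's actual set):

sealed trait SubsetStrategy
object SubsetStrategy {
  case object Auto extends SubsetStrategy
  case object Sqrt extends SubsetStrategy
  case object Log2 extends SubsetStrategy

  // Fail loudly on anything that doesn't match, instead of silently falling back to "auto".
  def fromString(s: String): SubsetStrategy = s.toLowerCase match {
    case "auto" => Auto
    case "sqrt" => Sqrt
    case "log2" => Log2
    case other  => throw new IllegalArgumentException(s"Unknown subsetStrategy: $other")
  }
}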

Bug: `empty head` exception during training

When training many models via lolopy this error sometimes occurs (see below for a full stack trace). It only occurs for small training set sizes (n=10). It occurs non-deterministically.

This error might be related to the Poisson sampling in file Bagger.scala, lines 59-69.

Stack trace:

  File "/Users/mrupp/Data/Citrine/2019-05-01_PredictiveUncertaintyBenchmark/sse-uncertainty-benchmark/smlb/smlb/results.py", line 378, in run
    result = methodf(inputs[traininds], labels[traininds], inputs[validinds])
  File "<string>", line 78, in train_and_predict
  File "/Users/mrupp/Citrine/GitHub/lolo/python/lolopy/learners.py", line 79, in fit
    result = learner.train(train_data, self.gateway.jvm.scala.Some(weights_java))
  File "/Users/mrupp/Local/anaconda3/lib/python3.7/site-packages/py4j/java_gateway.py", line 1286, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/Users/mrupp/Local/anaconda3/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o52491.train.
: java.lang.UnsupportedOperationException: empty.head
	at scala.collection.immutable.Vector.head(Vector.scala:185)
	at io.citrine.lolo.trees.splits.RegressionSplitter$.getBestSplit(RegressionSplitter.scala:44)
	at io.citrine.lolo.trees.regression.RegressionTreeLearner.train(RegressionTree.scala:70)
	at io.citrine.lolo.trees.regression.RegressionTreeLearner.train(RegressionTree.scala:20)
	at io.citrine.lolo.bags.Bagger.$anonfun$train$10(Bagger.scala:79)
	at io.citrine.lolo.bags.Bagger.$anonfun$train$10$adapted(Bagger.scala:74)
	at scala.collection.parallel.immutable.ParRange$ParRangeIterator.map2combiner(ParRange.scala:104)
	at scala.collection.parallel.ParIterableLike$Map.leaf(ParIterableLike.scala:1052)
	at scala.collection.parallel.Task.$anonfun$tryLeaf$1(Tasks.scala:49)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
	at scala.util.control.Breaks$$anon$1.catchBreak(Breaks.scala:63)
	at scala.collection.parallel.Task.tryLeaf(Tasks.scala:52)
	at scala.collection.parallel.Task.tryLeaf$(Tasks.scala:46)
	at scala.collection.parallel.ParIterableLike$Map.tryLeaf(ParIterableLike.scala:1049)
	at scala.collection.parallel.FutureTasks.$anonfun$exec$5(Tasks.scala:499)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
	at scala.util.Success.$anonfun$map$1(Try.scala:251)
	at scala.util.Success.map(Try.scala:209)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:140)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

Unbias standard deviation estimator

getStdDevMean currently uses a biased estimator: the square root of the sample variance. This should be unbiased by replacing the denominator with treePredictions.length - 1, treePredictions.length - 1.5, or a similar superlinear bias correction. This should be done with care to avoid introducing a bias into the jackknife code, which couples to the same treeVariance (it is probably best to just rescale in getStdDevMean, which already takes a sqrt).
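
A minimal sketch of the proposed rescaling, assuming the function receives the per-tree predictions (names are illustrative, not lolo's internals):

// Apply the Bessel correction: rescale the biased sample variance by n / (n - 1)
// before taking the square root (assumes at least two tree predictions).
def unbiasedSampleStdDev(treePredictions: Seq[Double]): Double = {
  val n = treePredictions.length
  val mean = treePredictions.sum / n
  val biasedVariance = treePredictions.map(p => math.pow(p - mean, 2)).sum / n
  math.sqrt(biasedVariance * n / (n - 1))
}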

Let users pass an RNG object to nondeterministic functions

For the sake of reproducible results, including deterministic unit and integration testing, Bagger and GuessTheMeanLearner should have access to a user-specified RNG object (at least at training time). It would be possible to expose an interface that allows users to specify an RNG seed, but that is strictly less powerful and prevents the use of desirable alternative RNGs, such as counter-based generators. Any other nondeterministic functionality should similarly accept an RNG object, so please note any other nondeterminism to keep in mind (cross validation, for example).
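
A hypothetical sketch of the shape such an interface could take (these are not lolo's actual types):

import scala.util.Random

trait TrainedModel { def predict(input: Vector[Any]): Double }

trait RandomizedLearner {
  // The caller owns the RNG, so training is reproducible and alternative
  // RNG implementations (anything that subclasses Random) can be swapped in.
  def train(trainingData: Seq[(Vector[Any], Double)], rng: Random): TrainedModel
}

// Two trainings with identically seeded RNGs should then produce identical models:
// learner.train(data, new Random(42L)) and learner.train(data, new Random(42L))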

Better Error Messages for Lolopy

The error message for lolopy when java isn't installed is: ValueError: invalid literal for int() with base 10: b''

We should provide a better error message for this case, and probably print a warning if you're running on Windows, where the confidence intervals are not likely to work.

Option to standardize training data

Many learners perform better after standardization. Options include:

  • Rescale continuous variables to fall on [-1, 1] with zero mean
  • Rescale continuous variables to have zero mean and unit variance (see the sketch after this list)
  • Collect infrequent categories to ensure high categorical populations
  • Add an "Other" category to remove low-population categoricals
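
A minimal sketch of the second option in plain Scala (operating on one column at a time; the surrounding data layout is left out):

// Rescale one continuous column to zero mean and unit variance.
def standardize(values: Seq[Double]): Seq[Double] = {
  val n = values.length
  val mean = values.sum / n
  val stdDev = math.sqrt(values.map(v => math.pow(v - mean, 2)).sum / n)
  if (stdDev == 0.0) values.map(_ => 0.0) // constant column: everything maps to zero
  else values.map(v => (v - mean) / stdDev)
}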

Turn off Parallelism on Demand

There are cases where I want to train a bagged model in serial. A constructor argument for the bagger class that turns off parallelism would be nice.

Use BaggerHelper in Bagger

BaggerHelper, introduced in #210, aims to subsume functionality common to Bagger and MultiTaskBagger so that the shared functionality can be maintained in one place. It is currently used only by MultiTaskBagger, so a subsequent PR should move the corresponding functionality over from Bagger as well.

Pypi deployment broken

The most recent PyPI deployment deterministically fails with:

Uploading distributions to https://upload.pypi.org/legacy/
Uploading lolopy-1.0.5-py2.py3-none-any.whl
100%|██████████| 49.1M/49.1M [00:02<00:00, 19.7MB/s]
NOTE: Try --verbose to see response content.
HTTPError: 400 Client Error: The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information. for url: https://upload.pypi.org/legacy/
PyPI upload failed.

which is taken from this Travis build.

Categorical input support for lolopy

I might be mistaken, but lolopy does not seem to support categorical inputs. Input of categorical features fails in utils.py with an attempted cast of X to np.float64. @WardLT

If there's a set way of providing categoricals to lolopy, it'd be useful to document or provide an example.

Add `minDistinctLabels` to decision tree to prevent UQ collapse in Bagger

If the training labels have repeats of label values, then it is increasingly possible that every tree in the ensemble makes the same prediction (even if the input values are different). This could be prevented by imposing a minimum number of distinct label values in the leaves of the decision trees. That would significantly increase the likelihood that different trees have different sets of label values in the leaf that a prediction lands in, and therefore make different predictions, which in turn yields nonzero predictive uncertainty.

cc: @bfolie

Allow max_depth = 0

It should be possible to have no splits in a tree (to get just a bagged guess-the-mean or bagged linear regression model).

Bug: lolopy merit function broken

Calling _call_lolo_merit() with certain metrics throws a TypeError. Reprex below:

from lolopy.metrics import _call_lolo_merit

y_true = [1]
y_pred = [1]
y_std  = [0.1]

# {Medcouple, RMSE, MSE, MAE, StandardRMSE, Skew, Median} 
# all throw a TypeError with the following call:
_call_lolo_merit("RMSE", y_true, y_pred, y_std = y_std)

# {StandardConfidence and UncertaintyCorrelation} 
# do not throw this error

On my machine, this reports:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-ec224d5a4afe> in <module>
----> 1 _call_lolo_merit("RMSE", y_true, y_pred, y_std = y_std) # Throws an error: TypeError: 'JavaPackage' object is not callable

~/anaconda3/lib/python3.7/site-packages/lolopy/metrics.py in _call_lolo_merit(metric_name, y_true, y_pred, y_std, *args)
     41 
     42     # Run the prediction result through the metric
---> 43     return metric.evaluate(pred_result, y_true_java)
     44 
     45 

TypeError: 'JavaPackage' object is not callable

Training a RandomForestRegressor with boolean feature(s) results in model with no signal

On training a lolopy RandomForestRegressor using features that include a feature of type numpy.bool_, the resulting model has no signal. Removing the boolean feature, or converting it to numpy.int_, yields a model that behaves as expected.

In this example, I'm using matminer to featurize a set of inorganic formulae and training the RF regressor with default parameter values. The dataset I used is attached and the code is below. The output from running the code is:

model with boolean feature r2: -0.69
model without boolean feature r2: 0.74
model with boolean feature as int r2: 0.77

Code:

import numpy as np
import pandas as pd
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold
from matminer.featurizers.base import MultipleFeaturizer
from matminer.featurizers.conversions import StrToComposition
from matminer.featurizers.composition import ElementProperty
from matminer.featurizers.composition import Stoichiometry
from matminer.featurizers.composition import ValenceOrbital
from matminer.featurizers.composition import IonProperty
from lolopy.learners import RandomForestRegressor as LoloRandomForestRegressor


""" My tests used the following versions of the software
numpy version: 1.25.0
pandas version: 2.0.3
matplotlib version: 3.6.2
sklearn version: 1.3.0
pymatgen version: 2023.2.22
matminer version: 0.8.0
lolopy version: 3.0.0
"""


def featurize_data(csv_file: str = "data.csv", col_id: str = "composition"):
    df = pd.read_csv(csv_file)
    # convert formulae into pymatgen composition objects for matminer
    df = StrToComposition(target_col_id="pmg_composition").featurize_dataframe(
        df, col_id=col_id
    )
    # featurize compositions
    featurizer = MultipleFeaturizer(
        [
            Stoichiometry(),
            ElementProperty.from_preset("magpie"),
            ValenceOrbital(props=["avg"]),
            IonProperty(fast=True),
        ]
    )
    return featurizer.featurize_dataframe(df, col_id="pmg_composition")


def evaluate_model(model: LoloRandomForestRegressor, X: np.ndarray, y: np.ndarray):
    y_true = []
    y_pred = []
    y_unct = []
    for train, test in KFold(n_splits=5, shuffle=True).split(X):
        model.fit(X[train], y[train])
        y_pred_, y_unct_ = model.predict(X[test], return_std=True)
        y_true.extend(y[test])
        y_pred.extend(y_pred_)
        y_unct.extend(y_unct_)
    return y_true, y_pred, y_unct


def evaluate_model_with_boolean_feature(features_df: pd.DataFrame):
    # extract training examples from featurized dataframe (retain boolean feature as is)
    feature_columns = set(features_df.columns) - set(
        ["composition", "melting_temperature", "pmg_composition"]
    )
    X = np.array(features_df[list(feature_columns)].values)
    y = np.array(features_df["melting_temperature"].values)

    lolo_rfr = LoloRandomForestRegressor()
    y_true, y_pred, _ = evaluate_model(lolo_rfr, X, y)
    print(f"model with boolean feature r2: {r2_score(y_true, y_pred):0.2f}")


def evaluate_model_without_boolean_feature(features_df: pd.DataFrame):
    # extract training examples from featurized dataframe (remove boolean feature)
    assert isinstance(features_df["compound possible"][0], np.bool_)

    feature_columns = set(features_df.columns) - set(
        ["composition", "melting_temperature", "pmg_composition", "compound possible"]
    )
    X = np.array(features_df[list(feature_columns)].values)
    y = np.array(features_df["melting_temperature"].values)

    lolo_rfr = LoloRandomForestRegressor()
    y_true, y_pred, _ = evaluate_model(lolo_rfr, X, y)
    print(f"model without boolean feature r2: {r2_score(y_true, y_pred):0.2f}")


def evaluate_model_with_boolean_feature_as_int(features_df: pd.DataFrame):
    # extract training examples from featurized dataframe (retain boolean feature as int)
    assert isinstance(features_df["compound possible"][0], np.bool_)
    features_df["compound possible"] = features_df["compound possible"].astype(int)
    assert isinstance(features_df["compound possible"][0], np.int_)

    feature_columns = set(features_df.columns) - set(
        ["composition", "melting_temperature", "pmg_composition"]
    )
    X = np.array(features_df[list(feature_columns)].values)
    y = np.array(features_df["melting_temperature"].values)

    lolo_rfr = LoloRandomForestRegressor()
    y_true, y_pred, _ = evaluate_model(lolo_rfr, X, y)
    print(f"model with boolean feature as int r2: {r2_score(y_true, y_pred):0.2f}")


if __name__ == "__main__":
    features_df = featurize_data("melting_temperatures_prb.csv")
    evaluate_model_with_boolean_feature(features_df)
    evaluate_model_without_boolean_feature(features_df)
    evaluate_model_with_boolean_feature_as_int(features_df)

melting_temperatures_prb.csv

Expose predictions made by individual trees in ensemble

It's helpful for various reasons to have access to the individual predictions made by each tree in the ensemble, in addition to the ensemble average, uncertainties, etc. that are already accessible (@gregor-robinson, you have the context for this).

It would be great to have this exposed in lolopy, perhaps through a dedicated alternative to .predict() similar to get_importance_scores().

Change in predictive uncertainties from lolopy v1.0.4 to v1.1.0

Train a lolo random forest on n=100 training points (x, y), where x ranges from -5 to 5 and y(x) = f(x) + eps, with f(x) = 2x + 1 and eps ~ N(0, 1).

In lolopy version 1.0.4, the mean predictive uncertainty is around 1, matching the aleatoric uncertainty / noise.

In the next published lolopy version 1.1.0, mean predictive uncertainties are around 0.5, which seems too low given the noise.

What changed, and why?

Here is an example with smlb:

n, m, xlen = 100, 600, 10
train_inputs = np.reshape(np.linspace(-xlen / 2, +xlen / 2, n), (n, 1))
train_labels = (train_inputs * 2 + 1).flatten()
train_data = smlb.TabularLabeledData(data=train_inputs, labels=train_labels)
train_data = smlb.LabelNoise(noise=smlb.NormalNoise(rng=0)).fit(train_data).apply(train_data)

valid_inputs = np.reshape(np.linspace(-xlen / 2, +xlen / 2, m), (m, 1))
valid_labels = (valid_inputs * 2 + 1).flatten()
valid_data = smlb.TabularLabeledData(data=valid_inputs, labels=valid_labels)
valid_data = smlb.LabelNoise(noise=smlb.NormalNoise(rng=1)).fit(valid_data).apply(valid_data)

rf = RandomForestRegressionLolo()
preds = rf.fit(train_data).apply(valid_data)
mae = smlb.MeanAbsoluteError().evaluate(valid_data.labels(), preds)

# for perfect predictions, expect MAE of 1.12943
# (absolute difference between draws from two unit normal distributions)
assert np.allclose(mae, 1.13, atol=0.25)
assert np.allclose(np.median(preds.stddev), 1, atol=0.5)

Erroneous error message

I get the error message "We need to have at least 8 rows, only 8 given" (see below) when running lolopy.learners.RandomForestRegressor. Regardless of whether the check itself makes sense, the message does not.

py4j.protocol.Py4JJavaError: An error occurred while calling o2.train.
: java.lang.IllegalArgumentException: requirement failed: 
        We need to have at least 8 rows, only 8 given
	at scala.Predef$.require(Predef.scala:277)
	at io.citrine.lolo.bags.Bagger.train(Bagger.scala:37)
	at io.citrine.lolo.learners.RandomForest.train(RandomForest.scala:70)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.base/java.lang.Thread.run(Thread.java:834)

Investigate using multinomial bootstrap

Bagger uses a Poisson bootstrap. This converges in probability to the ordinary multinomial bootstrap in the large data limit, but we should confirm it's a suitable approximation for our small data scenarios.
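
To make the distinction concrete, a small self-contained illustration of the two sampling schemes (this is not Bagger's code):

import scala.util.Random

// Poisson bootstrap: each row's count is drawn independently from Poisson(1),
// so the total resampled size is only n in expectation.
def poissonCount(rng: Random): Int = {
  val limit = math.exp(-1.0) // Knuth's method for Poisson with mean 1
  var count = -1
  var product = 1.0
  while (product > limit) { product *= rng.nextDouble(); count += 1 }
  count
}
def poissonBootstrapWeights(n: Int, rng: Random): Seq[Int] =
  Seq.fill(n)(poissonCount(rng))

// Multinomial (ordinary) bootstrap: draw exactly n rows with replacement.
def multinomialBootstrapWeights(n: Int, rng: Random): Seq[Int] = {
  val counts = Array.fill(n)(0)
  (1 to n).foreach(_ => counts(rng.nextInt(n)) += 1)
  counts.toSeq
}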

Use of lolopy RF regressor in Anaconda

Hi all,

I'm interested in applying this RF regressor. I have successfully used the RF regressor from scikit-learn with my training and testing data sets.

I have tried to use the lolopy RF regressor in Anaconda, but it doesn't work.

At the command prompt I ran: pip install lolopy

Code:

from lolopy.learners import RandomForestRegressor

model = RandomForestRegressor(num_trees=500)
model.fit(X_train, y_train)

y_pred, y_std = model.predict(X_test, return_std=True)

display(y_pred)
display(y_std)

I get this error:
FileNotFoundError: [WinError 2] The system cannot find the file specified

Can someone provide some guidance regarding what could be the issue?
Thanks!

Cross validation learner to implement `getLoss`

Right now, hyperparameter optimization is supported only with the bagged learner, which implements getLoss with out-of-bag estimates. A cross-validation learner could give any learner (e.g. ridge regression) a reasonable getLoss method and facilitate hyperparameter optimization.
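
For illustration, a generic k-fold sketch with hypothetical types (this is not lolo's Learner interface):

// Estimate a loss for any base learner by k-fold cross validation.
def crossValidationLoss[X, Y](
    trainingData: Seq[(X, Y)],
    trainBase: Seq[(X, Y)] => (X => Y), // base learner: data => model
    loss: (Seq[Y], Seq[Y]) => Double,   // e.g. RMSE for regression
    k: Int = 5
): Double = {
  val folds: Seq[Seq[(X, Y)]] =
    trainingData.zipWithIndex.groupBy(_._2 % k).values.map(_.map(_._1)).toSeq
  val foldLosses = folds.indices.map { i =>
    val holdOut = folds(i)
    val model = trainBase(folds.patch(i, Nil, 1).flatten)
    loss(holdOut.map { case (x, _) => model(x) }, holdOut.map(_._2))
  }
  foldLosses.sum / foldLosses.length
}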

Not able to reproduce results.

In the latest lolopy version (1.2.0), I fixed random_seed, but the results are still not reproducible (I have also fixed the numpy random seed). Can you please fix this or explain the reason?

Make linear regression numerically stable.

LinearRegressionLearner solves for the linear coefficients via a pseudoinverse, which is numerically unstable. It should be trivial to replace this with a LAPACK dgels or dgelsd call.

Gradient boosted machines

Gradient boosted trees can outperform random forests, given proper selection of hyperparameters. In lolo, gradient boosting could be a general component in learner composition, e.g. boosting two linear models against one another.
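
To sketch the composition idea, here is a minimal gradient-boosting loop for squared-error loss (hypothetical function types, not lolo's Learner API): each new base model is fit to the residuals of the running ensemble.

// Boost a base learner for a fixed number of rounds with a given learning rate.
def boost(
    trainBase: (Seq[Vector[Double]], Seq[Double]) => (Vector[Double] => Double),
    xs: Seq[Vector[Double]],
    ys: Seq[Double],
    rounds: Int,
    learningRate: Double = 0.1
): Vector[Double] => Double = {
  var models = List.empty[Vector[Double] => Double]
  var residuals = ys
  (1 to rounds).foreach { _ =>
    val m = trainBase(xs, residuals)
    models = m :: models
    // For squared loss, the negative gradient is just the current residual.
    residuals = xs.zip(residuals).map { case (x, r) => r - learningRate * m(x) }
  }
  x => models.map(m => learningRate * m(x)).sum
}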
