
goose's Introduction

Goose: A Powerful Load Testing Framework

What is Load Testing?

Load testing is a critical step in ensuring your web application can handle real-world traffic patterns. It helps you identify performance bottlenecks, optimize resource allocation, and ensure a seamless user experience.

Why Choose Goose?

  • Fast and Scalable: Built with Rust, Goose is designed for speed and scalability.

  • Flexible and Customizable: Supports simple and complex load tests, tailored to mimic real-world user behavior.

  • Realistic User Behavior Simulation: Goes beyond just sending requests; simulates user behaviors like logging in, filling out forms, and navigating through your application.

  • Have you ever been attacked by a goose?

Getting Started

To use Goose, you'll need to write a Rust application using the Goose library. Then, compile it to create a tailored load testing tool specific to your needs.

You may find the following resources helpful:

Simple Example

Check out our examples on GitHub. You can also use Goose Eggs, a helper crate that provides useful functions for writing load tests, such as validation helpers for HTTP responses.

Community and Support

Developed by Tag1 Consulting, Goose has a growing community and a series of blog posts and podcasts detailing its features, comparisons with other tools, and real-life testing scenarios.


goose's People

Contributors

alexliesenfeld, alsuren, bbros-dev, elijahlynn, epompeii, finnefloyd, jcarres-mdsol, jeremyandrews, kazimir-malevich, lionsad, medwards, michael-hollister, mtsbucy1, nicompte, nnewton, playpauseandstop, psibi, raffaeleragni-virtualminds, s7evink, slashrsm, vemoo, vilinski, yds12, zoicho


goose's Issues

allow custom configuration of Reqwest client

Currently Goose configures the Reqwest client as follows:

use reqwest::Client;

static APP_USER_AGENT: &str = concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION"));

let builder = Client::builder()
  .user_agent(APP_USER_AGENT)
  .cookie_store(true);

So far this seems to be a good default, but it needs to be possible to change this configuration. For example, to change the USER_AGENT, or to set default headers on all requests. All available options are defined here:
https://docs.rs/reqwest/*/reqwest/struct.ClientBuilder.html

[Feature request] The ability to specify request per second

It would be useful to be able to specify load in other ways, not just in users.
At least in my environment we always look at response times with respect to the req/s coming into the system: something like "the 99th percentile of my response time is no larger than X ms given xx req/s or less".

Tweaking the users parameter can approximate the desired req/s, but that is manual tuning and not portable between machines. It would be fantastic if I could pass a parameter to Goose and have it try to more or less keep the request rate around the number given.

I am not sure how to deal with big numbers: if a user asks for 100,000 req/s, does Goose start gazillions of threads trying to accommodate? So maybe this feature is limited to lowish numbers, 50 req/s or something like that, at least in a first iteration.

upgrade httpmock, remove mockito

Version 0.4 of httpmock gained the ability to start multiple servers on different ports, so we no longer need to install two mocking libraries. Upgrade httpmock, and remove mockito.

add support for startup/teardown on_test_start/on_test_stop

In current versions of Locust this is referred to as on_test_start and on_test_stop:
https://docs.locust.io/en/latest/writing-a-locustfile.html#test-start-and-test-stop-events

In earlier versions of locust this was referred to as startup and teardown:
https://docs.locust.io/en/0.14.6/writing-a-locustfile.html#setups-teardowns-on-start-and-on-stop

In all cases, this allows for the creation of a task that runs only once, at the very start and the very end of the test, regardless of how many clients or workers there are (versus on_start and on_stop, which run once per client).

We're using startup on a client Locust test, and will need this functionality to properly convert the test to Goose.

update prelude documentation

The prelude documentation hasn't been updated as the prelude changes. It should be up-to-date with the actual current prelude.

de-duplicate testing code

There is a considerable amount of duplicated code between integration tests. Leverage and extend the work started in #125 to move more common code into common.rs and remove code duplication.

Some discussion here: #129 (review)

also optionally provide per-task-function statistics

Currently statistics are per-request. With #103, tasks return a Result, giving us insight into how often the entire Task succeeds and/or fails. This issue is to optionally provide statistics about this at the end of a load test.

One specific use-case of this feature would be to confirm that all task sets have been assigned to a GooseUser, and that all contained tasks have actually run. These statistics would provide insight into how often each Task is running, and whether any were skipped (ie, due to weighting and too few users).

Abstract client response

Currently if you need to change whether a test succeeded or failed, Goose assumes you're referring to the previous request. (And in PR #37 it requires you to tell it the method and path you're changing).

Per @LionsAd 's feedback in #37, a better solution is for calls to Client to return a GooseResponse which includes the Reqwest Response but also the request. This makes calls like client.set_failure(response) much simpler, cleaner, and more reliable.

not testing actual run-time flags and options

During a recent --help and general documentation cleanup, I reworked the Goose --expect-workers option to use the Structopt required_if attribute, as it is required if --master is enabled. The implementation actually seems to be Clap's required_if.

Possibly it's as simple as --master being a flag, not an option, and required_if only working with options -- this isn't clear to me from the documentation.

The bigger problem is that our tests did not catch this; only manually running Goose caught it. This issue is to follow up so we are properly testing configuration options, to avoid something similar slipping through again. (When actually running Goose, we use GooseConfiguration::from_args() to instantiate the configuration struct. When testing, we use tests/common/build_configuration() to manually construct a configuration struct.)

increase granularity of statistics reporting

Currently statistics are displayed as integers. With lower numbers, this can lead to confusing results. For example, a few different URLs may each be requested 0.95 times per second: each shows up as making 0 requests per second, while the aggregated total shows 5 requests per second.

Statistics are designed to be displayed in a standard terminal window, which doesn't give a lot of width to work with. Likely the logic needs to be improved so the value is displayed as a float, with more decimals shown for lower numbers than for higher numbers.
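One way the adaptive-precision idea above could look: a small formatting helper that widens the number of decimal places as the rate gets smaller. This is an illustrative sketch, not Goose's actual implementation; the function name and thresholds are assumptions.

```rust
// Hypothetical helper: show more decimal places for small rates so a
// URL requested 0.95 times per second is not truncated to "0".
fn format_rate(rate: f64) -> String {
    if rate >= 100.0 {
        format!("{:.0}", rate) // 1234 req/s: decimals add no value
    } else if rate >= 1.0 {
        format!("{:.1}", rate) // 9.5 req/s
    } else {
        format!("{:.2}", rate) // 0.95 req/s instead of 0
    }
}

fn main() {
    assert_eq!(format_rate(0.95), "0.95");
    assert_eq!(format_rate(9.5), "9.5");
    assert_eq!(format_rate(1234.0), "1234");
    println!("ok");
}
```

The output stays narrow enough for a terminal table column while no longer collapsing fractional rates to zero.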

automated test of actual load test

Currently all our automated tests are limited to contained functions. What's not yet tested is actually creating and executing a load test. I imagine this requires some sort of mock server to run the load test against.

This ticket is to test any load test at all. Once that's working, we can enhance to test various variations (ie, tests w/ on_start tasks, a test_start_task task, tests w/ only on_start tasks, gaggles, etc) in followup tickets.

Provide a per-client-thread mutable bucket for arbitrary data

When working with Locust, you can arbitrarily add fields to the client object, and this is available anywhere the client is invoked. This ultimately allows the sharing of arbitrary data between tasks within a taskset.

My general idea is to add a single field to the GooseClientState struct, which itself is either a HashMap or a BTreeMap of mutex-wrapped-generics. Then, when writing a load test, you could optionally create a custom Struct with whatever fields you want to share between tasks within a taskset and read/write from/to it as needed.

There are advantages to both structures, likely it would be best to add two buckets to the load test: one that's a HashMap, and one that's a BTreeMap. If a load test doesn't use a bucket, there's not enough overhead to matter. In general the preference would be to use the HashMap for best performance, and the BTreeMap when you need to do more complex comparisons etc.

A nice overview of the two collection types, and the advantages/disadvantages of each:
https://doc.rust-lang.org/std/collections/index.html
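A minimal sketch of what the per-client bucket could look like, assuming a HashMap of boxed Any values keyed by name. The struct and method names here are illustrative, not Goose's API; a real implementation would need to decide on mutex wrapping for cross-task sharing.

```rust
use std::any::Any;
use std::collections::HashMap;

// Hypothetical per-client state with a "bucket" for arbitrary
// load-test data shared between tasks in a task set.
#[derive(Default)]
struct ClientState {
    bucket: HashMap<String, Box<dyn Any + Send>>,
}

impl ClientState {
    // Store any Send value under a string key.
    fn set<T: Any + Send>(&mut self, key: &str, value: T) {
        self.bucket.insert(key.to_string(), Box::new(value));
    }
    // Retrieve a value, downcasting back to its concrete type.
    fn get<T: Any>(&self, key: &str) -> Option<&T> {
        self.bucket.get(key).and_then(|v| v.downcast_ref::<T>())
    }
}

fn main() {
    let mut state = ClientState::default();
    state.set("session_token", "abc123".to_string());
    state.set("login_count", 3u32);
    assert_eq!(state.get::<String>("session_token").unwrap(), "abc123");
    assert_eq!(*state.get::<u32>("login_count").unwrap(), 3);
    println!("ok");
}
```

A load test could stash a session token in on_start and read it back from any later task in the same task set.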

Options for running statistics and summary statistics

Currently statistics are enabled or disabled "globally": either both the running statistics (displayed every 15 seconds) and the final summary appear, or neither does.
In my use case I love the final statistics but pretty much want to get rid of the running statistics, and that is not possible. All the relevant structs are also private, so this can't be worked around from outside the crate either.

--expect-workers not required unless --master enabled

A recent documentation update has regressed running a standard load-test. It is incorrectly claiming --expect-workers is required even when not running in manager mode. For example:

 cargo run --example drupal_loadtest -- -H http://local.dev/ -t10 --throttle-requests 5
    Finished dev [unoptimized + debuginfo] target(s) in 0.10s
     Running `target/debug/examples/drupal_loadtest -H 'http://local.dev/' -t10 --throttle-requests 5`
error: The following required arguments were not provided:
    --expect-workers <expect-workers>

USAGE:
    drupal_loadtest --debug-log-file <debug-log-file> --debug-log-format <debug-log-format> --expect-workers <expect-workers> --hatch-rate <hatch-rate> --host <host> --log-file <log-file> --manager-bind-host <manager-bind-host> --manager-bind-port <manager-bind-port> --manager-host <manager-host> --manager-port <manager-port> --metrics-log-file <metrics-log-file> --metrics-log-format <metrics-log-format> --run-time <run-time> --throttle-requests <throttle-requests>

If you follow its directions, it correctly tells you it's wrong:

$ cargo run --example drupal_loadtest -- -H http://local.dev/ -t10 --throttle-requests 5 --expect-workers=1
    Finished dev [unoptimized + debuginfo] target(s) in 0.09s
     Running `target/debug/examples/drupal_loadtest -H 'http://local.dev/' -t10 --throttle-requests 5 --expect-workers=1`
Error: InvalidOption { option: "--expect-workers", value: "1", detail: Some("--expect-workers is only available when running in manager mode") }

preserve actual URL requested

Load tests typically define the URI that will be loaded, and then the actual URL is built at run-time based on which environment is being tested against. The URL used should be included in the GooseRawRequest for debugging and logging purposes.

change signature of task function to return a Result

Currently Goose Task functions do not return anything, preventing us from using the ? shortcut to handle errors. This requires the use of more boilerplate code when writing load tests than is desirable. This issue is to implement the standard Rust pattern for returning a Result<T, E>.

Once implemented, from load test functions, on success we'll need to return Ok(), on failure we'll need to return Err().

document public modules

On docs.rs the goose module has a description, but the logger and prelude modules do not. Add comments.

provide helpers for setting custom defaults for flags and options

It's currently possible to set a default hostname for the load test, removing the requirement to pass --host= when running from the command line (unless you want to override the default).

It should be possible to do this for almost all CLI options. This offers a few advantages: 1) you can hand a compiled and fully configured load test to a team to run, without needing to explain how to set a complicated assortment of flags and options; 2) when controlling Goose from another application (for example a UI), you may not even be invoking the tool from the command line.

So in addition to .set_host(), we'd also have .set_users(), .set_hatch_rate(), etc. As with set_host, each would set a default that could be overridden by invoking the appropriate command-line option/flag.

statistics calculations taking too much CPU

Especially when running lengthy/large load tests, the statistics calculations take up a large amount of CPU power. The sheer amount of data being collected by clients and passed to the parent is also slowing things down.

Need to optimize response-time tracking similar to how Locust does it, with some rounding and by maintaining running counters.
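The rounding idea can be sketched as follows: keep full precision for small response times and progressively coarser buckets for larger ones, so running counters stay compact. The exact thresholds here are illustrative assumptions, not Locust's or Goose's actual constants.

```rust
// Hypothetical Locust-style rounding of response times (in ms) so that
// running counters track a bounded number of distinct values.
fn round_response_time(ms: u64) -> u64 {
    if ms < 100 {
        ms // small values kept exact
    } else if ms < 1_000 {
        (ms as f64 / 10.0).round() as u64 * 10 // nearest 10 ms
    } else if ms < 10_000 {
        (ms as f64 / 100.0).round() as u64 * 100 // nearest 100 ms
    } else {
        (ms as f64 / 1_000.0).round() as u64 * 1_000 // nearest second
    }
}

fn main() {
    assert_eq!(round_response_time(23), 23);
    assert_eq!(round_response_time(347), 350);
    assert_eq!(round_response_time(3_444), 3_400);
    assert_eq!(round_response_time(12_345), 12_000);
    println!("ok");
}
```

Incrementing a counter per rounded value (rather than storing every raw sample) bounds both memory and the CPU cost of computing percentiles.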

Task: Create a loadtest test that is fed by a generator function

In general:

Make test writing as simple as:

    // Configure endpoints to test.
    let test_endpoints = vec![
        LoadtestEndpoint {
            method: "GET",
            path: INDEX_PATH,
            status: 200,
            weight: 9,
        },
        LoadtestEndpoint {
            method: "GET",
            path: ABOUT_PATH,
            status: 200,
            weight: 3,
        },
    ];

architect support for multiple clients and protocols

Goose currently makes HTTP/HTTPS requests with the Reqwest client.

The intent is to support multiple HTTP/HTTPS clients (ie, Isahc, curl, hyper, ...), as well as other protocols (ie, gRPC, ...).

This ticket is to decide how to best support multiple clients and protocols.

  1. My initial thought was to add protocol/client source files. For example, splitting the current HTTP support into src/http/reqwest.rs and src/http/common.rs. This could be combined with feature flags, which could be used to enable the different clients depending on your load testing needs (and would be necessary to avoid compiling unused dependencies).

  2. My second idea is to split Goose into multiple libraries. So, you'd have a goose library which is the common functionality needed by all protocols and clients. Then you'd have a goose_http which depends on goose and adds the shared http functionality but wouldn't do anything useful itself. And finally you'd have a goose_reqwest library which would expose the actual client. In this design, a load test would depend on ie the goose_reqwest library instead of on goose itself.

[BUG] It is not safe to initialize the logger more than once

Description

Expected behavior: Tests pass
Observed behavior: Tests fail with stack overflow error

Backtrace (from rust-lldb):

    frame #19619: 0x0000000100c3b72c setup_teardown-b971616726328643`core::fmt::write::h8ef98027ac1df1be at mod.rs:1076:17 [opt]
    frame #19620: 0x0000000100c3c83e setup_teardown-b971616726328643`core::fmt::Formatter::write_fmt::hf86f2b0bebdd7d52 at mod.rs:1505:9 [opt]
    frame #19621: 0x00000001000cc211 setup_teardown-b971616726328643`_$LT$ctrlc..error..Error$u20$as$u20$core..fmt..Display$GT$::fmt::h321ae19602832f09(self=&0x700009555838, f=&0x700009554460) at error.rs:24:9
    frame #19622: 0x00000001000cc0f8 setup_teardown-b971616726328643`_$LT$$RF$T$u20$as$u20$core..fmt..Display$GT$::fmt::h8da1869c7b991d97(self=&0x700009554560, f=&0x700009554460) at mod.rs:1981:62
    frame #19623: 0x0000000100c3b72c setup_teardown-b971616726328643`core::fmt::write::h8ef98027ac1df1be at mod.rs:1076:17 [opt]
    frame #19624: 0x0000000100c3c83e setup_teardown-b971616726328643`core::fmt::Formatter::write_fmt::hf86f2b0bebdd7d52 at mod.rs:1505:9 [opt]
    frame #19625: 0x00000001000cc211 setup_teardown-b971616726328643`_$LT$ctrlc..error..Error$u20$as$u20$core..fmt..Display$GT$::fmt::h321ae19602832f09(self=&0x700009555838, f=&0x7000095545d0) at error.rs:24:9
    frame #19626: 0x0000000100c3b72c setup_teardown-b971616726328643`core::fmt::write::h8ef98027ac1df1be at mod.rs:1076:17 [opt]
    frame #19627: 0x0000000100c3b51b setup_teardown-b971616726328643`_$LT$core..fmt..Arguments$u20$as$u20$core..fmt..Debug$GT$::fmt::h75ecacb307b1ffb2 [inlined] _$LT$core..fmt..Arguments$u20$as$u20$core..fmt..Display$GT$::fmt::h8d1074c567f05b9f at mod.rs:422:9 [opt]
    frame #19628: 0x0000000100c3b4d8 setup_teardown-b971616726328643`_$LT$core..fmt..Arguments$u20$as$u20$core..fmt..Debug$GT$::fmt::h75ecacb307b1ffb2 at mod.rs:415 [opt]
    frame #19629: 0x00000001007dc898 setup_teardown-b971616726328643`_$LT$$RF$T$u20$as$u20$core..fmt..Display$GT$::fmt::hb2dc8ebe93d2ba7d(self=&0x700009555470, f=&0x7000095546d0) at mod.rs:1981:62
    frame #19630: 0x0000000100c3b72c setup_teardown-b971616726328643`core::fmt::write::h8ef98027ac1df1be at mod.rs:1076:17 [opt]
    frame #19631: 0x000000010005774f setup_teardown-b971616726328643`std::io::Write::write_fmt::he282639b82d1b4ff(self=&0x102702cd4, fmt=<unavailable>) at mod.rs:1537:15
    frame #19632: 0x000000010007ed21 setup_teardown-b971616726328643`_$LT$simplelog..loggers..writelog..WriteLogger$LT$W$GT$$u20$as$u20$log..Log$GT$::log::h33d7cf67046c67cd [inlined] simplelog::loggers::logging::write_args::hc28e0079714279aa(record=&0x700009555608, write=&0x102702cd4) at mod.rs:475:9

In essence, an endless loop in formatting, which might be a Rust core bug.

Tests run in parallel, but even when running them serially, re-initializing the logger fails catastrophically.

Workaround

Comment out self.initialize_logger() and the tests pass again.

Analysis

  • Logger MUST not be initialized twice
  • Logger right now is happily re-opening write access to the same file => race conditions

Proposed resolution

=> Use some STATIC to ensure this never happens

NNG Usage

I saw Jeremy's post on Reddit about Goose and decided to check it out, since this is the first time I've seen someone other than me use my NNG crate. Going through the code, I had a couple of minor suggestions:

  1. Why is ACTIVE_WORKERS inside of a mutex? Being both atomic and behind a mutex seems redundant when you never access it without the lock.
  2. There are times when you serialize to a buffer then immediately copy it to a Message. You can make the code faster by directly serializing to a Message or cleaner by directly submitting the Vec<u8> to Socket::{try_}send.
  3. You often convert the Message to a slice before deserializing from it - you should be able to read directly from the message. This won't change performance, but it might be a little cleaner.
  4. The NNG crate has a v1.0.0-rc2.2 which has a few minor API changes and reflects what will probably be the stable API. My "release schedule" for this crate is largely based on when I'm able to get papers submitted to conferences, so it's really only stuck as an RC because I haven't submitted anything since I pushed it.

Those suggestions aside, the main reason I'm opening this issue is that I would love any feedback you have on the API. I know there are a couple of people using the crate but none (until now) have code I can look through or a place where I can request feedback. I like to think I've made some solid design decisions, but I only have my personal thoughts to work off of.

One thing I would specifically appreciate input on is how to handle the fact that NNG requires a relatively new version of CMake that isn't available on any of the Ubuntu LTS releases. If I were the only one using this code, I would just not upgrade nng-rs until something critical is needed from a newer version of NNG, and I'd figure it out from there. As such, I would definitely appreciate your thoughts on how you would like to see it handled (e.g.: Is LTS support important? Are you fine with using an updated CMake?)

Thanks!
Nate

sort statistics

Currently Goose displays statistics in a random order. This makes it difficult to compare results of multiple tests. It would be much easier to compare results if statistics were sorted and always displayed in the same order.
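One straightforward way to get a stable display order is to aggregate per-request statistics into an ordered map keyed by something like "METHOD path". This is a sketch of the approach, not Goose's implementation; the key format and helper name are assumptions.

```rust
use std::collections::BTreeMap;

// Aggregate request counts keyed by "METHOD path". BTreeMap iterates in
// sorted key order, so statistics always display in the same order.
fn aggregate<'a>(requests: &[(&'a str, u64)]) -> BTreeMap<&'a str, u64> {
    let mut counts = BTreeMap::new();
    for &(key, count) in requests {
        *counts.entry(key).or_insert(0) += count;
    }
    counts
}

fn main() {
    let requests = [
        ("GET /about", 12),
        ("GET /", 87),
        ("POST /login", 5),
        ("GET /", 3),
    ];
    let counts = aggregate(&requests);
    let keys: Vec<&str> = counts.keys().copied().collect();
    // Output order is deterministic run-to-run, easing comparisons.
    assert_eq!(keys, vec!["GET /", "GET /about", "POST /login"]);
    assert_eq!(counts["GET /"], 90);
    println!("ok");
}
```

Alternatively the existing HashMap could be kept for collection and its keys sorted once at display time, which avoids BTreeMap overhead on the hot path.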

refactor GooseAttack to return a Result

Currently GooseAttack doesn't return anything. This issue is to refactor the code to return useful errors, and to make collected statistics available for better integration into other applications (and tests).

standardize error details

We have several variations on error details with the same intent:

  • --no-hash-check is only available to the manager
  • --no-hash-check is only available when running in manager mode
  • --users option only available to manager process
  • --debug-log-file can only be enabled in stand-alone or worker mode
  • --throttle-requests can only be enabled in stand-alone mode or worker mode

Fabian proposes a different wording:

  • can only be set on the manager

Often "only available to the manager" isn't correct, as the option is also available in stand-alone mode as well. Maybe better, "this option can not be set on the worker" ?

write/send statistics

Currently Goose processes all statistics internally, and only shares them as optional running-stats during the test and a final summary at the end. With #37, client threads push raw statistics to the parent, so it's now trivial for the parent to also optionally write these statistics to a log file, or send them to a remote service.

We need to define the format of the logs written/sent.

Locally, to start, it's enough to write to a log file in a consistent CSV format. Something like:

timestamp, method, url, name, response_time, status_code, success, update

Remotely, to start, it's enough to send logs in a consistent JSON format.

Currently threads only send method and name, we'll have to add url to GooseRawRequest so it can be included in logs.
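The proposed CSV line could be rendered as below. The struct and field types are a sketch mirroring the field list above, not the actual GooseRawRequest definition.

```rust
// Hypothetical raw-request record matching the proposed CSV field order:
// timestamp, method, url, name, response_time, status_code, success, update
struct RawRequest {
    timestamp: u64,
    method: &'static str,
    url: &'static str,
    name: &'static str,
    response_time: u64,
    status_code: u16,
    success: bool,
    update: bool,
}

fn to_csv_line(r: &RawRequest) -> String {
    format!(
        "{},{},{},{},{},{},{},{}",
        r.timestamp, r.method, r.url, r.name,
        r.response_time, r.status_code, r.success, r.update
    )
}

fn main() {
    let r = RawRequest {
        timestamp: 1592501234,
        method: "GET",
        url: "http://local.dev/login",
        name: "/login",
        response_time: 43,
        status_code: 200,
        success: true,
        update: false,
    };
    assert_eq!(
        to_csv_line(&r),
        "1592501234,GET,http://local.dev/login,/login,43,200,true,false"
    );
    println!("ok");
}
```

A JSON serialization of the same struct (ie, via serde) would cover the remote-service case with the identical field set.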

subtract overhead from delay between starting users and throttling requests

When starting users and when throttling requests, the code is currently naive about how long it delays, always sleeping for a set amount of time regardless of how long is spent on the required logic. The delay logic should be enhanced to subtract this overhead.

In the current implementation, there's a very slow drift visible if you enable the statistics logs and review the timestamps. When fixed, requests should be more consistently grouped into a steady cadence that's not drifting as users are launched and/or a throttled test runs.
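The fix can be sketched with std::time: measure how long the per-iteration work took and sleep only for the remainder of the interval, rather than a fixed amount. The helper name is an assumption; timing tolerances below are deliberately loose.

```rust
use std::time::{Duration, Instant};

// Sketch of drift-free delays: subtract the elapsed work time from the
// target interval instead of always sleeping the full interval.
fn remaining_delay(interval: Duration, work_started: Instant) -> Duration {
    interval.saturating_sub(work_started.elapsed())
}

fn main() {
    let interval = Duration::from_millis(50);
    let start = Instant::now();
    // Simulate ~10 ms of request/bookkeeping overhead.
    std::thread::sleep(Duration::from_millis(10));
    let delay = remaining_delay(interval, start);
    // The compensated delay is shorter than the full interval...
    assert!(delay < interval);
    std::thread::sleep(delay);
    // ...so total elapsed is close to one interval, not interval + overhead.
    assert!(start.elapsed() >= interval);
    println!("ok");
}
```

With this pattern the cadence of launched users (or throttled requests) stays anchored to the interval instead of drifting by the per-iteration overhead.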

goose currently assumes load tests have normal tasks

If you write a load test that only has on_start and/or on_stop tasks, Goose clients will repeatedly display the following panic:

thread 'main' panicked at 'index out of bounds: the len is 0 but the index is 0', /rustc/8d69840ab92ea7f4d323420088dd8c9775f180cd/src/libcore/slice/mod.rs:2842:10

This is because the code currently assumes the TaskSet will have 1 or more normal tasks.

handle redirects

Finalize how Goose handles redirects and tracks related information. Decide whether any configuration is Goose-specific, or should happen at the client level (ie, currently w/in Reqwest).

As a followup to #64 GooseRawRequest should store the final URL, as well as how many redirects happened to get there.

increase granularity of statistics

Requests per second and failures per second are typically fractional numbers, but we only display integers. This can result in our metrics displaying "0 req/s" instead of a more accurate fraction.

redirect of base_url should be sticky

Some websites use multiple domains to serve traffic, redirecting depending on the user's role. For this reason, Goose needs to respect a redirect of the base_url, and subsequent paths should be built from the redirected domain.

For example, if the base_url (ie --host) is set to foo.example.com and the load test requests /login, thereby loading http://foo.example.com/login and this request gets redirected to http://foo-secure.example.com/, subsequent requests made by this client need to be against the new foo-secure.example.com domain. (Further, if the base_url is again redirected, such as when loading http://foo-secure.example.com/logout, the client should again follow for subsequent requests, perhaps in this case back to foo.example.com.)

Load tests can also request absolute URLs, and if these URLs are redirected it should not affect the base_url of the load test. For example, if foo.example.com is the base URL and the load test requests http://bar.example.com and gets redirected to http://other.example.com, subsequent relative requests would still be made against foo.example.com.

If the load test requests
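The sticky/non-sticky distinction above can be sketched in a few lines: update the client's base_url only when a relative request lands on a different host. Plain string handling stands in for a real URL parser here, and the function names are illustrative assumptions.

```rust
// Extract the host portion of a URL, e.g. "http://a.example.com/x" -> "a.example.com".
fn host_of(url: &str) -> &str {
    let rest = url.splitn(2, "://").nth(1).unwrap_or(url);
    rest.splitn(2, '/').next().unwrap_or(rest)
}

// Sticky base_url: a redirected *relative* request moves the base_url to
// the new host; a redirected *absolute* request leaves it untouched.
fn update_base_url(base_url: &mut String, was_relative: bool, final_url: &str) {
    if was_relative && host_of(base_url) != host_of(final_url) {
        *base_url = format!("http://{}", host_of(final_url));
    }
}

fn main() {
    let mut base_url = String::from("http://foo.example.com");

    // /login redirected to the secure domain: base_url follows.
    update_base_url(&mut base_url, true, "http://foo-secure.example.com/");
    assert_eq!(base_url, "http://foo-secure.example.com");

    // An absolute request being redirected does not affect base_url.
    update_base_url(&mut base_url, false, "http://other.example.com/");
    assert_eq!(base_url, "http://foo-secure.example.com");
    println!("ok");
}
```

A production version would also need to carry the scheme and port from the redirect, which this sketch ignores.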

display stats/details before reset when `--reset-stats` is enabled

When enabled, the --reset-stats flag causes Goose to flush all statistics once all GooseUsers have launched. This ensures that the subsequent averages are accurate and don't include the ramp-up time.

This issue is to enhance this so we display all statistics collected before flushing them, if we are displaying statistics at all. When displaying these stats, it should also be visually clear that all GooseUsers have now launched and that these are the ramp-up statistics.

is it possible to pass paths programmatically?

This is probably not really an issue, but rather my own inability with Rust.

I have a project that reads from an openapi file to generate tests. I thought I could integrate performance tests also via Goose.

First I create the main structs needed

    let mut configuration = GooseConfiguration::default();
    configuration.hatch_rate = 1; 
    configuration.host = config.base_url.clone();
    let goose_attack = GooseAttack::initialize_with_config(configuration);
    let mut task_set = taskset!("ExampleTasks");

This works. By the way, although there is a default of 1 for hatch_rate in the GooseConfiguration struct, somehow I needed to set it manually to 1 to make it work.

Then in my codebase I have a loop where I execute against every path specified in the openapi spec, I want to pass the path, it would look like this:

for path in paths {
        let task = GooseTask::new(move |s| std::boxed::Box::pin(website_with_path(s, &path)));
        task_set = task_set.register_task(task);
}

goose_attack
    .register_taskset(
        task_set)
    .execute();

And it almost works. If instead of passing path to the task I pass "", it works. But with path, the compiler complains that the closure does not match the expected function signature.

As I said, this is probably my inability with Rust; I'm clueless about how to proceed.
Still, I thought it would be useful to write an example of how to do this, as I would imagine programmatically setting paths is a common case.
I am willing to write a minimal example once I can make it work.
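The usual fix for this kind of compile error is to move an owned String into each closure, rather than a reference to the loop variable. A std-only sketch of the pattern (boxed closures stand in for the Goose task functions; names are illustrative):

```rust
// Build one boxed closure per path. Because `path` is an owned String
// moved into each closure, no borrow outlives the loop iteration.
fn build_tasks(paths: Vec<String>) -> Vec<Box<dyn Fn() -> String>> {
    let mut tasks: Vec<Box<dyn Fn() -> String>> = Vec::new();
    for path in paths {
        tasks.push(Box::new(move || format!("GET {}", path)));
    }
    tasks
}

fn main() {
    let tasks = build_tasks(vec!["/users".to_string(), "/items".to_string()]);
    assert_eq!(tasks[0](), "GET /users");
    assert_eq!(tasks[1](), "GET /items");
    println!("ok");
}
```

Applied to the snippet above, that would mean passing an owned `path` (not `&path`) into the async function called inside the `GooseTask::new` closure, so each task owns its own copy.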

statistics update from set_failure or set_success are inflating statistics

When calling set_failure or set_success we flag that we're sending an update, but the parent process is not checking this flag. This means each time set_failure or set_success is called, we increment failure or success without decrementing the previous increment, resulting in inflated statistics.
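The parent-side fix can be sketched as: when the update flag is set, reclassify the previously recorded request instead of counting a new one. This is an illustrative sketch of the bookkeeping, not Goose's actual parent logic; the type and method names are assumptions.

```rust
// Hypothetical parent-side success/failure counters.
#[derive(Default)]
struct Counts {
    success: u64,
    failure: u64,
}

impl Counts {
    // `update == true` means the client is reclassifying its previous
    // request, so move a count between buckets rather than adding one.
    fn record(&mut self, success: bool, update: bool) {
        if update {
            if success {
                self.failure = self.failure.saturating_sub(1);
                self.success += 1;
            } else {
                self.success = self.success.saturating_sub(1);
                self.failure += 1;
            }
        } else if success {
            self.success += 1;
        } else {
            self.failure += 1;
        }
    }
}

fn main() {
    let mut counts = Counts::default();
    counts.record(true, false);  // request initially recorded as a success
    counts.record(false, true);  // set_failure: reclassify, don't inflate
    assert_eq!((counts.success, counts.failure), (0, 1));
    // Total request count stays at 1 rather than growing to 2.
    assert_eq!(counts.success + counts.failure, 1);
    println!("ok");
}
```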
