shotover-proxy's Issues

Single cluster error with redis-bench

Using redis-bench with -P 1 against a new Shotover build and the single-cluster topology below results in this error:

thread 'RPProxy-Thread' panicked at 'index out of bounds: the len is 2 but the index is 2', /home/rust/.cargo/git/checkouts/redis-rs-4c7ae24e54934319/e315517/src/cmd.rs:631:17

Topo file


sources:
  redis_prod:
    Redis:
      batch_size_hint: 10
      listen_addr: "127.0.0.1:6379"
chain_config:
  redis_chain:
  - RedisCluster:
      first_contact_points: ["redis://3.211.142.207"]
named_topics:
- testtopic
source_to_chain_mapping:
  redis_prod: redis_chain

Implement support for -MOVED response

See https://redis.io/topics/cluster-spec for details:

If the hash slot is served by the node, the query is simply processed, otherwise the node will check its internal hash slot to node map, and will reply to the client with a MOVED error, like in the following example:

Example:
GET x
-MOVED 3999 127.0.0.1:6381

The error includes the hash slot of the key (3999) and the ip:port of the instance that can serve the query. The client needs to reissue the query to the specified node's IP address and port. Note that even if the client waits a long time before reissuing the query, and in the meantime the cluster configuration changed, the destination node will reply again with a MOVED error if the hash slot 3999 is now served by another node. The same happens if the contacted node had no updated information.

So while from the point of view of the cluster nodes are identified by IDs we try to simplify our interface with the client just exposing a map between hash slots and Redis nodes identified by IP:port pairs.
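A minimal sketch of what the redirection handling needs to parse, assuming the error line has already been pulled out of the RESP error frame (names are illustrative, not shotover's actual API):

// parses the payload of a RESP error frame like "MOVED 3999 127.0.0.1:6381"
fn parse_moved(error: &str) -> Option<(u16, String)> {
    let mut parts = error.split_whitespace();
    if parts.next()? != "MOVED" {
        return None;
    }
    let slot: u16 = parts.next()?.parse().ok()?;
    let address = parts.next()?.to_string();
    Some((slot, address))
}

#[test]
fn parses_moved() {
    let parsed = parse_moved("MOVED 3999 127.0.0.1:6381");
    assert_eq!(parsed, Some((3999, "127.0.0.1:6381".to_string())));
}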

Potential difference in client and shotover load balancing behaviours with Redis cluster

There may be differences in the way that a client (with cluster awareness) handles load balancing vs. the way that shotover handles load balancing. For example, I've noticed that redis-bench with cluster mode enabled results in a set load on 2 master nodes, and 2 replica nodes (for a 3 master, 3 replica cluster). It's odd that it doesn't use all 3 master nodes.

The same client (without cluster mode) via shotover results in different behaviour: with sets, 1 master and 1 replica are used; with gets, 1 master is used. Can we make shotover's cluster load balancing behaviour configurable (many clients support either master-only, or master and replica load balancing), allow all master nodes to be used for better load distribution (for sets), and output confirmation on shotover startup about which nodes are being used (and how) for load balancing?

Support TLS connections

We need to support TLS-based TCP connections, to support the various encrypted database protocols that make use of them.

Our first target will be compatibility with Redis and Cassandra TLS.

The following is TBD:
  • Certificate store format
  • Do we need to support certificate authentication (e.g. identity by cert)?
  • Do we need to enforce trust (e.g. client cert signed by a cert shotover trusts)?
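As a starting point for the implementation side, a minimal sketch of accepting TLS connections with tokio-rustls (assuming rustls 0.20-era APIs; certificate loading and the open questions above are elided):

use std::sync::Arc;
use tokio::net::TcpListener;
use tokio_rustls::rustls::{Certificate, PrivateKey, ServerConfig};
use tokio_rustls::TlsAcceptor;

async fn serve_tls(certs: Vec<Certificate>, key: PrivateKey) -> anyhow::Result<()> {
    let config = ServerConfig::builder()
        .with_safe_defaults()
        .with_no_client_auth() // client cert auth is one of the open questions above
        .with_single_cert(certs, key)?;
    let acceptor = TlsAcceptor::from(Arc::new(config));

    let listener = TcpListener::bind("127.0.0.1:6379").await?;
    loop {
        let (stream, _addr) = listener.accept().await?;
        let acceptor = acceptor.clone();
        tokio::spawn(async move {
            // the source's codec would then speak the database protocol over `_tls`
            if let Ok(_tls) = acceptor.accept(stream).await {
                // hand off to the transform chain here
            }
        });
    }
}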

Scatter/Gather consistency issues

Currently scatter/gather (what's used to support multi-region / active-active) will ignore the results of leftover requests when the number of required responses is less than the total queried nodes (e.g. a quorum).

The next request that hits that chain will get the old response that has been sitting in the socket buffer.

Redis doesn't support request IDs (so the driver can receive out-of-order responses), so we need a way to track responses that should be discarded.
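One possible shape, as a minimal sketch: count how many responses were abandoned on each upstream connection and silently consume that many before handing responses back. This leans on Redis returning responses strictly in request order; all names are hypothetical:

// tracks, per upstream connection, how many responses belong to abandoned requests
struct ResponseTracker {
    to_discard: usize,
}

impl ResponseTracker {
    // called when scatter/gather gives up on `count` outstanding requests
    fn abandon(&mut self, count: usize) {
        self.to_discard += count;
    }

    // called for every response read off the socket; stale responses return None
    fn accept(&mut self, response: String) -> Option<String> {
        if self.to_discard > 0 {
            self.to_discard -= 1;
            None // stale response from an abandoned request
        } else {
            Some(response)
        }
    }
}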

Create a WASM transform

Using an embedded WASM runtime:

Create a Transform that will pass a Wrapper struct into a user defined WASM function, and also allow the user to call call_next_transform on it to run subsequent transforms.

Initially a basic implementation of the Transform config will look something like:

- WASMTransform:
    path: myfunction.wasm
    name: Func

Hot Reload

We want to support the ability to perform a hot reload of shotover (similar to the way HAProxy and Envoy proxy do), where we can "restart" the executable such that:

  • Existing TCP connections from clients are not dropped.
  • Existing TCP connections to upstream servers are not dropped.
  • Configuration changes are reflected.
  • Can be used to upgrade between binary minor versions (e.g. no breaking API / implementation changes).

To achieve this, we would like to introduce the ability for shotover to start in "handover" mode, where it will receive existing tcp socket file descriptors from a currently running shotover process and gradually take them over.

This method of socket handover between processes is best shown in the following repo: https://github.com/benbromhead/hot-reload-example. This repo also contains links to prior art and some good blog posts.
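For reference, a minimal sketch of the FD-passing mechanism itself, using SCM_RIGHTS over a unix socket via the nix crate (assuming nix 0.26-era APIs; all names here are illustrative, not what shotover would ship):

use std::io::{IoSlice, IoSliceMut};
use std::os::unix::io::{AsRawFd, RawFd};
use std::os::unix::net::UnixStream;

use nix::cmsg_space;
use nix::sys::socket::{recvmsg, sendmsg, ControlMessage, ControlMessageOwned, MsgFlags, UnixAddr};

// Old process: send a listening socket's fd to the new process.
fn send_listener_fd(channel: &UnixStream, listener_fd: RawFd) -> nix::Result<()> {
    let fds = [listener_fd];
    let cmsg = [ControlMessage::ScmRights(&fds)];
    // a one byte payload is required for the control message to be delivered
    let iov = [IoSlice::new(b"x")];
    sendmsg::<UnixAddr>(channel.as_raw_fd(), &iov, &cmsg, MsgFlags::empty(), None)?;
    Ok(())
}

// New process: receive the fd and resume serving the existing socket.
fn recv_listener_fd(channel: &UnixStream) -> nix::Result<Option<RawFd>> {
    let mut buf = [0u8; 1];
    let mut iov = [IoSliceMut::new(&mut buf)];
    let mut cmsg_buf = cmsg_space!(RawFd);
    let msg = recvmsg::<UnixAddr>(channel.as_raw_fd(), &mut iov, Some(&mut cmsg_buf), MsgFlags::empty())?;
    for cmsg in msg.cmsgs() {
        if let ControlMessageOwned::ScmRights(fds) = cmsg {
            return Ok(fds.first().copied());
        }
    }
    Ok(None)
}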

This will likely require a few changes to shotover, such as:

  • The ability to map transform chains and associated connections between config changes.
  • The need for transforms to pick up a tcp connection that has already been authenticated etc.
  • When an FD is transferred to the new shotover process, no inflight messages are lost (no data loss).

GitHub Actions CI doesn't run on forks

We have some GitHub Actions workflows configured at .github/workflows/rusty.yml and .github/workflows/bench.yaml, but neither of these seems to run.

Maybe due to this?
(screenshot omitted)

Maturing Shotover - part 1

Next Steps

The following section is a list of tasks / changes required to support redis caching, redis authentication, and general development on shotover.

Message/Messages structure

  • This is probably the biggest one (in terms of impact of changes + ease of development with shotover).
  • Currently we pass around a vector of messages. This has some speed benefits (especially with Redis), where we can read a bunch of queued messages, process them really fast and bulk send them upstream.
  • We can also do this with Cassandra messages, as Cassandra supports "in-flight" requests per connection. Currently however we don't do this with the codec.
  • The downside is that it dramatically complicates some transforms and how we process them. If we need to match data from a response to its request on the response path, it becomes tricky.
  • We might need to start thinking about whether we treat transform chains as a set of futures that we pass data to, or we build a set of futures per message we receive (à la tower.rs).
  • Some other areas to explore:
    • Transforms could act on either one message or multiple messages, and the transform chain would figure out how to iterate over that group of messages (see the sketch after this list).
    • With a transform that operates on one message within a group, the question becomes how the transform gets the response for that single message, given responses are not always split per message.
    • Need to really think this one through.
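To make the two shapes concrete, a purely illustrative sketch (Message is a stand-in for shotover's message type, and neither trait is the actual API):

// stand-in for shotover's message type
struct Message;

// shape 1: a transform operates on the whole batch (roughly today's model)
trait BatchTransform {
    fn transform(&mut self, messages: Vec<Message>) -> Vec<Message>;
}

// shape 2: a transform operates per message and the chain does the iteration
// (closer to the tower.rs style mentioned above)
trait MessageTransform {
    fn transform(&mut self, message: Message) -> Message;
}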

Shared config maps:

  • Transforms like cassandra_destination, redis cluster and redis cache all need some configuration or understanding of the underlying schema (for cassandra). Currently the only option we have is to define the schema multiple times in the configuration file, which is not amazing.
  • To address this we either need a shared model, where each transform gets passed a reference, or we make the config templateable (yuck?).

Colored connections

  • Currently there is no real mechanism or place to store information about the state of the connection that a transform is attached to. This could be things like who the authenticated user is, whether the connection has been established, etc.
  • We could probably provide a simple mechanism, available to transforms, for reading and writing state about the current connection they are attached to.
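A minimal sketch of what such per-connection state might look like (all fields hypothetical):

use std::collections::HashMap;

#[derive(Default)]
struct ConnectionState {
    authenticated_user: Option<String>,
    handshake_complete: bool,
    // free-form per-connection values for transforms to share
    attributes: HashMap<String, String>,
}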

Connection setup

  • There is a pre-setup stage for transforms that gets called, which could be put to better use.

Stop abusing Clone trait

  • Currently we overuse clone on a transform chain. The main case is when we get a new connection: we clone an existing chain that has never been used. This is problematic because it limits us to synchronous function calls when setting up a new chain for a new connection. We should probably move this to a dedicated new function that is also async.
  • The other option would be to store the config struct in the source/listener, which would then create a new transform chain from the config (already an async function, and sketched below).
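A rough sketch of that second option, with placeholder types standing in for the real config and chain structs:

struct ChainConfig {
    transforms: Vec<String>,
}

struct TransformChain {
    transforms: Vec<String>,
}

impl ChainConfig {
    // async so that building a chain for a new connection can e.g. open
    // upstream sockets or authenticate, instead of cloning a never-used chain
    async fn build_chain(&self) -> TransformChain {
        TransformChain { transforms: self.transforms.clone() }
    }
}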

Module structure refactor

  • The current set of modules is a bit of a mess and we should probably move to a Cargo workspace with multiple crates to simplify things. This will also dramatically speed up compilation. Currently we have to compile like 400 dependencies which is a bit insane.
  • Another approach would be to start making use of features to enable/disable certain parts of shotover to reduce build / test / iteration cycles.

Error handling audit

There are some areas and some transforms that just straight up swallow or log an error but don't do anything sensible with it.

Cassandra source closes socket

While writing a set of integration tests, cassandra-rs failed to connect to shotover configured with a Cassandra source, while cqlsh connected successfully.

After a quick look at wireshark, it appears that the cassandra-rs driver (a wrapper around the c/c++ datastax one) tries a few unsupported protocol versions, which fail or get downgraded. Cassandra itself won't shut down the tcp connection, whereas shotover will, causing the driver to error out.

Given Cassandra supports this behaviour (as broken as it seems), shotover needs to support it too. Protocol version errors shouldn't cause us to drop the connection.

Tests should panic instead of returning Result

Tests should panic instead of returning Result.
If we panic we get a traceback that includes the exact line number that the error occurred at.
Whereas that information is lost if we return the Result all the way up to the test function.
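For example, both of these tests fail on the same bad input, but only the panicking one points at the failing line:

// returning Result: the failure is reported at the test boundary with no line number
#[test]
fn test_returns_result() -> Result<(), std::num::ParseIntError> {
    let value: i32 = "not a number".parse()?;
    assert_eq!(value, 42);
    Ok(())
}

// panicking: the backtrace points at the exact unwrap that failed
#[test]
fn test_panics() {
    let value: i32 = "not a number".parse().unwrap();
    assert_eq!(value, 42);
}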

I'm interested to hear why it was done this way; maybe there are some positives to it I can't think of?

active/active redis clusters with shotover-proxy 0.0.4

With the topology file below (2 Redis clusters), using redis-cli connected to shotover, a simple get key operation results in the following error:

Aug 27 02:08:57.754 DEBUG shotover_proxy::transforms::distributed::tuneable_consistency_scatter:
None
Aug 27 02:09:35.141 DEBUG shotover_proxy::transforms::redis_transforms::timestamp_tagging:
Generated eval script for timestamp: return {redis.call('get','a'),redis.call('OBJECT', 'IDLETIME', KEYS[1])}
Aug 27 02:09:35.141 DEBUG shotover_proxy::transforms::redis_transforms::timestamp_tagging: tagging transform got Err(Redis Cluster transform did not have enough information to build a request)
Aug 27 02:09:35.141 DEBUG shotover_proxy::transforms::redis_transforms::timestamp_tagging: response after trying to unwrap -> Err(Redis Cluster transform did not have enough information to build a request)
Aug 27 02:09:35.142 DEBUG shotover_proxy::transforms::distributed::tuneable_consistency_scatter: Got 0, needed 1


sources:
  redis_prod:
    Redis:
      batch_size_hint: 10
      listen_addr: "127.0.0.1:6379"
chain_config:
  redis_chain:
  - ConsistentScatter:
      write_consistency: 1
      read_consistency: 1
      route_map:
        one:
        - RedisTimestampTagger
        - RedisCluster:
            first_contact_points: ["redis://3.211.142.207"]
        two:
        - RedisTimestampTagger
        - RedisCluster:
            first_contact_points: ["redis://34.231.231.80"]
named_topics:
- testtopic
source_to_chain_mapping:
  redis_prod: redis_chain

Examples cleanup

Continuing discussion from #109 (comment)

Kuangda made a good point:

Question is, should we expect the project root to be a somewhat usable working directory for running the application?
In the (managed service) Docker container, those files are located in /opt/shotover-proxy/config/.
On second thought, it might be preferable to leave the files and add some docs instead.

If we do keep the config folder for that reason, then we will need to add a test + docker config for it so that CI can ensure the config/ folder is always valid.

Given that complication, maybe it's better to just document that the user should copy an example topology.yaml to config/topology.yaml, choosing the example most relevant to them; maybe recommend redis-passthrough if they don't know what they want.
The advantages here are:

  • it directs the user to the other examples for future reference
  • gives the user a complete working example with a docker config.
    • without a docker config the user still doesn't have a way to easily run the provided config.
  • The user probably wants to use shotover with a specific technology (redis or cassandra); if the default example uses a different technology that can be off-putting.
  • don't need an extra integration test, with paths organized differently, just for the default config/ folder

So I am proposing the following changes:

  • Rename examples/*/config.yaml to examples/*/topology.yaml
  • Delete the config/topology.yaml file.
  • Document how to set up for cargo run in the root directory.

examples + integration tests + cargo run will all continue to use the config file at config/config.yaml

Protocol data replay tests

Support the ability to test codecs / protocol implementations by replaying a pcap file and comparing to the expected serialized version.

Properly fix race conditions in tests

The current implementation of integration tests relies on a thread::sleep(4s) to avoid race conditions on certain configurations but I think we can do better than this.

redis cluster container startup

Removing this sleep allows tests that don't use redis clustering, such as test_pass_through, to pass.
However test cases that use clustering blow up due to a race condition; the issue is as follows:

  1. When redis cluster instances have just started up they can receive queries over tcp but have not yet discovered the other instances in the cluster.
  2. Querying CLUSTER SLOTS will return empty results.
  3. After a few seconds the instances have discovered each other and CLUSTER SLOTS will return a list of all the instances in the cluster.

If shotover is started while we are still at step 2, it will fail to start up, as it needs the list of instances in order to start.

I am thinking we should handle a retry for this in the shotover application instead of the tests, as this seems like a likely scenario in production.
Shotover does update its internal slot map when it hits issues, so accidentally querying too early and getting an incomplete list of cluster instances should not cause issues.
@benbromhead thoughts?
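A minimal sketch of the retry, with fetch_cluster_slots standing in for shotover's actual CLUSTER SLOTS query (both names and the retry limits are hypothetical):

use std::time::Duration;

struct SlotRange; // placeholder for the real slot map entry type

async fn fetch_cluster_slots() -> anyhow::Result<Vec<SlotRange>> {
    unimplemented!("stand-in for the real CLUSTER SLOTS query")
}

async fn cluster_slots_with_retry() -> anyhow::Result<Vec<SlotRange>> {
    let mut attempts = 0;
    loop {
        match fetch_cluster_slots().await {
            // an empty result means the instances haven't discovered each other yet
            Ok(slots) if !slots.is_empty() => return Ok(slots),
            Ok(_) | Err(_) if attempts < 10 => {
                attempts += 1;
                tokio::time::sleep(Duration::from_millis(500)).await;
            }
            Ok(_) => anyhow::bail!("cluster reported no slots after retries"),
            Err(e) => return Err(e),
        }
    }
}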

generic container startup check

I have written generic logic to block in DockerCompose::new until all ports exposed by the docker-compose file are accepting connections.
But on my machine this retry logic never actually has to retry.
So I suspect the docker-compose up command actually blocks until the containers are completely up.
If that's the case I'll abandon this idea for now, as it would be unnecessary.
If we do find that we are hitting race conditions where the containerized application isn't accepting connections yet, then we can investigate adding it at that point.

use compose_yml::v2::{File, Ports, Protocol};
use std::net::TcpStream;
use std::{thread, time};
// run_command, info! and DockerCompose::clean_up come from elsewhere in the test crate.

impl DockerCompose {
    pub fn new(file_path: &str) -> Self {
        DockerCompose::clean_up(file_path).unwrap();

        info!("bringing up docker compose {}", file_path);

        run_command("docker-compose", &["-f", file_path, "up", "-d"]).unwrap();

        let compose = File::read_from_path(file_path).unwrap();
        while !DockerCompose::is_compose_ready(&compose) {
            thread::sleep(time::Duration::from_millis(100));
        }

        DockerCompose {
            file_path: file_path.to_string(),
        }
    }

    fn is_compose_ready(compose: &File) -> bool {
        compose.services.values().all(|service| {
            service.ports.iter().all(|port| {
                let port = port
                    .value()
                    .expect("dont use env var interpolation in examples");
                if let Protocol::Tcp = port.protocol {
                    match port.host_ports {
                        Some(Ports::Port(port)) => {
                            println!("{}", port);
                            TcpStream::connect(("localhost", port)).is_ok()
                        }
                        _ => {
                            unimplemented!("Figure out how to handle this when we actually need it")
                        }
                    }
                } else {
                    true
                }
            })
        })
    }
}

Full implementation: rukai@a808c86

Update dependencies

Update dependencies to their latest versions.
e.g. I can see that the metrics dep is 0.12 but the latest is 0.17

Once the repo goes public we can track out-of-date dependencies on https://deps.rs/ but for now I think we just need to manually check the versions ourselves.

Remove redis pub/sub code

Currently there are vestiges of support for redis pub/sub. This needs a rethink about how the proxy will support pub/sub traffic flows, as these largely don't follow the request/response model the proxy expects.

Tests should unwrap Result::err returned by Runner::run_spawn

I thought I had resolved tests not reporting shotover Err in #126, but apparently not.

There are two problems we want to resolve here:

  1. the shotover shutdown procedure is not tested.
  2. when run_chains returns Err the error is not displayed in tests.

We need to do one of the following:

  • force shotover to end (there is a mechanism for that right?) and then we can wait on the join handle and then unwrap it
    • if this works, then it sounds like the preferred approach to me
  • when the type returned by run_shotover_with_topology is destroyed then poll the join handle (sketched below).
    • if finished then unwrap the result
    • otherwise just ignore it
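A rough sketch of that second option (the handle type is hypothetical, and JoinHandle::is_finished needs a recent Rust):

use std::thread::JoinHandle;

// hypothetical shape of the handle returned by run_shotover_with_topology
struct ShotoverHandle {
    join_handle: Option<JoinHandle<Result<(), String>>>,
}

impl Drop for ShotoverHandle {
    fn drop(&mut self) {
        if let Some(handle) = self.join_handle.take() {
            // only unwrap if shotover has already exited, otherwise ignore it
            if handle.is_finished() {
                handle.join().unwrap().expect("shotover returned an error");
            }
        }
    }
}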

Single cluster config error with Jedis pipelining

Added pipelines to my Jedis client; it worked fine directly, but via shotover I get this error:

redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out


sources:
  redis_prod:
    Redis:
      batch_size_hint: 10
      listen_addr: "127.0.0.1:6379"
chain_config:
  redis_chain:
  - RedisCluster:
      first_contact_points: ["redis://3.211.142.207"]
named_topics:
- testtopic
source_to_chain_mapping:
  redis_prod: redis_chain

Remove git dependencies

Currently the following dependencies are sourced from git forks made by ben:

We should upstream or remove reliance on the changes in these repos.

Benchmarking performance appears to degrade the longer benchmarks run

I first noticed this when testing something else on a mirroring-enabled cluster.

The basic summary is that as we turn up the number of operations (-n), performance degrades pretty significantly, as seen below.

~ $ sudo docker exec -ti redis redis-benchmark -p 6378 -n 100000 -t set -r 100000000 -P 8 -c 50 --threads 10 -q
SET: 62427.95 requests per second

~ $ sudo docker exec -ti redis redis-benchmark -p 6378 -n 110000 -t set -r 100000000 -P 8 -c 50 --threads 10 -q
SET: 46743.10 requests per second

~ $ sudo docker exec -ti redis redis-benchmark -p 6378 -n 150000 -t set -r 100000000 -P 8 -c 50 --threads 10 -q
SET: 27566.50 requests per second

Environment:
Running locally on 1 node of a 3 node t3.medium cluster.

Topology.yaml

sources:
  redis_prod:
    Redis: {batch_size_hint: 4, listen_addr: '0.0.0.0:6378', connection_limit: 20000,
      hard_connection_limit: true}
chain_config:
  redis_chain:
  - MPSCTee:
      behavior: IGNORE
      buffer_size: 10000
      chain:
      - QueryTypeFilter: {filter: Read}
      - Coalesce:
          max_behavior: {COUNT: 2000}
      - MPSCForwarder:
          buffer_size: 100
          async_mode: true
          timeout_micros: 10000
          chain:
          - QueryCounter: {name: DR chain}
          - PoolConnections:
              name: RedisCluster-DR-subchain
              parallelism: 256
              chain:
              - RedisCluster:
                  first_contact_points: ['34.211.224.239:6379']
  - QueryCounter: {name: Main chain}
  - PoolConnections:
      name: RedisCluster-Main-subchain
      parallelism: 512
      chain:
      - RedisCluster:
          first_contact_points: ['34.204.221.1:6379']
named_topics: {example: 10}
source_to_chain_mapping: {redis_prod: redis_chain}

Possibly odd behaviour/reduced throughput with older version of redis-bench/client protocol

While testing shotover I noticed very low throughput with redis-bench compared to direct use. I realised I was using an older version of redis-bench (2019) which "worked" with shotover, so reduced throughput was the only observable difference. Upgrading to the current redis-bench fixed the problem. Older Redis clients may therefore exhibit similar problems with no errors?

Testing shotover behaviour with Redis cluster node failures

Using redis-bench as the test client with shotover and a 6 node Instaclustr Redis cluster (3 masters, 3 replicas), I conducted a test involving killing one of the Redis masters (with the cli shutdown command, which results in the replica taking over as master, and the master becoming the replica after a short delay).

With redis-bench directly connected to the cluster, it stops with an error.

Connected via shotover there's no redis-bench error and it keeps working, but at significantly lower throughput - from 12,000 down to 2,000 sets/s. Killing and restarting redis-bench fixes the throughput.

Shotover spits out the following error (xN times):

Aug 24 02:02:18.200 ERROR shotover_proxy::server: chain processing error - Got connection error with cluster Connection refused (os error 111)

Cassandra row level encryption

Currently shotover supports a basic example form of row level (or cell level) encryption. This issue will track the outstanding tasks required to bring it up to production grade quality.

rustfmt

Kuangda brought up the topic of rustfmt to me and I thought I would move the discussion here.

For now we don't need rustfmt, as we are few developers and I personally prefer manually formatted code over rustfmt code.
However as the project grows I think we will find reviewing formatting time consuming, and a mismatch of styles can occur.
So I think at some point we will want to run rustfmt across the entire repo and enforce it via CI.

So should we rustfmt'ify the code base now while PRs are low?
Or should we hold off on the decision till later?

Refactor TestContext into ShotoverManager

TestContext has a very general-sounding name but is actually only used for creating redis connections to shotover.
Let's move all of its functionality into the ShotoverManager type.

So the API I'm thinking of would look something like this in shotover-proxy/tests/helpers/mod.rs:

impl ShotoverManager {
    pub fn redis_connection() -> redis::Connection {
        unimplemented!("take implementation from TestContext::new_internal")
    }
    
    // In the future we will also add methods like cassandra_connection etc
}

We can also delete shotover-proxy/tests/redis_int_tests/support.rs, as encode_value in support.rs is unused.

The test_* helper methods in basic_driver_tests.rs will be refactored to take a redis::Connection as an argument (or a ShotoverRunner if they really need to make a connection per test case for some reason).

Currently shotover-proxy/tests/helpers/mod.rs is duplicated at shotover-proxy/benches/helpers.rs to avoid circular dependencies between the shotover-proxy and test-helpers crates.
Let's try and fix that in this PR.
I think the best solution is to set a custom module path in redis_benches.rs like this:

#[path="../tests/helpers/mod.rs"]
mod helpers;

A little hacky but seems better than the alternatives.

Thoughts on user provided transforms

There has been a lot of work/discussion towards scriptable or plugin-driven transforms.
But this sounds like it could be the wrong approach.
The transforms have the same requirements as shotover itself; they need to be:

  • extremely fast
  • correct
  • cannot crash or otherwise drop data

The perfect candidate for this is rust; we obviously believe that, since shotover itself is written in rust.

Even so, we could still use rust compiled to wasm, or rust dylib plugins.
That way developers can use rust, or if they really really want, they can use another language.

But doing so has several disadvantages:

  • poor versioning - by using a dylib or wasm we step outside of cargo's robust semantic versioning and dependency management.
  • adds complexity to the implementation resulting in a large maintenance burden and surface area for bugs
  • when writing plugins in another language:
    • mismatch between the provided API and the way code is written in the plugin language - can be reduced by bindings that provide abstractions suitable for the plugin language - but again, requires effort to implement + maintain
  • when writing plugins in rust:
    • for dylib we lose our type safety because we have to expose an FFI API
    • for wasm we have to serialize and deserialize our types into bincode as we can only communicate over an array of bytes between the host and wasm

So how can we give shotover the flexibility it needs without plugins/wasm?

We can ship shotover as a library crate exposing a builder-pattern API.
Then the user of shotover implements their own binary crate using this shotover library.

e.g. they could run vanilla shotover with just the following:

fn main() {
    Shotover::new().run();
}

This would be a very batteries-included approach. run would be responsible not just for running shotover; it would also start the async runtime, read the config from disk, start the logger, etc.

We could still provide a binary release of shotover by compiling this simple example. This binary release could do everything except implement custom transforms.

Then to extend shotover with their own transforms, developers implement a Transform trait and register it like this:

struct MyTransform { }

impl Transform for MyTransform {
    ...
}

fn main() {
    Shotover::new()
        .register_transform(MyTransform::new())
        .run();
}

As a bonus we could allow bundling config into the binary like this:

fn main() {
    Shotover::new()
        // config guaranteed to exist at compile time
        .set_config(include_str!("config.yaml"))
        .run();
}

or we could even go so far as to have the config constructed in our rust builder pattern instead of yaml:

fn main() {
    Shotover::new()
        // config structure guaranteed to be correct at compile time
        .add_source(Source { name: "foo", batch_size_hint: 100, listen_addr: "127.0.0.1:6379" })
        .add_chain(Config {
            name: "bar",
            sources: vec!("foo"),
            transforms: vec!(
                MyTransform::new(),
                StandardTransform::new(TransformConfig { contact_points: vec!("127.0.0.1:2920", "127.0.0.1:2921") })
            )
        })
        .run();
}

This particular idea is a little out there. I think just having a yaml file is more readable.

An example of this sort of "rust API instead of a binary" approach can be seen in a personal project I built, which provides a rust API for compiling gameboy asm: https://github.com/rukai/ggbasm
And of course rust webservers all use a similar approach. e.g. https://github.com/SergioBenitez/Rocket

Cassandra integration testing

Like the redis integration tests, we need to implement a set of Cassandra tests using a chosen rust Cassandra driver.

Refactor message clock

Currently only redis_codec_destination uses it, to fast-forward messages it misses.

This is really only an issue when used in conjunction with the consistent scatter transform (right now the only thing that ignores upstream results and won't even poll a future for them).

We could probably scrap the whole message clock if we moved responsibility for draining ignored futures back into consistent scatter; this would likely be the best course of action.
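A minimal sketch of that approach: collect the required responses, then hand the leftovers to a background task that drains them so their responses never linger in socket buffers (generic stand-ins, not shotover's actual types):

use futures::stream::{FuturesUnordered, StreamExt};

async fn gather_with_drain<F, T>(requests: Vec<F>, required: usize) -> Vec<T>
where
    F: std::future::Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    let mut in_flight: FuturesUnordered<F> = requests.into_iter().collect();
    let mut results = Vec::with_capacity(required);
    while let Some(result) = in_flight.next().await {
        results.push(result);
        if results.len() == required {
            break;
        }
    }
    // drain the ignored futures in the background so their responses are consumed
    tokio::spawn(async move { while in_flight.next().await.is_some() {} });
    results
}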
