

burry


This is burry, the BackUp & RecoveRY tool for cloud native infrastructure services. Use burry to back up and restore critical infrastructure base services such as ZooKeeper and etcd. More…

burry overview

burry currently supports the following infra services and storage targets:

                 ZooKeeper  etcd   Consul
  Amazon S3      B/R        B/R    B/R
  Azure Storage  []/[]      []/[]  []/[]
  Google Storage B/R        B/R    B/R
  Local          B/R        B/R    B/R
  Minio*         B/R        B/R    B/R
  TTY**          B/-        B/-    B/-

 B  ... backups supported
 R  ... restores supported
 -  ... not applicable
 [] ... not yet implemented
 *) Minio can be either on-premises or in the cloud, but always self-hosted. See also https://www.minio.io
**) TTY effectively means the backup is not stored at all but dumped to the screen; useful for debugging, though.

Note:

  • burry is WIP, please use with care
  • if you want to learn more about the design (goals/assumptions), check out the background notes
  • if you want to hack (rather than simply use) burry, check out the development and testing notes


Install

Currently, the only install option is building from source (note: replace GOOS=linux with your platform):

$ go get github.com/mhausenblas/burry.sh
$ GOOS=linux go build
$ mv burry.sh burry
$ godoc -http=":6060" &
$ open http://localhost:6060/pkg/github.com/mhausenblas/burry.sh/

See also GoDocs.

Use

The general usage is:

$ burry --help
Usage: burry [args]

Arguments:
  -b, --burryfest
        Create a burry manifest file .burryfest in the current directory.
        The manifest file captures the current command line parameters for re-use in subsequent operations.
  -c, --credentials string
        The credentials to use in format STORAGE_TARGET_ENDPOINT,KEY1=VAL1,...KEYn=VALn.
        Example: s3.amazonaws.com,ACCESS_KEY_ID=...,SECRET_ACCESS_KEY=...,BUCKET=...,PREFIX=...,SSL=...
  -e, --endpoint string
        The infra service HTTP API endpoint to use.
        Example: localhost:8181 for Exhibitor
  -f, --forget
        Forget existing data.
  -i, --isvc string
        The type of infra service to back up or restore.
        Supported values are [etcd zk consul] (default "zk")
  -o, --operation string
        The operation to carry out.
        Supported values are [backup restore] (default "backup")
  -s, --snapshot string
        The ID of the snapshot.
        Example: 1483193387
  -t, --target string
        The storage target to use.
        Supported values are [local minio s3 tty] (default "tty")
      --timeout=1: The infra service timeout, by default 1 second
  -v, --version
        Display version information and exit.
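
The --credentials format above can be parsed mechanically. As an illustrative sketch (not burry's actual implementation), here is how a spec of the form STORAGE_TARGET_ENDPOINT,KEY1=VAL1,...,KEYn=VALn breaks down into an endpoint plus key/value parameters:

```python
# Illustrative sketch, not burry's code: split a --credentials spec into
# its storage target endpoint and key/value parameters.
def parse_credentials(spec):
    endpoint, *pairs = spec.split(",")
    # split on the first "=" only, so values may themselves contain "="
    params = dict(pair.split("=", 1) for pair in pairs)
    return endpoint, params

endpoint, params = parse_credentials(
    "s3.amazonaws.com,ACCESS_KEY_ID=abc,SECRET_ACCESS_KEY=xyz,BUCKET=backups")
print(endpoint)          # s3.amazonaws.com
print(params["BUCKET"])  # backups
```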

Note: to re-use your command line parameters, pass the --burryfest (or -b) argument: this creates a manifest file .burryfest in the current directory, capturing all your settings. If a .burryfest manifest exists in the current directory, subsequent invocations use it, so you can simply execute burry without any parameters. To reset the stored parameters, remove the .burryfest file from the current directory.

An example burry manifest file looks like this:

{
    "svc": "etcd",
    "svc-endpoint": "etcd.mesos:1026",
    "target": "local",
    "credentials": {
        "target-endpoint": "",
        "params": []
    }
}
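
A hypothetical helper (mine, not part of burry) that mimics the manifest re-use behavior could look like this; the field names follow the example manifest above:

```python
import json
import os
import tempfile

# Hypothetical helper, not part of burry: load the .burryfest manifest
# from a directory if one exists, mirroring the re-use behavior
# described in the note above.
def load_burryfest(directory):
    path = os.path.join(directory, ".burryfest")
    if not os.path.exists(path):
        return None  # no manifest: fall back to command line parameters
    with open(path) as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as d:
    manifest = {"svc": "etcd", "svc-endpoint": "etcd.mesos:1026",
                "target": "local",
                "credentials": {"target-endpoint": "", "params": []}}
    with open(os.path.join(d, ".burryfest"), "w") as f:
        json.dump(manifest, f)
    print(load_burryfest(d)["svc-endpoint"])  # etcd.mesos:1026
```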

Note that for every storage target other than tty, a metadata file .burrymeta is created inside the (timestamped) archive, something like:

{
  "snapshot-date": "2016-12-31T14:52:42Z",
  "svc": "zk",
  "svc-endpoint": "leader.mesos:2181",
  "target": "s3",
  "target-endpoint": "s3.amazonaws.com"
}

Backups

In general, since --operation backup is the default, the only required parameter for a backup operation is --endpoint. That is, you only have to provide the HTTP API endpoint of the ZooKeeper or etcd instance you want to back up:

$ burry --endpoint IP:PORT (--isvc etcd|zk) (--target tty|local|s3) (--credentials STORAGE_TARGET_ENDPOINT,KEY1=VAL1,...,KEYn=VALn)

Some concrete examples follow.

Screen dump of local ZooKeeper content

To dump the content of a locally running ZK onto the screen, do the following:

# launching ZK:
$ docker ps
CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS              PORTS                                                                                            NAMES
9ae41a9a02f8        mbabineau/zookeeper-exhibitor:latest   "bash -ex /opt/exhibi"   2 days ago          Up 2 days           0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp, 0.0.0.0:8181->8181/tcp   amazing_kilby

# dump to screen:
$ DEBUG=true ./burry --endpoint localhost:2181
INFO[0000] Using burryfest /home/core/.burryfest  func=init
INFO[0000] My config: {InfraService:zk Endpoint:localhost:2181 StorageTarget:tty Creds:{StorageTargetEndpoint: Params:[]}}  func=init
INFO[0000] /zookeeper/quota:                             func=reapsimple
INFO[0000] Operation successfully completed.             func=main

See the development and testing notes for the test setup.

Back up etcd to local storage

To back up the content of an etcd running in a (DC/OS) cluster to local storage, do:

# create the backup:
$ ./burry --endpoint etcd.mesos:1026 --isvc etcd --target local
INFO[0000] My config: {InfraService:etcd Endpoint:etcd.mesos:1026 StorageTarget:local Creds:{StorageTargetEndpoint: Params:[]}}  func=init
INFO[0000] Operation successfully completed. The snapshot ID is: 1483194168  func=main

# check for the archive:
$ ls -al 1483194168.zip
-rw-r--r--@ 1 mhausenblas  staff  750 31 Dec 14:22 1483194168.zip

# explore the archive:
$ unzip 1483194168.zip && cat 1483194168/.burrymeta | jq .
{
  "snapshot-date": "2016-12-31T14:22:48Z",
  "svc": "etcd",
  "svc-endpoint": "etcd.mesos:1026",
  "target": "local",
  "target-endpoint": "/tmp"
}
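
Judging by the examples, the snapshot ID doubles as the archive name and looks like a Unix timestamp (an observation from the examples above, not documented behavior); if so, it can be converted back to a readable backup date:

```python
from datetime import datetime, timezone

# Assumption based on the examples above: the snapshot ID is a Unix
# timestamp, so it maps back to the snapshot-date in .burrymeta.
snapshot_id = 1483194168
taken_at = datetime.fromtimestamp(snapshot_id, tz=timezone.utc)
print(taken_at.isoformat())  # 2016-12-31T14:22:48+00:00
```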

See the development and testing notes for the test setup.

Back up DC/OS system ZooKeeper to Amazon S3

To back up the content of the DC/OS system ZooKeeper (supervised by Exhibitor), do the following:

# let's first do a dry run:
$ ./burry --endpoint leader.mesos:2181
INFO[0000] My config: {InfraService:zk Endpoint:leader.mesos:2181 StorageTarget:tty Creds:{StorageTargetEndpoint: Params:[]}}  func=init
INFO[0006] Operation successfully completed.             func=main

# back up into Amazon S3:
$ ./burry --endpoint leader.mesos:2181 --target s3 --credentials s3.amazonaws.com,ACCESS_KEY_ID=***,SECRET_ACCESS_KEY=***
INFO[0000] My config: {InfraService:zk Endpoint:leader.mesos:2181 StorageTarget:s3 Creds:{InfraServiceEndpoint:s3.amazonaws.com Params:[{Key:ACCESS_KEY_ID Value:***} {Key:SECRET_ACCESS_KEY Value:***}]}}}  func=init
INFO[0008] Successfully stored zk-backup-1483166506/latest.zip (45464 Bytes) in S3 compatible remote storage s3.amazonaws.com  func=remoteS3
INFO[0008] Operation successfully completed. The snapshot ID is: 1483166506  func=main

See the development and testing notes for the test setup. Note: in order to back up to Google Storage rather than Amazon S3, use --credentials storage.googleapis.com,ACCESS_KEY_ID=***,SECRET_ACCESS_KEY=*** in the above command. Make sure that Google Storage is set up for your default project and that Interoperability is enabled; see also the settings at console.cloud.google.com/storage/settings.

Back up etcd to Minio

To back up the content of an etcd running in a (DC/OS) cluster to Minio, do:

$ ./burry --endpoint etcd.mesos:1026 --isvc etcd --credentials play.minio.io:9000,ACCESS_KEY_ID=Q3AM3UQ867SPQQA43P2F,SECRET_ACCESS_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG --target s3
INFO[0000] My config: {InfraService:etcd Endpoint:etcd.mesos:1026 StorageTarget:s3 Credentials:}  func=init
INFO[0001] Successfully stored etcd-backup-1483173687/latest.zip (674 Bytes) in S3 compatible remote storage play.minio.io:9000  func=remoteS3
INFO[0001] Operation successfully completed. The snapshot ID is: 1483173687  func=main

See the development and testing notes for the test setup. Note: the credentials used above are from the public Minio playground.

Restores

For restores, you MUST set --operation restore (or: -o restore) as well as provide a snapshot ID with --snapshot/-s. Note also that you CANNOT restore from the screen, that is, --target/-t tty is an invalid choice:

$ burry --operation restore --target local|s3 --snapshot sID (--isvc etcd|zk) (--credentials STORAGE_TARGET_ENDPOINT,KEY1=VAL1,...,KEYn=VALn)
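
These restore rules can be summed up in a small validation sketch (illustrative only, not burry's actual code):

```python
# Illustrative only: encode the restore rules stated above — a snapshot
# ID is required, and tty is not a valid restore target.
def validate_restore(target, snapshot):
    if snapshot is None:
        raise ValueError("restore requires a snapshot ID (--snapshot/-s)")
    if target == "tty":
        raise ValueError("cannot restore from tty")
    if target not in ("local", "s3", "minio"):
        raise ValueError("unsupported storage target: " + target)
    return True

print(validate_restore("local", "1483383204"))  # True
```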

Restore etcd from local storage

In the following, we first create a local backup of an etcd cluster, then simulate a failure by deleting a key, and finally restore it:

# let's first back up etcd:
$ ./burry -e etcd.mesos:1026 -i etcd -t local -b
INFO[0000] Selected operation: BACKUP                    func=init
INFO[0000] My config: {InfraService:etcd Endpoint:10.0.1.139:1026 StorageTarget:local Creds:{StorageTargetEndpoint: Params:[]}}  func=init
INFO[0000] Operation successfully completed. The snapshot ID is: 1483383204  func=main

# now, let's destroy a key:
$ curl etcd.mesos:1026/v2/keys/foo -XDELETE
{"action":"delete","node":{"key":"/foo","modifiedIndex":16,"createdIndex":15},"prevNode":{"key":"/foo","value":"bar","modifiedIndex":15,"createdIndex":15}}

# restore it from the local backup:
$ ./burry -o restore -e etcd.mesos:1026 -i etcd -t local -s 1483383204
INFO[0000] Using burryfest /tmp/.burryfest  func=init
INFO[0000] Selected operation: RESTORE                   func=init
INFO[0000] My config: {InfraService:etcd Endpoint:10.0.1.139:1026 StorageTarget:local Creds:{StorageTargetEndpoint: Params:[]}}  func=init
INFO[0000] Restored /foo                                 func=visitETCDReverse
INFO[0000] Operation successfully completed. Restored 1 items from snapshot 1483383204  func=main

# ... and we're back to normal:
$ curl 10.0.1.139:1026/v2/keys/foo
{"action":"get","node":{"key":"/foo","value":"bar","modifiedIndex":17,"createdIndex":17}}

See the development and testing notes for the test setup.

Restore Consul from Minio

In the following, we first back up a Consul K/V store to Minio, then simulate a failure by deleting a key, and finally restore it:

# let's first back up the Consul K/V store to Minio:
$ ./burry -e jump:8500 -i consul -t s3 -c play.minio.io:9000,ACCESS_KEY_ID=Q3AM3UQ867SPQQA43P2F,SECRET_ACCESS_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
INFO[0000] Selected operation: BACKUP                    func=main
INFO[0000] My config: {InfraService:consul Endpoint:jump:8500 StorageTarget:s3 Creds:{StorageTargetEndpoint:play.minio.io:9000 Params:[{Key:ACCESS_KEY_ID Value:Q3AM3UQ867SPQQA43P2F} {Key:SECRET_ACCESS_KEY Value:zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG}]}}  func=main
INFO[0000] Operation successfully completed. The snapshot ID is: 1483448835  func=main

# now, let's destroy a key
$ curl jump:8500/v1/kv/foo -XDELETE

# restore it from the Minio backup:
$ ./burry -o restore -e jump:8500 -i consul -t s3 -s 1483448835 -c play.minio.io:9000,ACCESS_KEY_ID=Q3AM3UQ867SPQQA43P2F,SECRET_ACCESS_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
INFO[0000] Selected operation: RESTORE                   func=main
INFO[0000] My config: {InfraService:consul Endpoint:jump:8500 StorageTarget:s3 Creds:{StorageTargetEndpoint:play.minio.io:9000 Params:[{Key:ACCESS_KEY_ID Value:Q3AM3UQ867SPQQA43P2F} {Key:SECRET_ACCESS_KEY Value:zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG}]}}  func=main
INFO[0000] Restored foo                                  func=visitCONSULReverse
INFO[0000] Restored hi                                   func=visitCONSULReverse
INFO[0000] Operation successfully completed. Restored 2 items from snapshot 1483448835  func=main

# ... and we're back to normal:
$ curl jump:8500/v1/kv/foo?raw
bar

See the development and testing notes for the test setup. Note: the credentials used above are from the public Minio playground.

Release history

  • v0.4.0: support for backing up to and restoring from Google Storage
  • v0.3.0: support for backing up and restoring Consul from local storage and S3/Minio
  • v0.2.0: support for restoring ZK and etcd from local storage and S3/Minio
  • v0.1.0: support for backing up ZK and etcd to screen, local storage and S3/Minio

burry.sh's People

Contributors

eye0fra, mhausenblas, mool, nlamirault


burry.sh's Issues

SSL/TLS support?

Hello,

Wanted to give burry a try, but our etcd cluster is HTTPS-only.

When passing the endpoint hostname:2379, burry says:
ERRO[0000] client: etcd cluster is unavailable or misconfigured; error #0: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

If I explicitly put https:// in front, I get:
ERRO[0000] client: etcd cluster is unavailable or misconfigured; error #0: dial tcp: lookup https: no such host
Is burry capable of SSL/TLS?

Thank you!

Unable to restore Zookeeper in Kubernetes while running Kafka

Hey
Great tool. I just have one issue with it.
I am running a Kubernetes cluster with ZooKeeper and Kafka, both as single-instance deployments. Creating a backup of ZooKeeper and pushing it to S3 works flawlessly, but I am unable to restore a fresh ZooKeeper from that backup while Kafka is running. This is what I ran:

burry --endpoint=localhost:2181 --operation=restore --target=s3 --snapshot=1534148038 --credentials=s3.amazonaws.com,ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID,SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY,BUCKET=Example_bucket

DEBU[0005] Visited /brokers/topics                       func=visitZKReverse
DEBU[0005] Attempting to insert /brokers/topics/ExampleTopic-Source as leaf znode  func=visitZKReverse
INFO[0010] Restored /brokers/topics/ExampleTopic-Source  func=visitZKReverse
DEBU[0010] Value: {"version":1,"partitions":{"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"15":[0],"11":[0],"9":[0],"22":[0],"26":[0],"13":[0],"24":[0],"16":[0],"5":[0],"10":[0],"21":[0],"6":[0],"1":[0],"17":[0],"25":[0],"14":[0],"31":[0],"0":[0],"20":[0],"27":[0],"2":[0],"18":[0],"30":[0],"7":[0],"29":[0],"3":[0],"28":[0]}}  func=visitZKReverse
DEBU[0010] Visited /brokers/topics/ExampleTopic-Source  func=visitZKReverse
DEBU[0010] Visited /brokers/topics/ExampleTopic-Source/content  func=visitZKReverse
DEBU[0010] Attempting to insert /brokers/topics/ExampleTopic-Source/partitions as leaf znode  func=visitZKReverse
ERRO[0011] zk: node already exists:/brokers/topics/ExampleTopic-Source/partitions  func=visitZKReverse
ERRO[0011] zk: node already exists                       func=restoreZK
ERRO[0012] Operation completed with error(s).            func=main

From what I can tell, it seems burry first successfully restores /brokers/topics/ExampleTopic-Source, but before it manages to restore /brokers/topics/ExampleTopic-Source/partitions, Kafka has already created that node.

Is this a known limitation or am I doing something wrong? Thanks!

Implement restore capabilities

Currently, data can only be extracted from ZK/etcd and stored in a number of places, but other than looking at it, you can't use burry to restore the state of an infra service.

Backup operation is taking too long

Awesome project! I just have one issue with it... I have an application orchestrated by ZooKeeper, and the backups of ZooKeeper take about 3 hours to complete. Is there a way to make them faster? Is there a parameter to collect only specific znodes from ZooKeeper?

testing of etcd

Great tool. I have a question about the etcd backup scripts. Could you please help?

  1. What are the prerequisites for running an etcd backup?
  2. How did you verify that the backup restored all the key-value pairs stored in etcd?
    For example, if I delete some key-value pairs before taking a backup, how do you ensure those keys and values are restored after the etcd restore?

I tried the etcdctl command for v3 to retrieve all the key-value pairs stored before the backup and verify data integrity afterwards, but I could not find a single command to achieve this.
https://coreos.com/etcd/docs/latest/dev-guide/interacting_v3.html

When restoring zk data, node data cannot be overwritten

I use the following commands to back up and restore zk data. For an existing path, the data cannot be overwritten, even if the -f parameter is added.

burry -e 127.0.0.1:2181 -t local
burry -o restore -e 127.0.0.1:2181 -t local -f -s 1603378589

ZK authentication

Hello,
I'm not sure if this is a missing feature or if I'm doing something wrong, but I'm trying to back up some nodes which have ACLs enabled. I have the necessary credentials but can't figure out how/where to use them:

root@f2d067caa672:/go# burry -e docker.for.mac.localhost:2181 -t local
INFO[0000] Selected operation: BACKUP                    func=main
2019/03/25 19:36:48 Connected to 192.168.65.2:2181
2019/03/25 19:36:49 Authenticated: id=72312109170556965, timeout=4000
2019/03/25 19:36:49 Re-submitting `0` credentials after reconnect
INFO[0000] Rewriting root                                func=store
ERRO[0000] zk: not authenticated                         func=visitZK
ERRO[0000] zk: not authenticated                         func=visitZK
ERRO[0000] zk: not authenticated                         func=visitZK
INFO[0000] Operation successfully completed. The snapshot ID is: 1553542608  func=main
root@f2d067caa672:/go#

Is this possible with burry?
Thanks

Upload backup snapshot as latest/last also

In case of a remote upload, I would like a pointer or link to the latest/last snapshot uploaded.
We can automate this after burry's backup execution by grepping the snapshot ID and copying the S3 object to a latest/last key.
But having burry do it itself would be great for automating both backup and restore.
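
Until such a feature exists, one way to script this (a sketch, assuming burry's log format stays as shown in the README examples) is to extract the snapshot ID from the output and then copy the uploaded object to a latest key with your S3 tooling:

```python
import re

# Assumption: burry prints a completion line like the README examples.
log_line = ("INFO[0008] Operation successfully completed. "
            "The snapshot ID is: 1483166506  func=main")

match = re.search(r"The snapshot ID is: (\d+)", log_line)
snapshot_id = match.group(1) if match else None
print(snapshot_id)  # 1483166506
# then e.g. copy <prefix>-backup-<snapshot_id>/latest.zip to a fixed
# "latest" key using your S3 client of choice
```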

Znode data for kafka topic znode not being copied.

Hi,

Great tool, by the way. I've been experimenting with it a little to replicate Kafka config stored in ZooKeeper (with Exhibitor), basically to allow a Kafka cluster to move onto a new ZooKeeper ensemble without losing information about topic partition assignment.

At the moment I'm running into an issue: the structure of the Kafka data is replicated correctly, but the data in the topic znode is not copied.

My set up is:

  • Exhibitor cluster 1 with 5 nodes
  • Kafka cluster with 6 brokers connection to Exhibitor cluster 1
  • Exhibitor cluster 2 with 5 nodes - this is the destination for the data I want to copy from Exhibitor cluster 1

Procedure:

  • set up the topology as above
  • create a Kafka topic called test with 6 partitions and replication factor 3. This results in topic znode shown in the screen shot below, with the partition to Kafka broker mapping as data on the node.
    (screenshot: topic znode with the partition-to-broker mapping as node data)
  • copy Exhibitor cluster 1 data to local storage with burry -e xx.xx.xx.xx:2181 -t local
  • copy the backup dump to Exhibitor cluster 2 with burry -o restore -e yy.yy.yy.yy:2181 -t local -s <snapID>
  • Exhibitor cluster 2 now has the correct znode structure but the data in the topic node is missing (as shown below).
    (screenshot: topic znode with the node data missing)

Is there anything obviously wrong with my usage here, or something else I'm missing? Thanks! :)

custom snapshot id does not work

Looks like the --snapshot flag has no effect. No matter what I put there, burry keeps using default snapshot IDs.

/tmp # burry -v
This is burry in version 0.4.0
/tmp # burry --endpoint=zk-1.zk:2181 --isvc=zk --target=local --snapshot="123"
INFO[0000] Selected operation: BACKUP                    func=main
INFO[0000] My config: {InfraService:zk Endpoint:zk-1.zk:2181 StorageTarget:local Creds:{StorageTargetEndpoint: Params:[]}}  func=main
2018/03/27 14:12:32 Connected to 172.17.0.5:2181
2018/03/27 14:12:32 Authenticated: id=171807602279776271, timeout=4000
2018/03/27 14:12:32 Re-submitting `0` credentials after reconnect
INFO[0000] Rewriting root                                func=store
INFO[0000] Operation successfully completed. The snapshot ID is: 1522159952  func=main
/tmp # ls -l
total 4
-rw-r--r--    1 root     root          1039 Mar 27 14:12 1522159952.zip

aux.go file is invalid for windows

On Windows systems, aux is a reserved name and cannot be used as a file name (see MSDN).

Windows users cannot clone this repository without going through Bash for Windows.

A quick solution would be to rename the aux.go file to anything else.

S3 Instance Profile

Is it possible to add support for EC2 instance profiles for AWS IAM authentication?

Temporary nodes in zk become persistent nodes after recovery

I use the following commands to back up and restore zk data. After the operation, the data of the temporary (ephemeral) nodes was still there, which caused my application to be unable to reconnect to zk.

burry -e 127.0.0.1:2181 -t local
burry -o restore -e 127.0.0.1:2181 -t local -f -s 1603378589

Target parameter is misleading

The README says that --target can take any of these values: local, s3, minio or tty. So I assume that you must set "minio" in case you want to use it as target.

However, in the example "Back up etcd to Minio" you use --target s3:

$ ./burry --endpoint etcd.mesos:1026 --isvc etcd --credentials play.minio.io:9000,ACCESS_KEY_ID=Q3AM3UQ867SPQQA43P2F,SECRET_ACCESS_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG --target s3

Is that ok? I'm asking because I'm not able to make burry work with Minio using --target minio, and I don't know if this might be the cause. I'm getting a "server gave HTTP response to HTTPS client" error:

time="2018-11-08T22:30:57Z" level=info msg="Selected operation: BACKUP" func=main
2018/11/08 22:30:57 Connected to 100.70.205.228:2181
2018/11/08 22:30:57 Authenticated: id=101037703899381862, timeout=4000
2018/11/08 22:30:57 Re-submitting `0` credentials after reconnect
time="2018-11-08T22:30:57Z" level=info msg="Rewriting root" func=store
time="2018-11-08T22:31:00Z" level=fatal msg="Get https://127.0.0.1:9000/my-backups-bucket/?location=: http: server gave HTTP response to HTTPS client" func=toremoteS3

I'm using burry 0.4.0 and minio RELEASE.2018-11-06T01-01-02Z just in case it helps. Thanks.

Add v3 data support

Hello, I've been using this tool for a while and it's working great. But I recently upgraded to etcd v3, and it seems the v3 data is not being backed up.

It would be nice to have support for this

Thanks
