
go-textile's Introduction

** WARNING **: go-textile has been replaced by go-threads and textile.


go-textile


Textile implementation in Go

This repository contains the core API, daemon, and command-line client, as well as bindings for mobile (iOS/Android) applications.

Textile provides encrypted, recoverable, schema-based, and cross-application data storage built on IPFS and libp2p. We like to think of it as a decentralized data wallet with built-in protocols for sharing and recovery, or more simply, an open and programmable iCloud.

Please see Textile Docs for more.

Join us on our public Slack channel for news, discussions, and status updates. Check out our blog for the latest posts and announcements.


Security

Textile is still under heavy development and no part of it should be used before a thorough review of the underlying code and an understanding that APIs and protocols may change rapidly. There may be coding mistakes and the underlying protocols may contain design flaws. Please let us know immediately if you have discovered a security vulnerability.

Please also read the security note for go-ipfs.

Background

Textile is a set of tools and trustless infrastructure for building censorship-resistant and privacy-preserving applications.

While interoperable with the whole IPFS peer-to-peer network, Textile-flavored peers represent an additional layer or sub-network of users, applications, and services.

With good encryption defaults and anonymous, disposable application services like cafes, Textile aims to bring the decentralized internet to real products that people love.

Continue reading about Textile...

Install

env GO111MODULE=on go get github.com/textileio/go-textile
env GO111MODULE=on go install github.com/textileio/go-textile/cmd/textile

Installation instructions for pre-built binaries are in the docs.

Usage

Go to https://godoc.org/github.com/textileio/go-textile.

The Tour of Textile goes through many examples and use cases. textile --help provides a quick look at the available APIs. For a full overview of every CLI command available, refer to our Command Line Documentation.

Requirements

  • go >= 1.12

Extra setup steps are needed to build the bindings for iOS or Android, as gomobile does not yet support go modules. You'll need to move the go-textile source into your GOPATH (like pre-go1.11 development), before installing and initializing the gomobile tools:

go get golang.org/x/mobile/cmd/gomobile
gomobile init

Now you can execute the iOS and Android build tasks below. For the other build tasks, the source must not be under GOPATH. Go 1.13 is supposed to bring module support to gomobile, at which point we can remove this madness!

Install dependencies:

make setup

Build textile:

make textile

Run unit tests:

make test

Build the iOS framework:

make ios

Build the Android Archive Library (aar):

make android

Build the swagger docs:

make docs

Contributing

This project is a work in progress. As such, there are a few things you can do right now to help out:

  • Ask questions! We'll try to help. Be sure to drop a note (on the above issue) if there is anything you'd like to work on and we'll update the issue to let others know. Also get in touch on Slack.
  • Open issues, file issues, submit pull requests!
  • Perform code reviews. More eyes will help a) speed the project along b) ensure quality and c) reduce possible future bugs.
  • Take a look at the code. Contributions here that would be most helpful are top-level comments about how it should look based on your understanding. Again, the more eyes the better.
  • Add tests. There can never be enough tests.

Before you get started, be sure to read our contributors guide and our contributor covenant code of conduct.

Changelog

Changelog is published to Releases.

License

MIT

go-textile's People

Contributors

andrewxhill, asutula, balupton, carsonfarmer, flyskywhy, hoijui, ilpaijin, jsign, maxnordlund, requilence, sanderpick, schwartz10, tcodes0, u5surf


go-textile's Issues

SIGPIPE on error: pubsub: already have connection to peer

Related to #77 - this error:

19:39:09.689 ERROR ipns-repub: Republisher failed to republish: failed to find any peer in table (core.go:518)

Seems to follow SIGPIPE signals in iOS, which apparently are related to using an already closed socket.

@andrewxhill reports a very similar issue in Android - bunch of closed socket warnings, right?

Mobile GUI ?

Pretty nice example of IPFS usage.

Where is the mobile GUI?

I think you could speed up a lot of the code using WASM and protobufs.

Room consensus

Trying to wrap my head around the best approach to maintaining our distributed photo albums. @andrewxhill @carsonfarmer one of you mentioned CRDTs, there's some good stuff out there: CRDTs. We've also discussed blockchain consensus techniques.

However, after drawing it out, I think our current solution might just work as is. A room can be thought of as a set of cumulative updates (adds / deletes). The problem we have is just how to get each peer every add and every update, given very unstable connectivity.

We currently broadcast updates, which are simply the root hash of the added photo set (photo, thumb, metadata, last), where "last" is a file containing the root hash of that node's previous update. Our little blockchain of updates. #40 is important: each peer needs to continuously re-publish its HEAD update, otherwise newcomers won't be able to partake until a new update happens. When a peer gets an update, it handles it recursively, making sure it has locally (and has indexed) the contents of that update, and of each previous update, by inspecting the last file.

Side note: Right now, every update is implicitly considered an "add", but we could change that via some scheme on the update itself: op:hash, e.g., add:QmY2nposvuCaikdazPNDc4vY4CAxZgioZ9Wf1JuamBmRQQ, del:QmXARYDDjf27tDNGpWYjCW9c8X6i5BrvUkLKKwrRotsuys.
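The proposed op:hash scheme is easy to sketch. Below is a hedged Go illustration (parseUpdate and the Op type are invented names, not actual go-textile code) showing how a bare hash could stay an implicit add while prefixed updates carry an explicit operation:

```go
package main

import (
	"fmt"
	"strings"
)

// Op is the operation encoded in an update, per the proposed op:hash scheme.
type Op string

const (
	OpAdd Op = "add"
	OpDel Op = "del"
)

// parseUpdate splits an update like "del:QmXARY..." into its operation
// and content hash. A bare hash (no prefix) is treated as an implicit
// add, matching the current behavior described above.
func parseUpdate(s string) (Op, string, error) {
	parts := strings.SplitN(s, ":", 2)
	if len(parts) == 1 {
		return OpAdd, parts[0], nil // implicit add
	}
	switch Op(parts[0]) {
	case OpAdd, OpDel:
		return Op(parts[0]), parts[1], nil
	default:
		return "", "", fmt.Errorf("unknown op: %s", parts[0])
	}
}

func main() {
	op, hash, _ := parseUpdate("del:QmXARYDDjf27tDNGpWYjCW9c8X6i5BrvUkLKKwrRotsuys")
	fmt.Println(op, hash)
}
```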

That said, just considering adds for now, this is what happens with 2 peers:

n1
<--0:a
<--a:b
<--b:c

n2
0:a<--
a:b<--
b:c<--

The syntax here is <--last:new for sending and last:new<-- for receiving an update. So far n1 has been making updates and n2 has been receiving them just fine, easy peasy! OK, but now n2 goes for an airplane ride and loses the p2p network. The thing is, both peers are still taking photos:

n1
<--0:a
<--a:b
<--b:c
<--c:d
<--d:e

n2
0:a<--
a:b<--
b:c<--
<--c:d'
<--d':e'

When n2 lands, this happens:

n1
<--0:a
<--a:b
<--b:c
<--c:d
<--d:e
d':e'<--
c:d'<--

n2
0:a<--
a:b<--
b:c<--
<--c:d'
<--d':e'
d:e<--
c:d<--

Each node gets the latest update from the other, then back-propagates to find its peer's older updates. Each will stop at the common ancestor, i.e., n2 will stop when it hits c:d because it already has c, and n1 will stop when it hits c:d', because it too already has c. Since each update is indexed locally by the time it was initially added to the system, we end up with the exact same set of data. The update order doesn't matter (especially in the add-only case; a little more complex with deletes, but I think still in our current wheelhouse... we just have to reprocess deletes until the target content is available). So, different update chains lead to the same data set, just as there are different (infinite) ways to sum to any number. I guess it's obvious in the end... :)

In a normal blockchain, you might throw out the shorter chain when they reconcile, but we don't ever want to throw out photos. Also, I don't think we care about PoW / PoS since, in order to generate a valid update in the room, you need to encrypt / sign the payload with the room's private key... so we can just say: if you're able to decrypt / verify it, it should be considered good. So, in our case, all peers are trusted (is that bad?)

Making sense? What am I missing? Seems too easy!? Maybe this is actually a simple CRDT protocol.
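The back-propagation sketched above fits in a few lines of Go. Everything here (Update, Store, Handle, resolve) is illustrative, not the real go-textile implementation; it just shows how walking each incoming HEAD back to the first already-known update converges both peers on the same set regardless of delivery order:

```go
package main

import "fmt"

// Update models a thread update: its own hash plus the hash of the
// same peer's previous update (the "last" file). Illustrative only.
type Update struct {
	Hash string
	Last string
}

// Store is a peer's local index of updates it has already processed.
type Store map[string]Update

// Handle processes an incoming HEAD update, then walks backwards via
// Last pointers (using resolve to fetch missing updates), stopping as
// soon as it reaches an update it already has: the common ancestor.
func (s Store) Handle(head Update, resolve func(hash string) (Update, bool)) {
	u := head
	for {
		if _, ok := s[u.Hash]; ok {
			return // common ancestor reached
		}
		s[u.Hash] = u
		if u.Last == "" {
			return // genesis update
		}
		prev, ok := resolve(u.Last)
		if !ok {
			return // can't reach further back right now; retry later
		}
		u = prev
	}
}

func main() {
	// n1's chain a..e plus n2's divergent d2, e2 (d' and e' above).
	network := map[string]Update{
		"a": {"a", ""}, "b": {"b", "a"}, "c": {"c", "b"},
		"d": {"d", "c"}, "e": {"e", "d"},
		"d2": {"d2", "c"}, "e2": {"e2", "d2"},
	}
	resolve := func(h string) (Update, bool) { u, ok := network[h]; return u, ok }
	n1 := Store{}
	for _, h := range []string{"a", "b", "c", "d", "e"} {
		n1[h] = network[h]
	}
	n1.Handle(network["e2"], resolve) // n2's HEAD arrives after the flight
	fmt.Println(len(n1))              // 7: both branches, stopped at ancestor c
}
```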

Technical documentation and project overview

As I mentioned on #41, it would be very helpful for potential new contributors if documentation about the protocols used by textile.io and other technical details were available somewhere. Would it be hard to put this together?

Expire hashes after first use

This was untested, so it's left out for now. The hashes overwrite themselves on a new request, so in theory this should work without problems.

Sign + verify block updates

At first these will go out over floodsub, but eventually they will be broadcast via our own DHT. The JWT Ed25519 signing method is already in place.

Fix desktop pairing

Currently, the desktop app won't create the correct thread to use for mobile sync... got demolished during the re-arch.

Build and Release

Components that need to be built and released / deployed:

  • textile CLI (released to GH releases)
  • Mobile.framework and textilego.aar (released to GH releases, CocoaPods, some Android specific package manager?)
  • Textile for Mac (released to GH releases)
  • Central API (container released to Docker Hub, docker-composed stack deployed to Docker Swarm)

Break out photo data from update chain

Related to #84, a file should not have to be re-added / duplicated on IPFS for each thread it's in, meaning that it should be standalone / not part of an update chain.

Refresh token endpoint

Currently we are issuing an access_token and a refresh_token, but there's no way to get a new access_token via the refresh.
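A minimal sketch of what the core logic of such an endpoint could look like, with hypothetical names (tokenPair, refreshStore, Refresh — none of these are the real API) and single-use rotation of the refresh token:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tokenPair mirrors the access_token / refresh_token pair mentioned
// above; all names here are hypothetical.
type tokenPair struct {
	AccessToken  string
	RefreshToken string
	ExpiresAt    time.Time
}

// refreshStore maps currently valid refresh tokens to their subject.
type refreshStore map[string]string

var errInvalidRefresh = errors.New("invalid refresh token")

// Refresh exchanges a valid refresh_token for a new token pair,
// rotating the refresh token so each one is single-use.
func (s refreshStore) Refresh(refreshToken string, mint func(subject string) tokenPair) (tokenPair, error) {
	subject, ok := s[refreshToken]
	if !ok {
		return tokenPair{}, errInvalidRefresh
	}
	delete(s, refreshToken) // the old refresh token is now spent
	pair := mint(subject)
	s[pair.RefreshToken] = subject
	return pair, nil
}

func main() {
	store := refreshStore{"r1": "peer1"}
	pair, err := store.Refresh("r1", func(subject string) tokenPair {
		return tokenPair{AccessToken: "a2", RefreshToken: "r2", ExpiresAt: time.Now().Add(time.Hour)}
	})
	fmt.Println(pair.AccessToken, err) // a2 <nil>
}
```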

Republishing question

I think the default time before pointers expire from the IPFS DHT is about 3 days, unless the node republishes. How are you handling that? For us at OpenBazaar, we found it was too short and created a fork of the network to allow longer periods of time between republishes.

EXC_BAD_ACCESS when violently toggling node start/stop

(screenshot: EXC_BAD_ACCESS crash, 2018-05-17 18:02:39)

I think if you follow the thread's steps back, it'll end up in one of the ticker-based goroutines... basically trying to read off the IPFS node context's Done channel when the node has been set to nil. We could maintain another flag on the TextileNode for determining whether the IPFS node is stopped, isStopped or something, and then not have to set IpfsNode to nil.
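A minimal sketch of the isStopped idea, assuming a mutex-guarded flag (all names here are hypothetical, not the actual TextileNode fields):

```go
package main

import (
	"fmt"
	"sync"
)

// node sketches the isStopped flag suggested above: rather than setting
// the embedded IPFS node to nil (and racing readers), record stopped
// state behind a mutex and have goroutines check it.
type node struct {
	mu      sync.Mutex
	stopped bool
	done    chan struct{}
}

func newNode() *node {
	return &node{done: make(chan struct{})}
}

// Stop marks the node stopped exactly once; a second call is a no-op
// rather than a double close.
func (n *node) Stop() {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.stopped {
		return
	}
	n.stopped = true
	close(n.done)
}

// Running is what the ticker-based goroutines would consult before
// touching the underlying IPFS node.
func (n *node) Running() bool {
	n.mu.Lock()
	defer n.mu.Unlock()
	return !n.stopped
}

func main() {
	n := newNode()
	n.Stop()
	n.Stop() // safe to call again, even when toggled violently
	fmt.Println(n.Running()) // false
}
```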

Comment on photos

This would be a dag structure like the photo directories, something like:

  • a file called parent that contains the root cid of the target photo
  • a file called comment that contains the actual comment text (much like caption file in the photo dag)
  • a file called meta which is much like the photo meta file, containing created, username, and peer_id fields.
  • a file called last which contains the cid of the last comment OR photo update this peer made in that room, this way we still only need to republish HEAD update for each room

That covers the storage mechanism. We can do an indexing setup just like photos. So, a table in the SQLite db that has columns for the fields we want to index: parent, created, username, peerId.
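For illustration, the proposed layout and index might map to Go and SQL roughly like this (field names follow the list above; the types and column definitions are assumptions, not the actual schema):

```go
package main

import (
	"fmt"
	"time"
)

// commentNode mirrors the proposed dag layout, one field per file.
type commentNode struct {
	Parent   string // root cid of the target photo
	Comment  string // the comment text (like the caption file)
	Created  time.Time
	Username string
	PeerID   string
	Last     string // cid of this peer's previous comment or photo update
}

// createCommentIndex is a guess at the SQLite index table described
// above: one column per field we want to query on.
const createCommentIndex = `
CREATE TABLE IF NOT EXISTS comments (
    id       TEXT PRIMARY KEY,
    parent   TEXT NOT NULL,
    created  INTEGER NOT NULL,
    username TEXT,
    peerId   TEXT
);
CREATE INDEX IF NOT EXISTS comments_parent ON comments (parent);
`

func main() {
	c := commentNode{Parent: "QmPhotoRoot", Comment: "great shot", Created: time.Now()}
	fmt.Println(c.Parent)
}
```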

pubsub: already have connection to peer

ERROR pubsub: already have connection to peer: <peer.ID RmNT6a> asm_amd64.s:2361

These crop up in the CLI's stdout if it's been alive for a while. They don't seem to hurt anything, but it doesn't feel good either.

Intercept IPFS error logging

19:39:09.689 ERROR ipns-repub: Republisher failed to republish: failed to find any peer in table (core.go:518)

Seeing stuff like this pop up in the Xcode console and in the CLI prompt. Ideally these would find their way to our log file.

error: could not determine host

The following GoLog error message is cropping up when starting up on android (sim):

E/GoLog: 20:22:15.880 ERROR core: mdns error: could not determine host: open /proc/sys/kernel/hostname: permission denied (builder.go:219)

Seems to be referring to the P2P host, but shared etc. are coming through.

Encrypt sqlite database

This would be a matter of re-enabling / including what OB has done re: encrypting and decrypting the database with a simple password.

Wallet v2

Problems with the current setup

  • Thread members are solely responsible for broadcasting their own state. This will always lead to content loss / undiscoverability when nodes disappear or are only sporadically online (which is very often in our case)
  • Thread members receive updates via floodsub, meaning that they need to be online when updates are sent in order to get them

Enhancements / Design Goals

  1. Meets the Zero-configuration networking paradigm
  2. Nodes do not need to trust each other (everything is verifiable and signed)
  3. Nodes are not solely responsible for broadcasting their state, or in other words, the network as a whole has a memory of each node’s state
  4. No central / foreign nodes in the network
  5. A user should be able to back up his / her entire wallet with just a master-key mnemonic phrase and a collection of encrypted thread keys
  6. Includes mechanisms for individuals to grant limited access to third-party data processors / algorithms to privately and securely read / write user data

Proposal doc here: https://paper.dropbox.com/doc/Textile-Wallet-V2-NoAZLC0jFNOrh4pO9VPI8

Private / shared photo albums or "rooms"

A user should be able to create albums and optionally share them. To do this, we'd need to:

  • create a keypair table which holds additional keys for each room
  • allow client to be subscribed to more than one room at a time

We'll surely run into other tasks along the way, but this should get us most of the way there.

Content deduplication

Hi,

Does Textile support decoupling file content from its metadata? If it's supposed to have a social "share" function and, say, people start using it to share memes from different sources, it's most likely going to be the same image data with different EXIF (or other) headers.

Another example: a friend of mine shared a photo with me and I want to share it to some group, but I definitely want to strip the location and maybe some other metadata from it. There should be a more optimal way than keeping two copies of the same image data on disk. Moreover, imagine we're talking about a 2 GB video file.

These folks are working on something similar (storage over IPFS), and they address this issue by dividing all the data into blobs (which are not one-to-one with files, but lower-level chunks):
https://youtu.be/PlAU_da_U4s?t=671 — here they talk about storage and content deduplication.
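One way to sketch the decoupling in Go: address the raw bytes by a content hash and keep share-specific metadata in a separate small node that references it, so two shares of the same image never duplicate the image bytes. This illustrates the idea only; it is not how go-textile (or IPFS chunking) actually stores data:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// mediaNode keeps per-share metadata (EXIF, captions, location) apart
// from the content it points at, so stripping location for one share
// never touches — or copies — the underlying bytes.
type mediaNode struct {
	ContentHash string            // address of the raw image/video bytes
	Meta        map[string]string // share-specific metadata, stored separately
}

// hashContent addresses a file by its raw bytes only.
func hashContent(raw []byte) string {
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:])
}

func main() {
	raw := []byte("same image bytes")
	a := mediaNode{ContentHash: hashContent(raw), Meta: map[string]string{"gps": "redacted"}}
	b := mediaNode{ContentHash: hashContent(raw), Meta: map[string]string{"gps": "52.5,13.4"}}
	fmt.Println(a.ContentHash == b.ContentHash) // same content, different meta
}
```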

add DoneChan to the bootstrap config

As mentioned on the OB fork of go-ipfs: https://github.com/OpenBazaar/go-ipfs

"core/bootstrap.go add DoneChan to the bootstrap config which is closed when the inital bootstrap finishes. This is in place of blocking for the initial bootstrap."

So, it's an async operation instead of a blocking one. Most of the node's startup time is spent here waiting for bootstrap, and it's quite variable. If we enable this too, the startup experience of the app will be much nicer:

  • no need to wait for the node to come fully online to present the UI (thumbs, etc. are available offline)
  • anything in SQLite can be available offline as well

One mnemonic to rule them all

Currently, we have a key pair for each thread. We wouldn't want the user to have to write down each one. For recovery, we can just make one master key which is used to encrypt all the others, which are then stored in the user's wallet.
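A hedged sketch of the master-key idea: derive a symmetric key from the mnemonic and seal each thread key under it with AES-GCM. A real wallet would use BIP-39 seed derivation and a proper KDF; plain SHA-256 here just keeps the example dependency-free:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// masterKey derives a 32-byte (AES-256) key from the mnemonic.
func masterKey(mnemonic string) []byte {
	sum := sha256.Sum256([]byte(mnemonic))
	return sum[:]
}

// sealThreadKey encrypts one thread key under the master key, so only
// the mnemonic (plus the pile of ciphertexts) is needed for recovery.
func sealThreadKey(master, threadKey []byte) ([]byte, error) {
	block, err := aes.NewCipher(master)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// prepend the nonce so openThreadKey can find it
	return gcm.Seal(nonce, nonce, threadKey, nil), nil
}

// openThreadKey reverses sealThreadKey given the same mnemonic.
func openThreadKey(master, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(master)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, errors.New("sealed data too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	mk := masterKey("correct horse battery staple")
	sealed, _ := sealThreadKey(mk, []byte("thread-key-bytes"))
	pt, _ := openThreadKey(mk, sealed)
	fmt.Println(string(pt)) // thread-key-bytes
}
```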

Crash if unsupported file extension is passed to addPhoto

Discovered this by accidentally passing in a JPEG file with an (incorrect) .HEIC extension. This error condition should be caught and bubbled up to the caller.

I suppose there should also be a test case to test what happens when an unsupported file extension is passed in.
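A minimal sketch of the guard (the supported set and the checkExtension name are assumptions, not actual go-textile code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// supported is an assumed set of photo formats; the real list in
// go-textile may differ.
var supported = map[string]bool{
	".jpg": true, ".jpeg": true, ".png": true, ".gif": true,
}

// checkExtension returns a descriptive error instead of crashing when
// an unsupported extension is passed in.
func checkExtension(path string) error {
	ext := strings.ToLower(filepath.Ext(path))
	if !supported[ext] {
		return fmt.Errorf("unsupported file extension %q", ext)
	}
	return nil
}

func main() {
	fmt.Println(checkExtension("IMG_1234.HEIC")) // an error, not a crash
}
```

Since the reported file was really a JPEG with the wrong extension, an extension check alone can still be fooled; sniffing the first bytes (for example with the standard library's http.DetectContentType) would be more robust.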

Use Ed25519 for peer identity

Node build time dropped from 1.5 minutes to 3 seconds by using an Ed25519 keypair for peer identity (this is on mobile, btw)

from this:

20:41:32.106 [DoInit] [INFO] initializing textile ipfs node at /var/mobile/Containers/Data/Application/E0F703B1-7CD9-479A-9973-F48AB0294572/Documents
20:41:32.107 [identityConfig] [INFO] generating 4096-bit RSA keypair...
20:42:49.144 [identityConfig] [INFO] new peer identity: QmYbRA51M8XjmmaMeC7bBhsxf8At3822mD4SBRaHBjK54y
20:42:49.588 [CreateAlbum] [INFO] creating a new album: default
20:42:49.588 [CreateAlbum] [INFO] generating 4096-bit Ed25519 keypair for: default
20:42:49.608 [CreateAlbum] [INFO] creating a new album: beta
20:42:49.608 [CreateAlbum] [INFO] regenerating Ed25519 keypair from mnemonic phrase for: beta
20:42:49.626 [Start] [INFO] starting node...
20:42:50.404 [printSwarmAddrs] [INFO] swarm listening on /ip4/10.164.191.248/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip4/127.0.0.1/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip4/169.254.3.196/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:856a:68ca:99:9543:21e0:b455/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:856a:68ca:dd40:e831:565b:34da/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:4112:c69b:8206:e55e/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /ip6/::1/tcp/4001
20:42:50.405 [printSwarmAddrs] [INFO] swarm listening on /p2p-circuit/ipfs/QmYbRA51M8XjmmaMeC7bBhsxf8At3822mD4SBRaHBjK54y
20:42:50.407 [printSwarmAddrs] [INFO] swarm announcing /ip4/10.164.191.248/tcp/4001
20:42:50.407 [printSwarmAddrs] [INFO] swarm announcing /ip4/127.0.0.1/tcp/4001
20:42:50.407 [printSwarmAddrs] [INFO] swarm announcing /ip4/169.254.3.196/tcp/4001
20:42:50.407 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:856a:68ca:99:9543:21e0:b455/tcp/4001
20:42:50.407 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:856a:68ca:dd40:e831:565b:34da/tcp/4001
20:42:50.408 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
20:42:50.408 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
20:42:50.408 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
20:42:50.408 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:4112:c69b:8206:e55e/tcp/4001
20:42:50.408 [printSwarmAddrs] [INFO] swarm announcing /ip6/::1/tcp/4001
20:42:54.562 [Generate] [INFO] saved a new cert.pem to: /var/mobile/Containers/Data/Application/E0F703B1-7CD9-479A-9973-F48AB0294572/Documents/cert.pem
20:42:54.563 [Generate] [INFO] saved a new key.pem to: /var/mobile/Containers/Data/Application/E0F703B1-7CD9-479A-9973-F48AB0294572/Documents/key.pem
20:42:54.563 [startGateway] [INFO] decrypting gateway (readonly) server listening on /ip4/127.0.0.1/tcp/9080
20:42:54.563 [Start] [INFO] mobile node is ready

to this:

21:34:12.428 [DoInit] [INFO] initializing textile ipfs node at /var/mobile/Containers/Data/Application/A857D814-CDDE-47E1-85FD-44BCE6248399/Documents
21:34:12.429 [identityConfig] [INFO] generating 4096-bit RSA keypair...
21:34:12.430 [identityConfig] [INFO] new peer identity: QmdaZD7wQQeGGfDE29XrTjnUAqhJ2Z4ZPqQYduN75wNpBJ
21:34:12.588 [CreateAlbum] [INFO] creating a new album: default
21:34:12.588 [CreateAlbum] [INFO] generating 4096-bit Ed25519 keypair for: default
21:34:12.607 [CreateAlbum] [INFO] creating a new album: beta
21:34:12.607 [CreateAlbum] [INFO] regenerating Ed25519 keypair from mnemonic phrase for: beta
21:34:12.625 [Start] [INFO] starting node...
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip4/10.164.191.248/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip4/127.0.0.1/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip4/169.254.3.196/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:856a:68ca:99:9543:21e0:b455/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:856a:68ca:dd40:e831:565b:34da/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/2600:380:85ae:4cfe:4112:c69b:8206:e55e/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /ip6/::1/tcp/4001
21:34:14.605 [printSwarmAddrs] [INFO] swarm listening on /p2p-circuit/ipfs/QmdaZD7wQQeGGfDE29XrTjnUAqhJ2Z4ZPqQYduN75wNpBJ
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip4/10.164.191.248/tcp/4001
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip4/127.0.0.1/tcp/4001
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip4/169.254.3.196/tcp/4001
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:856a:68ca:99:9543:21e0:b455/tcp/4001
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:856a:68ca:dd40:e831:565b:34da/tcp/4001
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
21:34:14.606 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
21:34:14.607 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:107b:5442:2b33:a0c9/tcp/4001
21:34:14.607 [printSwarmAddrs] [INFO] swarm announcing /ip6/2600:380:85ae:4cfe:4112:c69b:8206:e55e/tcp/4001
21:34:14.607 [printSwarmAddrs] [INFO] swarm announcing /ip6/::1/tcp/4001
21:34:15.759 [Generate] [INFO] saved a new cert.pem to: /var/mobile/Containers/Data/Application/A857D814-CDDE-47E1-85FD-44BCE6248399/Documents/cert.pem
21:34:15.760 [Generate] [INFO] saved a new key.pem to: /var/mobile/Containers/Data/Application/A857D814-CDDE-47E1-85FD-44BCE6248399/Documents/key.pem
21:34:15.760 [startGateway] [INFO] decrypting gateway (readonly) server listening on /ip4/127.0.0.1/tcp/9080
21:34:15.760 [Start] [INFO] mobile node is ready
