Gevulot

Gevulot is a permissionless and programmable layer one blockchain for deploying zero-knowledge provers and verifiers as on-chain programs. It allows users to deploy and use entire proof systems on-chain, with minimal computational overhead as compared to single prover architectures. The vision of Gevulot is to make the creation and operation of zk-based systems, such as validity rollups, as easy as deploying smart contracts.

For a more in-depth look at the network design see our docs.

The current status of the project is pre-alpha.

Gevulot Node

Gevulot node is written in Rust and packaged into a container. It uses QEMU-KVM as its hypervisor to run unikernel programs.

Building container

To build Gevulot node container image:

podman build -t gevulot-node .

Running the node

To run the node, refer to the installation guide.

Development

For development, you need the following dependencies (package names for Fedora):

  • openssl-devel
  • protobuf
  • protobuf-c
  • protobuf-compiler
  • protobuf-devel

Database

Local postgres container under systemd

Local development postgres can be run e.g. as a user's quadlet systemd unit:

~/.config/containers/systemd/gevulot-postgres.container

[Install]
WantedBy=default.target

[Container]
ContainerName=gevulot-postgres

Image=docker.io/library/postgres:16-alpine

Environment=POSTGRES_USER=gevulot
Environment=POSTGRES_PASSWORD=gevulot
Environment=POSTGRES_DB=gevulot

Network=host
ExposeHostPort=5432

Initialization

sqlx-cli can be run from crates/node directory as follows:

  • Create database:
    • cargo sqlx database create --database-url postgres://gevulot:gevulot@localhost/gevulot
  • Run DB migrations:
    • cargo sqlx migrate run --database-url postgres://gevulot:gevulot@localhost/gevulot
  • Refresh the SQLX cache:
    • cargo sqlx prepare --database-url postgres://gevulot:gevulot@localhost/gevulot

License

This library is licensed under either of the following licenses, at your discretion.

Apache License Version 2.0

MIT License

Any contribution that you submit to this library shall be dual licensed as above (as defined in the Apache v2 License), without any additional terms or conditions.

gevulot's People

Contributors

0x5459, bhechinger, dependabot[bot], ghostant-1017, kylegranger, musitdev, teempai, tuommaki, vlopes11


gevulot's Issues

Workload scheduling VRF

Workload scheduling VRF must take existing workload into account.

Investigate incorporating an ordered work queue by task size. Then try to search for new random nodes that have the least enqueued work.

Try to avoid:

  • Scheduling work to nodes with tasks.
  • Scheduling work to nodes with heavy / long running tasks (tx gas fee / deadline) when no free nodes are available.
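The "least enqueued work" preference could be sketched as follows. `Node` and `pick_node` are illustrative names, not the actual scheduler API; a real implementation would first draw the candidate set via the VRF, then apply this tie-break:

```rust
// Sketch: among randomly drawn candidate nodes, prefer the one with the
// least enqueued work, so free nodes win over loaded ones and loaded nodes
// are used only when no free node is available.
#[derive(Debug, Clone)]
struct Node {
    id: u32,
    enqueued_tasks: usize,
}

fn pick_node(candidates: &[Node]) -> Option<&Node> {
    // min_by_key selects an idle node (0 tasks) when one exists,
    // and otherwise the node with the smallest queue.
    candidates.iter().min_by_key(|n| n.enqueued_tasks)
}

fn main() {
    let nodes = vec![
        Node { id: 1, enqueued_tasks: 3 },
        Node { id: 2, enqueued_tasks: 0 },
        Node { id: 3, enqueued_tasks: 5 },
    ];
    println!("picked node {}", pick_node(&nodes).unwrap().id);
}
```

A fuller version would also weight the queue by estimated task cost (tx gas fee / deadline) rather than raw task count.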

Lines overflowing from the screen (clean code improvements)

The crate mentioned in the issue: CLI

There are numerous lines overflowing from the screen, which makes the code harder to read. Line length should be kept between 95 and 110 characters to enhance readability.
Also, even though they do not overflow from the screen, there are lines that could be restructured to make the code more readable.

If the project maintainers decide that this adjustment is necessary, I am ready to do it.

Example lines:
https://github.com/gevulotnetwork/gevulot/blob/main/crates/cli/src/main.rs#L147
https://github.com/gevulotnetwork/gevulot/blob/main/crates/cli/src/server.rs#L173
https://github.com/gevulotnetwork/gevulot/blob/main/crates/cli/src/server.rs#L175
https://github.com/gevulotnetwork/gevulot/blob/main/crates/cli/src/main.rs#L91

Have a nice day.

e2e-test reports InvalidRequest

The test program gevulot/crates/tests/e2e-tests/src/main.rs reports:
thread 'main' panicked at crates/tests/e2e-tests/src/main.rs:94:10:
send_transaction: InvalidRequest("failed to persist transaction")

$ RUST_LOG=debug ./gevulot-e2e-tests --prover-img ~/.ops/images/prover --verifier-img ~/.ops/images/verifier --key-file my-local-key.pki --listen-addr 127.0.0.1:8080 --json-rpc-url http://api.devnet.gevulot.com:9944
[2024-06-18T09:54:39Z INFO gevulot_e2e_tests] e2e is running....
Listening on http://127.0.0.1:8080
[2024-06-18T09:54:39Z INFO gevulot_e2e_tests] before deployment:
[2024-06-18T09:54:39Z DEBUG gevulot_node::types::transaction] Transaction::new tx:4e5d71428b7eeaea12e293cd968bf2a21027e58fbe5e333a0cf3712cdbb76178 payload:(Deploy)
[2024-06-18T09:54:39Z DEBUG hyper::client::connect::dns] resolving host="api.devnet.gevulot.com"
[2024-06-18T09:54:39Z DEBUG hyper::client::connect::http] connecting to 34.88.251.176:9944
[2024-06-18T09:54:40Z DEBUG hyper::client::connect::http] connected to 34.88.251.176:9944
[2024-06-18T09:54:40Z DEBUG hyper::proto::h1::io] flushed 1535 bytes
[2024-06-18T09:54:40Z DEBUG hyper::proto::h1::io] parsed 3 headers
[2024-06-18T09:54:40Z DEBUG hyper::proto::h1::conn] incoming body is content-length (92 bytes)
[2024-06-18T09:54:40Z DEBUG hyper::proto::h1::conn] incoming body completed
[2024-06-18T09:54:40Z DEBUG hyper::client::pool] pooling idle connection for ("http", api.devnet.gevulot.com:9944)
thread 'main' panicked at crates/tests/e2e-tests/src/main.rs:100:10:
send_transaction: InvalidRequest("failed to persist transaction")

Add an intermediary directory to tx associated file path

Currently, the tx-associated files are stored at the root of the data directory. Like images or logs, they should be saved in their own subdirectory.
Prepend the directory name txfiles to the path of every saved tx file so that all files are stored under the `txfiles` subdirectory.
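The resulting path construction could look like this minimal sketch; `tx_file_path`, the data-directory location, and the file name are all illustrative:

```rust
use std::path::{Path, PathBuf};

// Sketch: place tx-associated files under a `txfiles` subdirectory of the
// data directory instead of at its root.
fn tx_file_path(data_dir: &Path, file_name: &str) -> PathBuf {
    data_dir.join("txfiles").join(file_name)
}

fn main() {
    // Hypothetical data directory; the node's actual layout may differ.
    let p = tx_file_path(Path::new("/var/lib/gevulot"), "proof.bin");
    println!("{}", p.display());
}
```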

Improve transaction construction API

Currently, Transaction and its sub-components are slightly cumbersome to construct and use. Hashing / signing needs to be done separately.

Consider a specific new() constructor or some kind of builder pattern implementation.
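A builder along those lines might look like the following sketch. All names, the toy digest, and the XOR "signature" are placeholders, not the real Transaction API; the point is that hashing and signing happen in one place, at build time:

```rust
// Hypothetical builder that computes the hash and signature in build(),
// so a Transaction can never exist unsigned.
#[derive(Debug, Clone)]
struct Payload(Vec<u8>);

#[derive(Debug)]
struct Transaction {
    hash: u64,      // stand-in for a real digest type
    signature: u64, // stand-in for a real signature type
    payload: Payload,
    nonce: u64,
}

struct TransactionBuilder {
    payload: Option<Payload>,
    nonce: u64,
}

impl TransactionBuilder {
    fn new() -> Self {
        Self { payload: None, nonce: 0 }
    }
    fn payload(mut self, p: Payload) -> Self {
        self.payload = Some(p);
        self
    }
    fn nonce(mut self, n: u64) -> Self {
        self.nonce = n;
        self
    }
    // Hashing and "signing" happen here, in one place.
    fn build(self, signing_key: u64) -> Transaction {
        let payload = self.payload.expect("payload is required");
        let hash = toy_digest(&payload.0, self.nonce);
        let signature = hash ^ signing_key; // placeholder for a real signature
        Transaction { hash, signature, payload, nonce: self.nonce }
    }
}

// Toy digest for the sketch; a real implementation would use a proper hash.
fn toy_digest(bytes: &[u8], nonce: u64) -> u64 {
    bytes
        .iter()
        .fold(nonce, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn main() {
    let tx = TransactionBuilder::new()
        .payload(Payload(b"deploy".to_vec()))
        .nonce(1)
        .build(0xDEADBEEF);
    println!("hash={} sig={}", tx.hash, tx.signature);
}
```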

Implement the PartialEq trait of Transaction

PartialEq for the Transaction struct should be implemented by hand using the hash and/or signature.
None of the other fields are needed for equality.
Even the propagated field can cause strange behavior, because two txs can be equal while this field differs.
A tx should only be identified by its hash, and the hash and signature fields should be private.

We should add a new method to create the tx together with its hash and signature, so that it can never be modified. Currently, a tx can be created but not signed.

The payload shouldn't implement PartialEq. It's not useful, and it can even lead to issues when you need to add a payload for which equality doesn't really exist.
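A hand-written PartialEq restricted to hash and signature could look like this sketch (field types are simplified stand-ins for the real ones):

```rust
// Sketch: transaction equality derived solely from hash + signature,
// ignoring node-local fields such as `propagated`.
#[derive(Debug, Clone)]
struct Transaction {
    hash: [u8; 4],      // stand-in for a real digest
    signature: [u8; 4], // stand-in for a real signature
    propagated: bool,   // node-local; must not affect equality
}

impl PartialEq for Transaction {
    fn eq(&self, other: &Self) -> bool {
        self.hash == other.hash && self.signature == other.signature
    }
}
impl Eq for Transaction {}

fn main() {
    let a = Transaction { hash: [1, 2, 3, 4], signature: [9, 9, 9, 9], propagated: false };
    let b = Transaction { propagated: true, ..a.clone() };
    // Equal despite differing `propagated`.
    println!("a == b: {}", a == b);
}
```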

Implement diff for whitelisted keys in `WhitelistSyncer`

In order to be able to remove keys in a coordinated fashion, add diff support to networking::WhitelistSyncer which compares the keys in the DB with the keys in the latest file and removes those that no longer exist.

In practice, this can be done in memory (the number of keys is relatively low), but another possibility is to create a temporary table in the DB for the new keys, then remove the rows that don't exist there and insert the ones that do.
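The in-memory diff could be sketched with a HashSet; key types are simplified to string slices, and `keys_to_remove` is an illustrative name:

```rust
use std::collections::HashSet;

// Sketch for the WhitelistSyncer diff: keys present in the DB but absent
// from the latest whitelist file should be removed.
fn keys_to_remove<'a>(db_keys: &'a [&'a str], file_keys: &[&str]) -> Vec<&'a str> {
    let latest: HashSet<&str> = file_keys.iter().copied().collect();
    db_keys
        .iter()
        .copied()
        .filter(|k| !latest.contains(k))
        .collect()
}

fn main() {
    let db = ["key-a", "key-b", "key-c"];
    let file = ["key-a", "key-c"]; // key-b was dropped from the whitelist
    println!("{:?}", keys_to_remove(&db, &file));
}
```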

Error reporting from program running in VM

The current gRPC service & shim interface lacks support for proper error reporting from a program running in a VM.

While the error won't be visible in the JSON-RPC API nor in the blockchain, it should ultimately still be logged by the Gevulot node for debugging purposes.

Add a reasonable error type with an error code (int) and message (string) to the gRPC service and the corresponding shim APIs.

NOTE: Update GitBook's Program Development page with corresponding changes.
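A minimal error type of that shape might look like the following sketch; the names are illustrative, not the actual gRPC or shim types:

```rust
use std::fmt;

// Sketch: an error carrying a machine-readable code and a human-readable
// message, as could be mirrored on both the gRPC service and the shim API.
#[derive(Debug, Clone, PartialEq)]
struct ProgramError {
    code: i32,       // error code for programmatic handling
    message: String, // description, suitable for node debug logs
}

impl fmt::Display for ProgramError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "program error {}: {}", self.code, self.message)
    }
}

fn main() {
    let err = ProgramError { code: 5, message: "proof generation failed".into() };
    println!("{err}");
}
```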

error: failed to run custom build command for windows_x86_64_msvc v0.52.4

error: failed to run custom build command for windows_x86_64_msvc v0.52.4

Caused by:
could not execute process C:\Users\Dell\AppData\Local\Temp\cargo-installWfEz4T\release\build\windows_x86_64_msvc-ad1ea0008856dd84\build-script-build (never executed)

Caused by:
Access denied. (os error 5)
warning: build failed, waiting for other jobs to finish...
error: failed to compile gevulot-cli v0.1.0 (https://github.com/gevulotnetwork/gevulot.git#0119c70a), intermediate artifacts can be found at C:\Users\Dell\AppData\Local\Temp\cargo-installWfEz4T.
To reuse those artifacts with a future compilation, set the environment variable CARGO_TARGET_DIR to that path.
PS F:\gevulot>

Local node reports error when running GPU task

  1. deploy the local Gevulot node according to: https://blog.gevulot.com/p/run-a-local-gevulot-prover-node
  2. the local node has 4 GPUs (nvidia-smi, Thu Jul 25 22:44:40 2024; driver 545.23.08, CUDA 12.3):
    • GPU 0: NVIDIA GeForce RTX 4090, bus 00000000:00:03.0, 2MiB / 24564MiB, 0% util
    • GPU 1: NVIDIA GeForce RTX 4090, bus 00000000:00:04.0, 2MiB / 24564MiB, 0% util
    • GPU 2: NVIDIA GeForce RTX 4090, bus 00000000:00:05.0, 2MiB / 24564MiB, 0% util
    • GPU 3: NVIDIA GeForce RTX 4090, bus 00000000:00:06.0, 2MiB / 24564MiB, 2% util
    No running GPU processes were found.

  3. the local Gevulot Node works well for non-GPU tasks.
  4. the Gevulot node version is: gevulot-node 0.1.0
  5. when executing a GPU task, it reports the error:
    2024-07-25T14:23:05.741923Z INFO send_transaction{params=Params(Some("[{"author":"BHtMVMKRPlA2esC0vJpHHsGjfy9eKmB2NxYhgKysC+lV2dycATIWOUqEDTuDsEv1HACDSne+Mf0G3frRNQp9MiI=","hash":[203,90,40,63,176,137,59,21,206,56,119,209,156,195,153,80,220,39,234,81,114,116,191,129,62,39,60,59,195,83,79,94],"payload":{"Run":{"workflow":{"steps":[{"program":[232,127,20,19,118,247,9,195,83,52,240,149,138,125,206,178,109,159,238,187,106,74,147,112,24,81,194,216,189,217,197,82],"args":["--input_file","/workspace/multiplier.input.json","--circuit_file_bls12","/workspace/mycircuit_bls12381.r1cs","--wasm_file_bls12","/workspace/mycircuit_bls12381_wsm.wasm"],"inputs":[{"Input":{"file_name":"/workspace/multiplier.input.json","file_url":"http://127.0.0.1:18080/multiplier.input.json","checksum":"aa6bffe1b40f122d8d1aa465bf00113f657ddd199abd8d2ae8fd9c02082883f4"}},{"Input":{"file_name":"/workspace/mycircuit_bls12381_wsm.wasm","file_url":"http://127.0.0.1:18080/mycircuit_bls12381_wsm.wasm","checksum":"43e935b748fe2dff802b03794915dccbbb4c5720642d94c89ec13646cd769779"}},{"Input":{"file_name":"/workspace/mycircuit_bls12381.r1cs","file_url":"http://127.0.0.1:18080/mycircuit_bls12381.r1cs","checksum":"f796c293efdd254011b6d0f8b89007f081491d6bde759341b616af430b6d23b9"}}]},{"program":[34,100,81,84,47,5,13,220,30,62,226,27,80,223,211,39,42,130,60,56,185,118,171,91,145,113,189,166,129,72,84,43],"args":["--circom_file","/workspace/lr_chunk_0.circom","--proof_file","/workspace/lr_chunk_0/lr_proof.bin"],"inputs":[{"Output":{"source_program":[232,127,20,19,118,247,9,195,83,52,240,149,138,125,206,178,109,159,238,187,106,74,147,112,24,81,194,216,189,217,197,82],"file_name":"/workspace/debug.log"}}]}]}}},"nonce":0,"signature":{"r":[542089984,2073149236,2642676425,3370119945,3881333241,2052414116,2810163304,4041040831],"s":[839624062,2233849498,2378880005,495096854,2360132086,3474276959,1436625617,621072822]},"state":null}]")) ctx=RPC Context}: gevulot::rpc_server: close time.busy=49.6µs time.idle=23.6ms
    2024-07-25T14:23:05.742135Z INFO get_transaction{params=Params(Some("[[203,90,40,63,176,137,59,21,206,56,119,209,156,195,153,80,220,39,234,81,114,116,191,129,62,39,60,59,195,83,79,94]]")) ctx=RPC Context}: gevulot::rpc_server: JSON-RPC: get_transaction()
    2024-07-25T14:23:05.743134Z INFO get_transaction{params=Params(Some("[[203,90,40,63,176,137,59,21,206,56,119,209,156,195,153,80,220,39,234,81,114,116,191,129,62,39,60,59,195,83,79,94]]")) ctx=RPC Context}: gevulot::rpc_server: close time.busy=195µs time.idle=806µs
    2024-07-25T14:23:07.144909Z ERROR gevulot::vmm::qemu: tx: 0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a - Failed to get QEMU started. Giving up.
    2024-07-25T14:23:07.144980Z WARN gevulot::scheduler: tx 0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a - failed to start program f75b9cb3f12879738972ad001c4817214c36e8061d2b28a64f6c953f4904a35f: Failed to start QEMU
    2024-07-25T14:23:07.145406Z INFO gevulot::vmm::qemu: Tx:0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a Program:f75b9cb3f12879738972ad001c4817214c36e8061d2b28a64f6c953f4904a35f starting QEMU. args:
    CommandArgs {
    inner: [
    "-machine",
    "q35",
    "-device",
    "pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3",
    "-device",
    "pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1",
    "-device",
    "pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2",
    "-device",
    "virtio-scsi-pci,bus=pci.2,addr=0x0,id=scsi0",
    "-device",
    "scsi-hd,bus=scsi0.0,drive=hd0",
    "-vga",
    "none",
    "-smp",
    "32",
    "-device",
    "isa-debug-exit",
    "-m",
    "305536M",
    "-device",
    "virtio-rng-pci",
    "-machine",
    "accel=kvm:tcg",
    "-cpu",
    "max",
    "-drive",
    "file=./data/node/images/f75b9cb3f12879738972ad001c4817214c36e8061d2b28a64f6c953f4904a35f/prover-gpu,format=raw,if=none,id=hd0,readonly=on",
    "-display",
    "none",
    "-serial",
    "stdio",
    "-virtfs",
    "local,path=./data/node/vmfiles/0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a/workspace,mount_tag=0,security_model=none,multidevs=remap,id=hd0",
    "-device",
    "vhost-vsock-pci,guest-cid=37",
    "-qmp",
    "tcp:localhost:1061,server",
    "-device",
    "vfio-pci,rombar=0,host=1",
    ],
    }

2024-07-25T14:23:09.524585Z ERROR gevulot::watchdog: Scheduler Health signal Timeout, no health signal available.
2024-07-25T14:23:12.436572Z ERROR gevulot::vmm::qemu: tx: 0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a - Failed to get QEMU started. Giving up.
2024-07-25T14:23:12.436663Z WARN gevulot::scheduler: tx 0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a - failed to start program f75b9cb3f12879738972ad001c4817214c36e8061d2b28a64f6c953f4904a35f: Failed to start QEMU
2024-07-25T14:23:12.437134Z INFO gevulot::vmm::qemu: Tx:0afdefa3cb7f2d6eb27126fd62fa9d49170302a4108d2b0be40857669069625a Program:f75b9cb3f12879738972ad001c4817214c36e8061d2b28a64f6c953f4904a35f starting QEMU. args:
CommandArgs {
inner: [
"-machine",
"q35",
"-device",

Saving a Tx and its associated files in the DB is not atomic

I see that if the add_transaction method of postgres.rs is called twice with the same tx, the tx is not saved a second time, but the associated files are (for proof and verify txs). You end up with a tx that has two files.
This is because the tx insert does nothing:

sqlx::query(
    "INSERT INTO proof ( tx, parent, prover, proof ) VALUES ( $1, $2, $3, $4 ) ON CONFLICT (tx) DO NOTHING",
)

and afterwards the file save always succeeds:

let mut query_builder = sqlx::QueryBuilder::new(
    "INSERT INTO txfile ( tx_id, name, url, checksum )",
);

To solve this, we can add a unique constraint on the file checksum and a DO NOTHING clause to that insert.

Or execute the file insert only if the proof insert was actually done.
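The second option can be illustrated with a small in-memory model; a real fix would check `rows_affected()` on the sqlx result of the proof insert and wrap both statements in one database transaction:

```rust
use std::collections::{HashMap, HashSet};

// In-memory model of the fix: insert the tx-associated files only when the
// proof insert actually affected a row (i.e. the tx was new).
struct Store {
    proofs: HashSet<String>,             // tx hashes that already have a proof row
    files: HashMap<String, Vec<String>>, // tx hash -> associated files
}

impl Store {
    fn new() -> Self {
        Self { proofs: HashSet::new(), files: HashMap::new() }
    }
    // Mirrors `INSERT ... ON CONFLICT (tx) DO NOTHING`: returns rows affected.
    fn insert_proof(&mut self, tx: &str) -> u64 {
        if self.proofs.insert(tx.to_string()) { 1 } else { 0 }
    }
    fn add_transaction(&mut self, tx: &str, file: &str) {
        // Guard the file insert on the proof insert having done something.
        if self.insert_proof(tx) > 0 {
            self.files.entry(tx.to_string()).or_default().push(file.to_string());
        }
    }
}

fn main() {
    let mut store = Store::new();
    store.add_transaction("tx1", "proof.bin");
    store.add_transaction("tx1", "proof.bin"); // duplicate call: no second file
    println!("files for tx1: {:?}", store.files["tx1"]);
}
```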

Devnet Gevulot CLI tool to communicate with the node

Develop a tool to help deploy images and proof / verification calculations.
Tool usage specifications:

A command-line tool to communicate with a Gevulot node.

name: gevulot-cli

Usage: gevulot-cli --jsonurl <URL> --keyfile <KEY FILE PATH> <COMMAND> <CMD ARGS>

Required args:
-j, --jsonurl: URL of the RPC endpoint of the Gevulot node. Default value: "http://localhost:9944"
-k, --keyfile: file containing the private key of the user, used to sign the tx. Default value: "localkey.pki"

Cmd:

deploy: deploy a program.

Usage: gevulot-cli --jsonurl <URL> --keyfile <KEY FILE PATH> deploy --name <PROVER NAME> --prover <PROVER FILE or HASH> --verifier <VERIFIER FILE or HASH> [OPTIONS] --proverimgurl <PROVER URL> --verifierimgurl <VERIFIER URL>
args:
--name: name of the prover
--prover: path of the file containing the prover image to deploy, or the hash of the prover image (--proverimgurl is mandatory in that case). If the file is not found, the parameter is treated as a hash.
--verifier: path of the file containing the verifier image to deploy, or the hash of the verifier image (--verifierimgurl is mandatory in that case). If the file is not found, the parameter is treated as a hash.

Optional args:
--proverimgurl: URL for fetching the prover image. If provided, the node will use this URL to fetch the prover image. If not, the CLI tool starts a local HTTP server to serve the file to the node.
--verifierimgurl: URL for fetching the verifier image. If provided, the node will use this URL to fetch the verifier image. If not, the CLI tool starts a local HTTP server to serve the file to the node.

Output:
 Success: print the prover and verifier hashes.
 Fail: print the error.


exec: execute a set of tasks in order, one after the other.

Usage: gevulot-cli --jsonurl <URL> --keyfile <KEY FILE PATH> exec --task <JSON DATA OF THE TASK> --task <JSON DATA OF THE 2nd TASK> ...

JSON format of the task data:
{
    "program": "Program Hash",
    "cmd_args": [ { "name": "arg name", "value": "arg value" }, ... ],
    "inputs": []
}

Output:
 Success: result of the execution (TBD).
 Fail: print the error.

Implement node sync

Introduction

Currently, each Gevulot node runs in isolation, and if there is downtime for any given node, it will miss out on the transactions distributed in the meantime. Due to the deterministic and cross-verifying (and kind of HA) nature of the overall devnet cluster, this has been acceptable for the devnet so far.

Recently, however, we learned of a use case where a single proving workload would be too resource-intensive to execute on all [available] nodes as the design stands, and we need to incorporate a VRF to schedule Run transactions only to individual nodes. This in turn requires node syncing functionality, so that transactions won't get lost from the devnet overall and so that a node can catch up from where it left off after maintenance downtime.

Objective

Implement node syncing so that when a node starts from scratch, or when it has been shut down for some period of time, it syncs itself from another peer.

After syncing, the new node must have all transactions in the database.

Transactions that have been received via the syncing mechanism must not be re-executed. These transactions must also retain their original timestamps. Missing deployments / programs must be downloaded.

Syncing should take place before the scheduler is started.

In all cases, the nodes in the cloud should always have all data at any given point in time.

Possible ideas for implementation

Since devnet does not have any kind of consensus protocol or other distributed system's ordering mechanism employed, there is no way of putting all transactions into absolute order on distinct nodes.

However, each transaction has a node-specific timestamp and a hash that is globally consistent. With these properties, there should be a way to implement syncing in a relatively efficient and reliable manner.

Transactions can be aggregated into groups by some time period, and within each group they can be sorted by transaction hash. When comparing the time periods between nodes, the nodes should be able to find a common beginning and end for the aggregation groups. To make this grouping deterministic in a distributed setting, we could borrow the "chunking" mechanism from the FastCDC algorithm and compute a deterministic checkpoint from a stream of concatenated transaction hashes.

These groups can then be hashed and marked as synced.
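The deterministic grouping could be sketched as follows; the rolling digest and mask here are toy stand-ins for a FastCDC-style gear hash, and the function names are illustrative:

```rust
// Content-defined grouping sketch: stream transaction hashes in timestamp
// order and cut a group boundary whenever a rolling digest matches a mask.
// Because the cut points depend only on the hash stream, every node that
// sees the same ordered transactions computes the same groups.
fn group_boundaries(tx_hashes: &[u64], mask: u64) -> Vec<usize> {
    let mut acc: u64 = 0;
    let mut cuts = Vec::new();
    for (i, h) in tx_hashes.iter().enumerate() {
        acc = acc.rotate_left(13) ^ *h; // deterministic rolling digest
        if acc & mask == 0 {
            cuts.push(i + 1); // group ends after this transaction
            acc = 0;
        }
    }
    cuts
}

fn main() {
    // Synthetic hash stream standing in for real transaction hashes.
    let hashes: Vec<u64> = (0..1000u64)
        .map(|i| i.wrapping_mul(0x9E3779B97F4A7C15))
        .collect();
    let cuts = group_boundaries(&hashes, 0xF);
    println!("{} group boundaries", cuts.len());
}
```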

When a node starts, it can submit a list of synced groups (or maybe the last 50 groups?) and that way find the optimal starting point for syncing.

Once the starting point has been found, the node that is missing data can request all transactions (serialized, containing timestamp and execution status) for the next group and persist them, iterating this way up to the present state. The P2P network should be running the whole time, receiving transactions into the scheduling queue (but the scheduler must not run yet). If the syncing method receives a transaction with executed: true while the same transaction is still sitting in the scheduling queue, it should be removed from the queue.

One should also consider the situation where two nodes have forked for some reason and diverged. In this case, the nodes should find the last common checkpoint (group of transactions) and from there onward proceed by exchanging lists of transaction hashes for the following groups and syncing each other's missing transactions, effectively "zipping" themselves up to the present state.

In the present architecture, it is most natural to incorporate the related functions into the P2P implementation, where individual nodes can perform unicast RPC messaging.

Activate health check for all node types

In the current implementation, the health-check HTTP server is only created for the executing node. Activate it for non-executing nodes as well; it allows detecting whether a node is started and running.
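For illustration, such an endpoint can be as small as this std-only sketch; the real node presumably reuses its existing HTTP stack, and the port and response body here are arbitrary:

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

// Minimal health-check responder: answers any HTTP request with 200 OK,
// which is enough for a liveness probe to detect that the node is up.
fn serve_health_once(listener: &TcpListener) -> std::io::Result<()> {
    let (mut stream, _) = listener.accept()?;
    let mut buf = [0u8; 512];
    let _ = stream.read(&mut buf)?; // ignore the request details
    stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")?;
    Ok(()) // stream is dropped here, closing the connection
}

fn main() {
    // Port 0 lets the OS pick a free port for the sketch.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind health endpoint");
    println!("health endpoint on {}", listener.local_addr().unwrap());
    // serve_health_once(&listener).unwrap(); // would block until a probe connects
}
```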
