
lnpbps's Issues

Protocol doc nits and framing issues for LNPBP-2

The following are a few nits and questions I came up with while reviewing LNPBP-2.

Nits:

With the following updates [11] LN will likely will change a single P2WPKH (named to_remote)...
to ->
With the following updates [11] LN will likely change a single P2WPKH (named to_remote)...

an arbitrary public key (even with an unknown corresponding private key), if OP_RETURN scriptPubkey is used;
to ->
an arbitrary public key (even with an unknown corresponding private key), if OP_RETURN type scriptPubkey is used;

Computes en elliptic curve point and its corresponding public key S as sum of elliptic curve points corresponding
to the original public keys.
to ->
Computes a public key S as the sum of the original public keys. (Rationale: since elliptic curve points are the public keys, mentioning them twice seems redundant.)

on the second step of the protocol (according to LNPBP-1 specification [12]) HMAC-SHA256 is provided with the value
of S (instead of P), i.e. hence we commit to the sum of all the original public keys;

Maybe adding the HMAC equation here, HMAC_SHA256(SHA256("LNPBP2") || SHA256(<protocol-specific-tag>) || msg, S), would be more expressive.

on the forth step of the protocol the tweaking-factor based point F is added to the P (a single original public
key), resulting in key T
to ->
on the fourth step of the protocol the tweaking-factor based point F is added to the P (a single original public
key), resulting in key T, i.e. T = P + G * HMAC_SHA256(SHA256("LNPBP2") || SHA256(<protocol-specific-tag>) || msg, S)
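
For illustration, here is a minimal Rust sketch of this fourth step, assuming the rust-secp256k1 and bitcoin hashes crates, and assuming S (serialized) acts as the HMAC key with the tagged concatenation as the HMAC message — the notation above does not fix the argument order, so treat this as an assumption rather than the normative procedure:

use bitcoin::hashes::{sha256, Hash, HashEngine};
use bitcoin::hashes::hmac::{Hmac, HmacEngine};
use secp256k1::{All, PublicKey, Secp256k1, SecretKey};

// T = P + G * HMAC_SHA256(SHA256("LNPBP2") || SHA256(<tag>) || msg, S)
fn lnpbp2_tweak(
    secp: &Secp256k1<All>,
    p: &PublicKey,   // the single original key being tweaked
    s: &PublicKey,   // S: sum of all original public keys
    tag: &[u8],      // protocol-specific tag
    msg: &[u8],
) -> Result<PublicKey, secp256k1::Error> {
    // Assumption: the serialized S acts as the HMAC key.
    let mut engine = HmacEngine::<sha256::Hash>::new(&s.serialize());
    engine.input(&sha256::Hash::hash(b"LNPBP2").into_inner());
    engine.input(&sha256::Hash::hash(tag).into_inner());
    engine.input(msg);
    let f = Hmac::<sha256::Hash>::from_engine(engine);
    // F = f * G; then T = P + F
    let f_point = PublicKey::from_secret_key(secp, &SecretKey::from_slice(&f.into_inner())?);
    p.combine(&f_point)
}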

Constructs and stores an extra-transaction proof (ETP), which structure depends on the generated scriptPubkey
type:
to ->
Constructs and stores an extra-transaction proof (ETP), which is a structure that depends on the generated scriptPubkey
type and consists of:
a)....
b)...

There is also a minor markup formatting issue here, making the resulting paragraph structure a bit confusing.

The revel protocol is usually run between the committing and verifying parties; however it may be used by the
committing party to publicaly revel the proofs of the commitment. These proofs include:
to ->
The reveal protocol is usually run between the committing and verifying parties; however, it may be used by the
committing party to publicly reveal the proofs of the commitment. These proofs include:

The proposed cryptographic commitment scheme is fully compatible with any other LNPBP1-based commitments for the
case of P2PK, P2PKH and P2WPH transaction outputs, since they always contain only a single public key and the original
to ->
The proposed cryptographic commitment scheme is fully compatible with any other LNPBP1-based commitments for the
case of P2PK, P2PKH and P2WPKH transaction outputs, since they always contain only a single public key and the original

The author does not aware of any P2(W)SH or non-OP_RETURN P2S cryptographic commitment schemes existing before this
to ->
The author is not aware of any P2(W)SH or non-OP_RETURN P2S cryptographic commitment schemes existing before this

Reference implementation

https://github.com/LNP-BP/rust-lnpbp/blob/master/src/cmt/txout.rs

It would be better to give https://github.com/LNP-BP/rust-lnpbp/blob/master/src/bp/dbc/lockscript.rs & https://github.com/LNP-BP/rust-lnpbp/blob/master/src/bp/dbc/keyset.rs as the reference implementation links, as this is where the majority of the LNPBP-02 logic lives.

Questions:

Constructs necessary scripts and generates scriptPubkey of the required type. If OP_RETURN scriptPubkey format is
used, it MUST be serialized according to the following rules:

  • only a single OP_RETURN code MUST be present in the scriptPubkey and it MUST be the first byte of it;
  • it must be followed by 32-byte push of the public key value P from the step 2 of the algorithm, serialized
    according to from [15]; if the resulting public key containing the commitment is non-square, a new public key MUST
    be picked and procedure from step 4 MUST BE repeated once more.

It would be nice to explain the rationale for requiring a squared public key specifically for OP_RETURN type outputs, as it is not enforced in any other part of the protocol.
In the implementation here https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/scriptpubkey.rs#L239
this behavior is not reproduced, as the pubkey serialization output is in compressed form.
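
For reference, a rough sketch of the OP_RETURN serialization rule quoted above, assuming the rust-bitcoin script Builder API and assuming the 32-byte value is the x-only serialization of the tweaked key:

use bitcoin::blockdata::opcodes::all::OP_RETURN;
use bitcoin::blockdata::script::{Builder, Script};

// OP_RETURN must be the first byte of the scriptPubkey, followed by a
// single 32-byte push of the public key value carrying the commitment.
fn op_return_commitment(tweaked_key_x: [u8; 32]) -> Script {
    Builder::new()
        .push_opcode(OP_RETURN)
        .push_slice(&tweaked_key_x)
        .into_script()
}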

TODO: Schnorr compatibility

If I understand correctly, in order to be Schnorr compatible we need to enforce that all the pubkeys are squared. Do we need any other considerations as well?

Code Review

The composition assignments for P2PK and P2PKH are reversed here
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/scriptpubkey.rs#L111-L112

if the number of original public keys defined at step 1 exceeds one, LNPBP2 tag MUST BE used instead of LNPBP1

This behaviour is not reproduced in the code here
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/keyset.rs#L122
The tag is kept as LNPBP-1.

So these are some minor issues I observed in the specification and the implementation; they leave some scope for improvement. If consensus on the suggested changes is reached, I can start making small PRs to fix them.

LNPBP-2: use miniscript determinism to list public keys in Bitcoin script

There are two options to enumerate all public keys within Bitcoin script (redeem script or custom script within scriptPubkey):

  • emulate bitcoin stack and take the values from it on certain commands signifying public key presence (like OP_CHECKSIG[VALIDATE])
  • convert the script into miniscript and take the public key values out of it

It is proposed to stick to the latter option since it:

  • will be maintained outside of LNP/BP, so all new softforks and possible implementation bug fixes will be picked up automatically
  • requires writing less code

[WIP] API design considerations

Background

There are multiple types of API that can be used by the software stack
related to LNP/BP. Here we analyze criteria for choosing the proper API
technologies and serialization standards for different cases.

In general, software might require API for:

  • Interprocess communications (IPC), including those between daemons, their
    instances, or used in microservice architectures on the same machine or in
    network-connected docker containers behind DMZ.
  • Non-web-based client-server interactions crossing DMZ, following either
    request/reply or subscribe/publish patterns.
  • REST Web-based APIs for requesting resource-based data (i.e. with clear
    data hierarchical structure)
  • Real-time or transactional web-based APIs, including requests for remote
    procedures (RPC) over web or bidirectional/realtime (RT) client-server
    communications using Websockets.

| API type | Sample use cases | Typical scenarios |
|---|---|---|
| IPC | c-lightning IPC | Microservice IPC for servers and daemons |
| Non-web client-server | electrum protocol; bitcoind RPC | High-throughput or non-custodial solutions |
| Web-based REST | esplora | Blockchain explorers |
| Web-based RPC/RT | many web apps | Wallets |

Today, many different API description languages, serialization formats and
transport layers exist that may be used in the mentioned scenarios. However, in
most cases the choice of a particular format is nearly arbitrary or driven by
historical reasons. Here I'd like to systematize the criteria for API
technique selection in LNP/BP for future apps, which may help avoid many bad
practices of the past.

Overview

API components

The classical API consists of three main components:

  1. Data serialization format, allowing all parties involved in communication
    to read/write data with the same deterministic result. Usually classified as
    binary or textual based on human readability/ASCII character set. The most
    common formats are:
    • ASCII-based/text/human readable:
      • XML (XML Schema)
      • JSON (JSON Schema)
      • YAML (YAML Schema)
    • Binary serialisation
      • BSON
      • Protocol buffers
      • ASN.1
      • RPC framework-specific (like used in ZMQ, Apache Thrift)
      • Custom/vendor-specific (Bitcoin core and others)
    Many serialization formats have a schema-based (mostly for human-readable
    formats) or DSL-based definition of the possible values used by a particular
    application/API, which may be used for automatic code generation and/or data
    packet validation.
  2. API per se, specifying available resources or procedures which may be invoked
    via IPC/network communication. APIs mostly fall into two classes, plus custom
    approaches:
    • RPC (remote procedure call), where each API call consists of the invoked
      procedure name and a list of its arguments - very much like in procedural
      programming languages. Server-side components with the RPC paradigm usually
      have their own state.
    • REST (representational state transfer), used to call ACID-based methods
      on a well-defined hierarchical graph of resources
    • Custom/non-standard approaches, like GraphQL
  3. Transport-layer protocol, defining the means of transporting information
    about API calls and associated data over the underlying network topology:
    • POSIX sockets
    • POSIX IPC
    • TCP/IP
    • UDP/IP
    • HTTP (pure or over TLS/SSL)
    • Websockets (pure or over TLS/SSL)
    • Tor/SOCKS

Many existing API automation frameworks (see below) cover more than a single
API component.

API Protocols and Frameworks

Here we provide information only about modern and commonly used frameworks:

| Framework name / protocol family | Layers | Transport protocol requirements | Best suited/designed for |
|---|---|---|---|
| Apache Thrift | 1 (many), 2 (RPC), 3 (custom) | HTTP(s), TCP | Microservice architectures (only Req/Rep however) |
| GraphQL | 1 (JSON), 2 (custom) | HTTP(s) | Complex data-centric web applications with non-hierarchical data graphs |
| gRPC/Protobuf | 1 (binary/custom), 2 (RPC) | HTTP(s), TCP, ? | Microservice architectures (only Req/Rep however) |
| JSON-RPC | 1 (JSON), 2 (RPC) | HTTP | Legacy/insecure |
| OpenAPI | 1 (JSON), 2 (REST) | HTTP(s) | REST web applications |
| SOAP/WSDL | 1 (XML), 2 (RPC) | HTTP | Enterprise service bus-centered enterprise architectures |
| WAMP | 1 (JSON or other), 2 (RPC) | Websockets, TCP, POSIX | Real-time web apps, socket-based apps |
| XML-RPC | 1 (XML), 2 (RPC) | HTTP | Legacy/insecure |
| ZeroMQ | 1 (binary/custom), 2 (RPC) | POSIX sockets, POSIX IPC, TCP, UDP | High throughput, Pub/Sub, IPC, Microservice architectures |

IPC for Microservices

The requirements for this are:

  • Compact binary data serialization format
  • Support for custom serialization (i.e. consensus-based for Bitcoin-related
    data structures)
  • No third-party code generation tools (safety for consensus-critical data)
  • High throughput transport
  • Support for all types of IPC sockets including Tor
  • Ability to use encryption at transport layer
  • Support of Request-Reply (RPC) and Publish-Subscribe patterns
  • Well suited for serialization of hashes, public keys etc.

Much less important for the protocols:

  • Web compatibility
  • Human readability

ZeroMQ seems to be the tool of choice for the transport layer, which has to be
combined with a custom RPC API DSL and serialization protocol.
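
As a toy illustration of the Req/Rep pattern over a POSIX IPC socket, here is a minimal echo server assuming the Rust zmq crate (the endpoint name is made up):

// Minimal ZeroMQ Req/Rep echo server over an IPC socket (sketch only).
fn main() -> Result<(), zmq::Error> {
    let ctx = zmq::Context::new();
    let rep = ctx.socket(zmq::REP)?;
    rep.bind("ipc:///tmp/lnpbp-demo.rpc")?; // hypothetical endpoint
    loop {
        let request = rep.recv_bytes(0)?; // blocking receive of one request
        rep.send(&request[..], 0)?;       // echo the payload back
    }
}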

Client-server (non-web)

ZeroMQ seems to be the tool of choice here as well.

Web-based REST

OpenAPI seems to be the tool of choice.

Web-based RPC

WAMP seems to be the tool of choice for apps that require live updates
(Websockets).

Another alternative to consider is GraphQL; however, it should be noted that it
usually has poor performance and is not suited for Websocket apps.

End notes

Protocol buffers or Apache Thrift serialization can't be used in all cases due to:

  • A lot of code generation
  • No support for hashes or public keys

Original: https://github.com/dr-orlovsky/notes/blob/master/api_design.md

Restructuring asset-related RGB schemata

As was discussed during the dev call on 15 Jul 2020, use cases for different forms of tokens (stablecoins, shares) will benefit from having separate schemata. Here I propose to define the following set of schemata for all asset-related cases:

  1. Single-issuance fungible digital rights
    • non-inflatable
    • no burn procedure
  2. Securities
    • inflatable
    • burn procedure
    • no reissuance after burning
  3. Coins
    • inflatable
    • burn procedure
    • reissuance after burning
  4. Collectibles
    • unique
    • multiple issues in series
    • burnable?
    • rich metadata

All of these schemata will be proxied by a single asset RGB contract daemon (inside the RGB node) with a single standard API, reducing the complexity of integration from the wallet perspective.

RGB protocol versioning / features

(updated)

I think the best way for RGB versioning is to have an LN-like set of features, with even/odd differentiation, and commit to it in a Schema.

This version will define not the Schema version, but the version of the RGB protocol as a whole, i.e. how the schema data and all smart contract data issued under this schema will be serialized and interpreted. This will allow RGB updates such as the addition of the Simplicity language etc.

With client-side validation we prohibit changing anything in terms of how the protocol works (commitment rules, validation rules etc.) once the contract is created. But RGB as a whole is a set of absolutely unrelated contracts, so I understood that while a single contract can't be "upgraded", nothing prevents us in a client-validated paradigm from having protocol versioning/features for different contracts, so new contracts can be created under RGBv2 for instance (providing mimblewimble aggregation/whatever).

Defining / closing seals on dust UTXO

A dust UTXO is uneconomical to spend, meaning that the value in sats it contains is below the mining fee (in sat/vbyte) required to spend it again. In order to spend the coin, it has to be consolidated with at least one other coin that pays for the left-over transaction fee deficit.

IIRC, Bitcoin Core / Knots nodes remove such a coin from their UTXO set in memory, as it is not expected [but still possible] that it will be spent.

However, RGB changes this dynamic.

A seal can be defined on ANY UTXO [even a non-existing / LN output]. Thus, a seal can potentially be defined on a dust UTXO too. This means that the "real" economic value of spending this coin is in fact higher than only the transfer of the sats, as it also closes the seal and thus transfers ownership of the RGB asset.

Questions

  • Should defining a seal on dust coins be prohibited?
  • Are there any nuances to consider when closing a seal over a dust coin?
  • Are there any issues with full nodes who don't have that dust coin in memory?

Specify the exact rust-miniscript commit in LNPBP-0002

Currently the spec is pretty vague and only says that a script is valid if it can be parsed by Miniscript. We should instead specify which implementation should be used and the exact commit. I think we could still leave it open for now, but we have to settle it before "v1".

Allow decentralized issuance for RGB contracts

Right now RGB contracts may have a single genesis (non-committed), and the rest of the transitions must be committed to within the bitcoin transaction graph. For decentralized issuance, multiple non-committed issuances are required; this can be implemented by allowing RGB contracts to have multiple (sub)geneses under a particular genesis/schema.

LNPBP-2: remove support for non-SegWit outputs

Originally we wanted to support all possible output types; however, it may be improper to promote the usage of non-SegWit output types. If some software does not support SegWit, it probably should be restricted from using RGB as well (from a political, not technical, perspective).

Possible proofs for fungible assets pruning in RGB-20

There are some possibilities for making asset pruning verifiable; here I try to summarize them.

Zero-knowledge proofs

The proofs can be made with a probabilistically checkable proof procedure - or potentially with bulletproofs - and these proofs can be included as binary data state attached to the prune seal in the pruning state transition.

During the pruning operation, the issuer performs the usual verification process for the pruned assets (confidential amount verification and anchor verification). This process is then encoded as a Simplicity script with the inputs used at each of its steps. Next, the issuer computes a hash of this script with its data and uses it to construct a probabilistically checkable proof for 1 to 10% of the proof work (or a bulletproof). This part is serialized and supplied with the prune state transition, so any party having these data may verify that the issuer was honest during the pruning process and had not created asset inflation.

Pruning audit

Another alternative may be that the issuer adds to the pruning transition signatures of independent auditors confirming the correctness of the pruning operation. These auditors verify the complete pruning process with all source data.

The auditors may be

  • pre-defined in the issue procedure
  • parties selected by the issuer during the pruning
  • randomly selected from a set of existing public auditors using some sort of Fiat-Shamir heuristic (like using some hash of the pruning transition created in such a way that the issuer cannot malleate it)

In the latter case we may even use the future RGB reputation schema to define the set of auditors in a decentralized fashion.

Protocol framing issues in LNPBP-1

Hi, in order to understand the full scope of RGB, I was parsing through the LNPBP protocols and the rust-lnpbp library. Below are some of the framing nits and protocol documentation issues I encountered in LNPBP-1. Probably I am missing something, but it would be nice if they could be clarified:

Compute HMAC-SHA256 of the lnbp1_msg and P, named tweaking factor: f = HMAC_SHA256(s, P)

s is not defined here. As per the code, it means the lnbp1_msg. Maybe changing it here would be nice, or it can produce unnecessary confusion.

Same issue in the code documentation here
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/pubkey.rs#L116-L117

Utilization of duplicated protocol tag hash prefix guarantees randomness in the first 64 bytes of the resulting tweaking string s, reducing probability for these bytes to be interpreted as a correct message under one of the previous standards.

  • same comment for s

  • The purpose of the protocol-specific tag is to avoid equivalent interpretations of the same preimage between different contexts. I do not see how the prefix guarantees randomness - at least no more than padding the preimage with a constant. Maybe a different framing of the sentence would help?

  • The "duplicated protocol tag hash" in the above sentence seems to suggest that there is double tagged hash append happening (like Taproot specification). But the protocol only suggests a single tagged hash.

Which brings me to my last query,

As per the comment in https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/pubkey.rs#L136-L137

the message should be pre-tagged with the same protocol-specific hash, which would explain the "duplication" part of the previous sentence, but LNPBP-1 doesn't specify anywhere that the message needs to be pre-tagged. It seems like an inconsistency between the doc and the implementation.

These are some of the minor issues I have found with LNPBP-1. It's also possible that I am interpreting these sentences wrongly. Nevertheless, I am opening this issue to initiate discussion on them.

In the coming few days I will be parsing through the rest of the specifications and implementations and will report if I encounter anything.

Accountable pruning procedure for fungible assets schema (RGB-20)

The pruning procedure is a proposed mechanism for truncating the amount of data clients need to store for their assets' history (and pass between them during asset transfers). The currently considered procedure for the fungible asset schema allows an issuer to produce a new set of assets outside of a secondary issue process, and other parties should trust that these assets are backed by some previously issued and burned assets. @afilini pointed out in #20 that this is actually a "trick" that may allow shadowed secondary issuance by dishonest issuers, and proposed to merge pruning and secondary issuance. Here I propose another workflow for asset history pruning with (eventual) provable protection from pruning misuse.

With the proposed procedure we define two types of seals: epoch and prune. Each issuance (primary and secondary) must have at most one epoch seal defined (with empty/no/void associated state), where the absence of an epoch seal means that pruning for this issuance is not supported. An epoch seal may be closed over a state transition of a special type "epoch" (epoch transition), which may define a seal for the next epoch plus multiple prune seals, which will work as was planned before.

This will split the pruning process into pruning epochs, such that the amount of assets that was pruned may be tracked: any asset may be pruned within an epoch only once, and all users may track how many assets were pruned out of the issued supply. This will lead to eventual consistency: once all of the issue has been pruned, users will see that no new assets were produced by the issuer.

LNPBP-5: Unset value of tx position and input/output index fields

Hello! The two fields, bits 32-47 and 49-63, use 0 to encode "no value". This has a couple of drawbacks:

  • the coinbase transaction cannot be represented (as @dr-orlovsky specified);
  • the input/output index values need an arithmetic operation (-1) before being used as an index.

I think that the encoding would be simpler if N is encoded as N, and INT_MAX(bits) is used to encode UNSET.

I would define the constants

  • UNSET_TX_INDEX = INT_MAX(16) = 0xffff
  • UNSET_IO_INDEX = INT_MAX(15) = 0x7fff

I would redefine the entity Block; On-chain

First bit set to 0, bits 32-47 set to UNSET_TX_INDEX, bit 48 set to 0, bits 49-63 set to UNSET_IO_INDEX

and the entity Transaction; On-chain to

First bit set to 0, bit 48 set to 0, bits 49-63 set to UNSET_IO_INDEX
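
A small sketch of the proposed change (my own illustration, not normative), showing how the sentinel removes the -1 arithmetic:

// Proposed constants from this issue.
const UNSET_TX_INDEX: u16 = 0xffff; // INT_MAX(16)
const UNSET_IO_INDEX: u16 = 0x7fff; // INT_MAX(15)

// N is encoded as N; the sentinel encodes "no value".
fn encode_io_index(index: Option<u16>) -> u16 {
    index.unwrap_or(UNSET_IO_INDEX)
}

fn decode_io_index(bits: u16) -> Option<u16> {
    // A decoded Some(n) is directly usable as an index, no -1 needed.
    if bits == UNSET_IO_INDEX { None } else { Some(bits) }
}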

Ticker set up when filling Genesis metadata

Designing RGB smart contracts table

Questions:

  1. Are we using the ticker as a marketing tool or as a unique identifier (like the 'ticker+domain' pair in Liquid)? Especially with regard to the identity case.
  2. Do we need to have limits on possible ticker values?
  3. Do we need the ticker to be provided by the issuer at genesis, or can it be omitted?

Adding multi-key commitments to LNPBP-1 from LNPBP-2

Right now public key commitments (tweaks) under LNPBP-1 and 2 use different tags. However, with key homomorphism and Taproot we will never know whether some key is composed of multiple keys; and we can see single-key commitments as an edge case of multi-key commitments with N=1. So the proposal is to standardize the commitment procedure with an arbitrary number of keys in LNPBP-1, and use LNPBP-2 only for describing how these keys can be deterministically extracted from a given Bitcoin script.

LNPBP-1 says how to tweak (in a secure manner) a single public key; LNPBP-2 is about tweaking scripts, so when multiple public keys are found in a script, we commit to their sum (while still tweaking only one of them, pointed to by the user). The procedures use different tags; however, it seems that this is the same procedure: if we have just a single key, we may still follow it, with the simple fact that the "sum" of the key is the key itself. This will simplify code logic and looks like a more streamlined way of doing things.
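
A sketch of the unified step, assuming rust-secp256k1: with N = 1 the "sum" degenerates to the key itself, so one code path serves both the LNPBP-1 and LNPBP-2 cases.

use secp256k1::{Error, PublicKey};

// S = sum of all public keys extracted from the script; for a single
// key the sum is the key itself, so no special casing is needed.
fn key_sum(keys: &[PublicKey]) -> Result<PublicKey, Error> {
    let (first, rest) = keys.split_first().ok_or(Error::InvalidPublicKey)?;
    rest.iter().try_fold(*first, |sum, pk| sum.combine(pk))
}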

Scheme for adding public key tweak commitments into transaction outputs

Commit to a value with public keys (C2VPK): Generalized commitments based on public key tweaking

Motivation

Lightning Network channel construction already has no non-P2WSH outputs in its HTLC-success and HTLC-timeout transactions, making it impossible to utilize modern RGB and single-use-seal data for both multi-hop and direct payments/state updates. A specific of the protocol is that it requires the creation of HTLC-based outputs and related HTLC-output-spending transactions even for direct payments, so support for P2WSH commitments becomes a requirement for operating RGB assets over the Lightning Network.

Moreover, the next changes to the BOLTs assume that commitment transactions will also be left without P2PKH outputs, since the to_remote output will require a CSV script option in order to fix the existing misalignment of incentives during channel close. Thus, it is required to enable P2WSH pay-to-contract commitments.

This specification proposes a generalized way to make cryptographic commitments based on pay-to-contract-style public key tweaking for any kind of transaction output, namely:

  • legacy P2PK
  • OP_RETURN and non-standard P2S
  • P2(W)PKH
  • P2(W)SH
  • P2WPKH and P2WSH wrapped into P2SH

Specification

To commit to a given message msg using elliptic-curve-based public key tweaking according to LNPBPS-0001 in a given transaction output, a committing party MUST modify each of the public keys P within all bitcoin scripts (scriptPubkey, redeemScript and witnessScript).

For the OP_RETURN P2S variant, in a transaction output containing an OP_RETURN op-code, the code must be followed by a 33-byte compressed tweaked public key TP computed with the algorithm described above. In this case the party can use any original public key for the tweaking procedure, which it can disclose later to the parties to which it aims to reveal the commitment.

Rationale

Why we modify all public keys

It is impossible to introduce a standard deterministic commitment for all possible output types and script variants that can be reliably used without the risk of multiple concurrent commitments placed into the same output.

Why legacy P2PK and other non-standard script schemes are supported

The aim of this standard is to be as universal as possible. While P2PK outputs are considered legacy due to potentially poor resistance to quantum computing attacks and an arguably higher bitcoin blockchain footprint, we see no reason to create an exception from the standard for any legacy use case.

LNPBP-11, 12, 13: RGB public state transitions

Right now only parties owning some RGB contract state (i.e. able to spend a UTXO with some assigned RGB state data, i.e. close a seal over the next state) can update the state with state transitions. While this is a very strong "anarcho-capitalistic" smart contract system, there are cases when independent third parties should be able to modify/interact with a smart contract, for instance "decentralized issuance" (which can be used in bitcoin-backed RGB derivatives, when anyone in the world can lock some bitcoins to some pre-defined Miniscript-template-based UTXO and produce an RGB asset). However, in order to have these assets interoperable (and issuance decentralized) we need all these actions to happen under some single genesis, whence anyone can extend the smart contract history after genesis with this issuance procedure.

Here I propose how this can be achieved with a simple RGB modification, which adds a lot of new use cases to the core of RGB (like "call a method of RGB smart contract").

First, we introduce that genesis and state transitions, in addition to state assignments with single-use seals, may contain (if the Schema allows) a vector of so-called opened valencies. Each opened valency is a "public extension point" of some Schema-defined type: it is not linked to any UTXO, and anyone can create an RGB node of special structure (a public extension of RGB state) connected to such valencies. Moreover, there is no limit on how many public extensions can connect to a single opened valency type (and such limits can't be enforced).

Public extensions have a structure similar to the genesis and state transition structure, with the following differences:

  • like genesis, they do not reference any ancestor seals (since there are none)
  • they link to a specific type of opened valency from a previous state transition or genesis (which must define an opened valency of that type, otherwise the public extension is not valid)

Like any other type of RGB state history node (genesis and state transitions), they can have new state assignments with seals defined (thus creating new state rather than updating existing state, as done in genesis), metadata and Simplicity scripts; their structure and validation rules are defined by the Schema and Schema-provided Simplicity scripts.

Similar to genesis (and unlike owned transitions), public extensions are not committed into bitcoin transactions, thus remaining ephemeral until some other owned transition closes one of their seals/updates their state, after which they become committed into the bitcoin transaction graph through that transition. In practice this means that the public can update RGB contract history only if some of the state owners are willing to accept those updates, which will include the new state into the future subgraph starting from such an RGB state history node.

Single-use seal primitive over Bitcoin blockchain

Motivation

A single-use seal is a generic concept preventing double-definition or double-action (like double-spend), originally proposed by Peter Todd [1]. This specification defines how single-use seals can be constructed on top of the bitcoin blockchain.

Background

A single-use seal is a non-mathematical commitment primitive that allows one to pick the committed fact after the seal itself has been defined.

A single-use seal (or seal) is a replicative state (or consensus?) primitive that allows one to define an option for a future commitment (lazy commitment) to some value (which may not even be known yet), such that:

  • the actual commitment to a message can happen once and only once;
  • the event of the promise fulfillment (i.e. the actual commitment being made) can be proved to independent parties.

A single-use seal may be used to define some value (or, with the usage of cryptographic digests, a message) in the future only once, and prevents an inconsistent view of historical state. A party which would like to prove the event of the commitment must present a witness of the seal being closed over the message.

[implementation-specific]
The parties sharing the same single-use seal must, before its creation, agree on a set of global parameters, namely:

  • deterministic single location of the seal in that medium
  • cryptographic commitment function
  • deterministic definition of the witness for the seal closing

Specification

A single-use seal is defined as a tuple of parameters S_opened = (txid, vout, commitment_scheme, deterministic_commitment_txout_locator). When the seal is closed, it is closed over a value V (which may be just an integer value or a value of a cryptographic hash function), so S_closed = (S_opened, V).

use bitcoin::{Transaction, Txid, TxOut};

/// Reference to the transaction output the seal is defined on.
struct Seal {
    txid: Txid,
    vout: u16,
}

/// `BitcoinFullNodeAPI` and `Error` are left abstract in this sketch.
trait Sealer<V> {
    /// Closes the seal over `value`, embedding the commitment into the witness tx.
    fn close(&self, value: V, witness: &mut Transaction);
    /// Verifies that the seal was closed over `value` against the blockchain.
    fn verify(&self, value: V, bitcoin_blockchain: BitcoinFullNodeAPI) -> Result<V, Error>;

    // Used by the close function internally:
    /// Applies the commitment to the selected transaction output.
    fn commitment_applicator(&self, committing_txout: &mut TxOut);
    /// Deterministically locates the committing output within the witness tx.
    fn commitment_locator(&self, witness: &Transaction) -> &TxOut;
}


References

[1] https://petertodd.org/2017/scalable-single-use-seal-asset-transfer

Protocol Nits and issues for LNPBP-3

Suggested Nits:

Add to this number a previously-agreed values of s and c, or, if c was not defined, use 0 for c value
by default. This will give a commitment-factor x = a + s + c. Since s and c is a 8-bit numbers and a is a
32-bit number, the result MUST BE a 64-bit number, which will prevent any possible number overflows.
to ->
Add to this number the previously-agreed values of s and c (if c was not defined, use 0 as the default value). This will give a commitment factor x = a + s + c. Since s and c are 8-bit numbers and a is a
32-bit number, x fits comfortably within a 64-bit integer without causing integer overflow.

Possible deviation from Impl.

Compute d as d = x mod n. The d will represent a number of transaction output which MUST contain a cryptographic commitment. All other transaction outputs under this protocol MUST NOT contain cryptographic commitments.

Let me know if I am missing something, but as per the implementation here, d is the index of the output, not the number of outputs.
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/tx.rs#L35-L38
A possible restructuring of the statement:

Compute d as d = x mod n. The d will represent the index of the output which MUST contain a cryptographic commitment. All other transaction outputs under this protocol MUST NOT contain any cryptographic commitments.
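
A minimal sketch of the computation as I read it (assuming a is the fee-derived number, s and c the previously agreed values, and n the number of transaction outputs):

// d = (a + s + c) mod n: index of the output that must carry the commitment.
fn commitment_output_index(a: u32, s: u8, c: u8, n: u64) -> u64 {
    // The sum is computed in 64 bits, so it cannot overflow.
    let x = a as u64 + s as u64 + c as u64;
    x % n
}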

State types for fungible assets

Currently there are 3 types of rights:

  1. Asset ownership
  2. Right to inflate
  3. Burning/pruning

Question

Should we add a right to change the metadata / an identity-related right?

Implement pruning with a secondary issuance for fungible assets RGB schema (RGB-20)

The current schema for the RGB1 and RGB2 protocols treats pruning and secondary issuance differently: in order to make a secondary issuance, the issuer would have to spend some "special" seals, which would make it obvious to any external observer that a secondary issuance had happened, preventing the issuer from inflating the supply of an asset undetected.

Pruning, which is essentially a burn-and-then-secondary-issuance, is instead implemented in a different way that doesn't require the issuer to spend the "secondary issuance" seal, probably to avoid "alerting" all the external observers every time pruning occurs.

However, since there is no cryptographic "constraint" that forces the issuer to burn and then re-issue the exact same amount of tokens, the issuer could use a pruning operation to inflate the supply of the asset, with the added bonus that it would be able to claim, and cryptographically prove, that it had never spent the secondary issuance seals.

Digital identity and reputation on RGB

Looking at RGB, I see much untapped potential for truly private digital identity and reputation infrastructure. RGB allows voluntary disclosure of provable facts. The disclosure can also be selective (and RGB allows one to know whether the disclosure was full or selective). Hence, the whole history of somebody's interaction with the RGB ecosystem can become a collection of cryptographic proofs on various topics of one's daily life.

Some of my early thoughts on what can be implemented:

  1. Loyalty programs without registration requirements

So far, loyalty & customer retention programs have been associated with much hassle on both the buyer side (registration & key management) and the seller side (setting up & managing the infrastructure). Now they can work seamlessly without registration, and even include "alliances of shops" without the need to exchange customer data between them. You can forget about GDPR too.

Example claim request: “prove that you had shopped with us or our partners at least once during the last month and receive a cashback”.

  2. Generation of transferrable, provable and sellable data

Your data collected by e.g. a fitness program will come in a universally transferrable format. You can use this provable data with a different fitness program, with a third-party supplemental nutrition program, with your doctor, your personal trainer, your health insurance company, or even sell your data to a sports equipment company, or donate it to health institutions to promote research.

  3. Buying customer-related data, generated by third-party, directly from the customer

Databases (and especially the whole history of transactions) for good, paying customers of third parties are valuable. They allow targeted promotions (either through communication channels or through discount policies). They show patterns of behaviour that may have value for market research and product design. Certain aggregate data may have scientific or commercial value, like health monitoring or geolocation information. Now all data, generated by various sources, can be bought by interested parties directly from the customer.

Example claim request: “show all transactions that you have made with dentists / computer shops / restaurants”, “show all data collected about your heart rate by the third-party application HeartMonitor”, “show your full transactional history on the XYZ exchange”.

  4. Screening and risk-management for leisure activities (dating, festivals, events, parties, group trips & hikes)

There can be various user-defined standards, taking into account the quantity and the quality of provable past experiences.

Example claim request: “show that you have participated in at least three similar events in the last two years”.

Overall, developed ID and reputation systems on RGB will make the hoarding of data by businesses less relevant. Information will be stored privately with its owners, the sovereign individuals, and will be provided or sold to businesses on an "on demand" basis. The role of government-affiliated certifiers will become limited, since provable data will be generated through normal, daily interactions. For example, the provable fact of participating successfully in a series of hackathons can become a more meaningful proof of "sufficient age" than a passport certificate, and a better proof of "sufficient education" than a certified university degree.

Protecting new sealed state UTXO from disclosure during state transition

As a part of the security model, the UTXOs for all closed single-use seals for the past bound states must be known, i.e. all future owners of some state data will know the UTXOs of past owners. However, such a leak is not required for the state transfer process: a party transferring some state to a future owner may assign it to a UTXO commitment, which will be disclosed by the new owner to all future owners, but will keep the UTXO hidden from the previous owner, increasing security.

LNPBP-2: use homomorphic public key commitments for P2(W)SH & custom scripts

It is proposed, instead of tweaking all public keys within a scriptPubkey containing multiple public keys (as in P2(W)SH or custom scripts), to utilize the homomorphic properties of public keys and tweak any one of them, such that the commitment is verified against the sum of all public keys.

This procedure is protected from double commitment attacks and, compared to the previous version of LNPBP-2 where all public keys were tweaked, it is:

  • more efficient in coordination cost
  • protected from "last mover" attack.

It should be noted that the party performing the commitment must know all public keys of the other parties, so while it can be done only once for a given output (no double commitment), this fact can be hidden from other participants, which may be desirable for certain cases. For the cases when it is undesirable, participants may run some protocol similar to Taproot intermediate key preparation, ensuring that none of them does the key tweaking without informing the others.

RGB: Solving non-determinism in tx output order for witness transactions

A witness transaction, closing a single-use seal with some assigned RGB state, must contain a commitment to the new state. This is done by tweaking public key(s) in one of its outputs according to LNPBP-3; the specific output is defined by some constant genesis-based factor (hash of genesis) and the transaction fee. However, this collides with deterministic transaction output ordering (BIP-69): if the transaction contains two or more outputs with the same amount (like in CoinJoin), public key modification may change their order with probability 1 - 1/no_of_same_amount_outputs. Moreover, re-ordering is impossible: untweaking the key in one output and tweaking the other will change the order back, leading to potential deadlocks.

Possible solutions:

  1. Break deterministic ordering: negative impact on privacy in a small number of cases
  2. Re-tweak in a cycle until a solution is found; if a deadlock is reached, fall back to variant 1 (see the toy model after this list)
  3. Tweak all outputs with the same amount. Most private and deterministic; however, it will require wallets to maintain additional information
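
To make the collision concrete, here is a toy model (my own illustration, std-only) where BIP-69 ordering is approximated by sorting (amount, scriptPubkey) tuples; with equal amounts, only the script bytes - which change under tweaking - decide the committed output's position:

type Output = (u64, Vec<u8>); // (amount in sats, scriptPubkey bytes)

// Position of the committed output after BIP-69-style sorting.
fn committed_index(outputs: &[Output], committed: &Output) -> Option<usize> {
    let mut sorted: Vec<&Output> = outputs.iter().collect();
    sorted.sort(); // tuple order models BIP-69: amount first, then script bytes
    sorted.iter().position(|o| *o == committed)
}

fn main() {
    let committed = (1000u64, vec![0x51]); // script bytes change when the key is tweaked
    let other = (1000u64, vec![0x00]);
    // Equal amounts: the tweak alone decides whether the committed output
    // lands at index 0 or 1, which is exactly the re-ordering problem.
    assert_eq!(committed_index(&[committed.clone(), other], &committed), Some(1));
}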

Affected protocols:

  • Lightning channels
  • CoinJoin
  • Bitcoin Wallets implementing BIP-69 (very rare)

Rename: LNP-BP is a misleading name/brand for this project.

I suggest some renaming/rebranding of this project.

I really like the idea of a Bitcoin protocol stack modeled after TCP/IP. It is cool that LNP/BP has a very similar "ring" (sound) to TCP/IP.

But I think it is very confusing and misleading to name the set of specifications LNP/BP when the criterion for proposals is that they not be LNP or BP specifications.

I think the key word we are looking for might be protocol stack or protocol suite. Maybe something like BPStack or BPSuite?

Single-asset LN channel design considerations

On RGBCon0 in Milano @renepickhardt proposed to use single-asset LN channels, which:

  • minimize the amount of LN message customization: in most cases we don't need TLVs and new types of messages, which is especially important for the poorly-extensible gossip protocol
  • simplify invoicing
  • minimize node code customization
  • avoid the reverse American call option problem as a whole

Nevertheless, using single-asset LN channels for RGB has its own trade-offs: the addition of each asset will require a new channel funding transaction, which limits scalability. Here I'd like to discuss the ways we can mitigate this issue.

Channel splitting

Use channel splitting: create a new transaction spending the funding output, which will contain new versions of the funding outputs, one per asset. Don't publish this information on-chain.

Pros:

  • Fast and flexible way to open, close and distribute funds between multiple channels without any on-chain transactions

Cons:

  • Not supported by any existing lightning node; requires a full LNP-node
  • No standards for channel splitting in BOLTs: requires us to await the standard — or work on it as a part of the Generalized LN effort

Channel factories

Use channel factories: the same as above, but instead of splitting within the existing channel we use a channel factory to create a new channel. The pros and cons are exactly as above.

Channel virtualization

Channel "superposition"/virtualization within the same commitment transaction: nodes operate multiple channels sharing single commitment transaction.
Cons:

  • channel updates must be synched/put in strict serial order
  • channels must share the same set of keys
  • probably not standard-compatible
  • low incentives for Lightning community to add support for the standards and existing software
    Pros:
  • potentially can be done with c-lightning

UDP hole punching support in Bifrost


Proposed initially by @prdn

Description



The UDP hole punching protocol allows bypassing firewalls and opening port connections from behind them - as in a home environment or behind an ISP's firewall. The protocol is used by many projects; for example, if you have a Raspberry Pi at home and would like to set up a bitcoin (or any other) node on it, you can use this technology to do that.


This is important since RGB and other LNP/BP projects that require P2P communications outside of LN node connectivity will leverage the Lightning Network protocol instead of building their own custom P2P one. We are planning to use BOLT-8 and BOLT-1 for the transport, framing, authentication and other layers; it's important to note that they will be used outside of the Lightning Network scope, with different port numbers. In this regard, it will be important to make a self-hosted RGB server used by an RGB wallet accessible in cases when you have a home node on a Raspberry Pi. And that is where UDP hole punching can be useful.

Question to address

We need to understand more about the technology and whether it can be combined with Tor: are they complementary, or is the problem already solved by Tor itself?

Asset Pruning/Burning

Open questions:

  1. Which term to use?
  2. Do we need the ability to prohibit Burning/pruning?
  3. Do we need to restrict burning rights to a single UTXO only?
  4. Do we need to add proofs metadata to the pruning transaction?
  5. Do we need to allow the removal of pruning rights at all?

Possible Options:

  1. Disallow procedure for all RGB-20 fungible assets (remove from Schema).
  2. Leave as is.
  3. Add the ability to attach custom proofs (of many possible types); allow asset issuers to mark them as required in Genesis.
  4. Add epochs mechanism with eventual validation.
  5. Combination of 3 & 4.

Lightspeed micropayments for Lightning Network

Abstract

This work proposes a concept for a fast and reliable micropayments protocol
("Lightspeed micropayments for Lightning Network") which allows millions of
transactions per channel without any additional network traffic or
per-transaction signature generation. It requires a generalization of the
Lightning Network (such that parties are able to add additional outputs to
the commitment transactions) and RGB.

1. Background

1.1. Setup

One of the main design goals for the Lightning Network was the idea of
micropayments: fast repeating payments for small amounts (sometimes <1 satoshi)
between two parties.

However, the current Lightning Network design prevents effective micropayments
due to a number of factors, which we will examine on a model setup where
Party 1 (Client) has to pay Party 2 (Server) for each API call.

The setup of the configuration will require two Lightning Nodes; we will examine
the simplest case, where there exists a direct channel between them:

+------------+                                +------------+
| Client app | <----------------------------> | Server app |
+------------+                                +------------+
     |                                                 |
+-------------+                              +-------------+
| Client's LN | <--------------------------> | Server's LN |  
+-------------+                              +-------------+

Fig. 1. Setup of micropayments on arbitrary server API calls.

A possible payment workflow is the following:

  • the Server's LN node generates the invoice,
  • passes it to the server app,
  • which sends it to the client app,
  • the client app initiates the payment from the Client's LN node,
  • the Client's LN node performs the payment via the Lightning channel, which
    requires at least three network requests: update_add_htlc, commitment_signed
    and revoke_and_ack,
  • after fulfilling the payment the client app performs the API call on the
    Server and waits until
  • the server app polls the Server's LN node for the payment fulfillment and
    notifies the client that it is ready to provide it with the data.

1.2. Problem

So, the procedure takes at least:

  • generation of multiple signatures: 2 * (pending_htlc_count * 2 + 2 + 1) -
    two for the new HTLC, one for each of the existing HTLCs and one for the
    commitment transaction; on each side of the channel — i.e. at least
    6 signatures;
  • 4 inter-process communications (2 on each side) between application and
    Lightning Node;
  • 3 network requests/replies between Client and Server Lightning Nodes and
  • 2 network requests between Client and Server apps, including the actual
    API call.

As a result, each API call gets a significant delay required to perform all
network and interprocess communications and signatures. In fact, this renders
current Lightning Network implementation practically unusable for high-frequency
micropayments: they will require at least an order of magnitude more time to
perform than an original API call without an attached payment.

1.3. Issues that need to be fixed

  1. Reduce number of signatures (decrease CPU load on both sides)
  2. Reduce number of network communications (increase speed and reduce
    probability of protocol failure)
  3. Decouple payment from API request, i.e. by providing some information
    in the API call that may prove/assure to the Server app that the payment
    succeeded w/o querying the Lightning Node for confirmation.

2. Initial solution with Generalized Lightning Network protocol

The Generalized Lightning Network protocol (gLNP) is a
concept for establishing and managing arbitrary
channels (bi-directional, multiparty) with arbitrary extensions to the
transaction structure that are negotiated during channel setup.

2.1. Solution specification

2.1.1. Channel setup

With gLNP we may extend the commitment transaction by adding a new "funding"
output to which the Client will allocate some large multiple of the per-API-call
payment, aside from the main channel funds. This allocation (Lightspeed funding,
or LSF) will be controlled by a script in the same manner as the initial channel
funding transaction.

In addition to the LSF, the parties agree on and sign another transaction
spending the LSF output, named the micropayment transaction. This transaction
contains multiple equal outputs, one output per payment for a single API call.
Each of these outputs is controlled by a script locked to either a
Client-generated hash pre-image or the Client's time-locked private key (see the
script sketch after Fig. 2). It is important to note that the hash-spending
script branch does not require a signature by a private key and requires only
the hash preimage, at this stage known only to the Client; at the same time each
of the outputs MUST have a different pre-image.

Funding output:          Commitment tx:
+-----------------+      +--------------+
| 2-of-2 multisig | <--- | to_local     |
+-----------------+      +--------------+
                         | to_remote    |      Micropayment tx:
                         +--------------+      +--------------------------+
                         | **LSF**      | <--- | Hash1-or-pubkey_timelock |
                         +--------------+      +--------------------------+
                         | HTLC's       |      | Hash2-or-pubkey_timelock |
                         +--------------+      +--------------------------+
                                               | Hash3-or-pubkey_timelock |
                                               +--------------------------+
                                               | Hash4-or-pubkey_timelock |
                                               +--------------------------+
                                               | ...                      |
                                               +--------------------------+

Fig. 2. Version 1 of Lightspeed-enabled channel structure (based on pure gLNP).
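
A sketch of one such micropayment output script, assuming the rust-bitcoin Builder API; the exact opcode choice and the relative-timelock encoding are my assumptions, not part of the proposal text:

use bitcoin::blockdata::opcodes::all::*;
use bitcoin::blockdata::script::{Builder, Script};
use bitcoin::PublicKey;

// Spendable instantly with the i-th hash preimage (no signature needed),
// or by the Client's key after a relative timelock expires.
fn micropayment_output_script(hash_i: [u8; 32], delay: i64, client_key: &PublicKey) -> Script {
    Builder::new()
        .push_opcode(OP_IF)
        .push_opcode(OP_SHA256)
        .push_slice(&hash_i)
        .push_opcode(OP_EQUAL)
        .push_opcode(OP_ELSE)
        .push_int(delay)
        .push_opcode(OP_CSV)
        .push_opcode(OP_DROP)
        .push_key(client_key)
        .push_opcode(OP_CHECKSIG)
        .push_opcode(OP_ENDIF)
        .into_script()
}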

2.1.2. Workflow

With this setup the Client app will have a list of N hash preimages, and the
Server can be aware of the list of channel ids and corresponding hashes
associated with each client.

For each API call i < N, the Client adds information on the i-th preimage to the
call parameters (or metadata, like a custom HTTP header). The Server verifies
that the preimage corresponds to a known hash and, if it does, provides the
Client with the requested data.

This reduces the computational and network load for each API call:

  • from >6 down to 0 signatures
  • from 4 interprocess communications on Client and Server side down to 0
    communications
  • from 3 network requests between Client and Server Lightning nodes down to 0
    network requests
  • from 2 network communications between Client and Server app to a single
    API call.

I.e. the proposed scheme removes all additional network and signature
generation load for Lightning micropayments, enabling them in a high-performance
manner.

2.2. Remaining problems

With all its advantages, the scheme still has some significant drawbacks:

  1. In case of a non-cooperative channel closing, the Server will have to publish
    the micropayment transaction, whose size can be enormous (tens of thousands of
    outputs), and it may cost more than the money earned by the Server for
    API calls.
  2. It can't efficiently work with subsatoshi payments.
  3. It is limited in the number of payments in a single micropayment transaction
    to the maximum number of outputs that can fit in a block (tens of thousands).
    This may still be insufficient for high-frequency API calls, which may count
    in the millions per hour (for instance if we talk about a car paying for each
    meter of highway, or other IoT/5G use cases for micropayments).

3. Final solution with RGB

These problems may be solved if the payments shift from a transaction-output
satoshi-based model to a sealed-state RGB token-based model.

Let's assume that the Server app owner issues some "service tokens" using RGB,
which have a fixed value (in terms of either its services or satoshis), or
are equivalent in volume to the number of API calls it may serve.

The Client buys the amount of tokens required for performing the desired number
of server API calls on the market and allocates them to the LSF output of the
Lightning channel opened with the Server. (NB: you may think about this as a
redeemable "pre-paid" balance, since unlike most "pre-paid" services, in our
case unspent tokens can always be removed from the channel and sold on the
market.)

Now, instead of the micropayment transaction spending the LSF into thousands of
outputs, in the RGB-based version the micropayment transaction will have a
single output controlled by a script releasing funds either to the Server's
public key (instantly) - or to the Client's public key with some timelock. At
the same time, the Client constructs a client-validated RGB data structure that
defines N seals (and N can be millions or more) that assign a per-request amount
of tokens to the same single micropayment transaction output. It is important to
know that each of these seals has some unknown secret value (stored in RGB
metadata), some entropy, that makes it impossible for the Server to guess the
data for the client-validated RGB proof without getting this secret from the
Client.

With each request the Client provides the Server with a single secret value, so
the Server will be able to generate the corresponding RGB proof and use a part
of the tokens sealed to the micropayment transaction output.

Funding output:          Commitment tx:
+-----------------+      +--------------+
| 2-of-2 multisig | <--- | to_local     |
+-----------------+      +--------------+
                         | to_remote    |      Micropayment tx:
                         +--------------+      +--------------------------+
                         | **LSF**      | <--- | 2-of-2+timelock multisig | <-+
                         +--------------+      +--------------------------+   |
                         | HTLC's       |                                   Single-use
                         +--------------+                                   seals
                                               Client-validated RGB data:     |
                                               +--------------------------+   |
                                               | Secret1-locked tokens    | --+
                                               +--------------------------+   |
                                               | Secret2-locked tokens    | --+
                                               +--------------------------+   |
                                               | Secret3-locked tokens    | --+
                                               +--------------------------+   |
                                               | Secret4-locked tokens    | --+
                                               +--------------------------+   |
                                               | ...                      | --+
                                               +--------------------------+

Fig. 3. Version 2 of Lightspeed-enabled channel structure (based on gLNP+RGB).

4. Security analysis and further work

The proposed final schema utilizing both gLNP and RGB-based tokens allows fast
and reliable micropayments ("Lightspeed") without counterparty risk within the
payment protocol. It still does have the counterparty risk of the central token
issuer withholding the full pre-paid amount; however, this risk can be
substantially reduced in cases when the utility tokens used for API calls are
freely tradable on the market and/or have a fixed value.

For non-cooperative channel closings the Server has the ability to leave the
Client without any of the remaining tokens; however, even in this case the
Server will not have a direct economic benefit: the Client's tokens which were
not already paid to the Server will simply be lost forever. At the same time the
Server owner will risk its reputation, so this kind of attack can be used for an
exit-scam only and does not introduce additional risk: an exit-scam can be done
in a simpler way by the Server seizing all pre-paid funds and ceasing to provide
the API services, which leaves us with the case of non-protocol counterparty
risk already described in the previous paragraph.

Interestingly, the non-cooperative case of the Client's token loss can also be
technically mitigated with more complex designs providing two symmetric outputs
for the micropayment transaction; however, we believe that while this does
increase the complexity of the scheme, it does not in fact add to safety, due to
the arguments given above.

License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Replace complex ec operations with a hash in LNPBP-0004

The fifth step of LNPBP-0004 says that

For each of the slots that remain empty (the slot number is represented by j):

  • tweak public key R with its own hash H(R) j times: Rj = R + j * H(R) * G
  • compute a 256-bit bitcoin hash of Rj and serialize it into slot j using the bitcoin-style hash serialization format.

Where R is entropy * G.

I think we could replace that with a simple SHA256(entropy || j).
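
A sketch of the proposed replacement, assuming the sha2 crate; the exact serialization of j is my assumption:

use sha2::{Digest, Sha256};

// Filler for empty slot j: SHA256(entropy || j) instead of EC tweaking.
fn filler_slot(entropy: &[u8; 32], j: u64) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(entropy);
    hasher.update(&j.to_le_bytes()); // little-endian encoding of j is an assumption
    hasher.finalize().into()
}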

Should we keep the possibility for the witness txn to contain the output holding the asset that Alice transfers to Bob?

Witness transaction: a txn that contains the commitment; it is a part of the single-use-seal witness.

Intro

RGB uses bitcoin transactions in 2 ways:
1st - to put commitments to what happens with RGB off-chain (a commitment to each transfer, which is also called a 'state transition') into a bitcoin transaction.
2nd - to use a UTXO to allocate some state to, in particular to allocate some assets to this output. When this output is spent, it means that Alice actually did a transfer of that asset, and the txn that spends this output must contain the commitment (meaning it should be a witness txn).

Basically Alice is using 2 bitcoin transactions: one she assigns the asset to, the other being the witness transaction. A key idea from the early days was to combine these 2 transactions into 1, meaning that the witness transaction may contain the output which holds the asset that Alice is transferring to Bob; i.e. Alice assigns the asset not to some other existing transaction output, but to an output that will be created by Alice herself.


The question is whether we need to keep this option.

Raised during the dev call on June 24th, 2020
