lnp-bp / lnpbps
LNP/BP standards for bitcoin layer 2 & 3 protocols
Home Page: https://standards.lnp-bp.org
Some computers can do 23 GH/s of SHA256, which could potentially break it in around 1 hour at the moment.
Switch to 48 bits at the cost of an additional 4 bytes per outpoint.
Raised during the dev call on June 24th, 2020
The following are a few nits and questions I came up with while reviewing LNPBP-2.
With the following updates [11] LN will likely will change a single P2WPKH (named to_remote)...
to ->
With the following updates [11] LN will likely change a single P2WPKH (named to_remote)...
an arbitrary public key (even with an unknown corresponding private key), if OP_RETURN scriptPubkey is used;
to ->
an arbitrary public key (even with an unknown corresponding private key), if OP_RETURN type scriptPubkey is used;
Computes en elliptic curve point and its corresponding public key S as sum of elliptic curve points corresponding to the original public keys.
to ->
Computes a public key S as the sum of the original public keys. (Rationale: as elliptic curve points are the public keys, repeating them twice seems redundant.)
on the second step of the protocol (according to the LNPBP-1 specification [12]) HMAC-SHA256 is provided with the value of S (instead of P), i.e. we commit to the sum of all the original public keys;
Maybe adding the HMAC equation here, HMAC_SHA256(SHA256("LNPBP2") || SHA256(<protocol-specific-tag>) || msg, S), would be more expressive.
on the forth step of the protocol the tweaking-factor based point F is added to the P (a single original public key), resulting in key T
to ->
on the fourth step of the protocol the tweaking-factor based point F is added to the P (a single original public key), resulting in key T, i.e. T = P + G * HMAC_SHA256(SHA256("LNPBP2") || SHA256(<protocol-specific-tag>) || msg, S)
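For concreteness, the suggested equation can be sketched at the byte level in stdlib Python. This is an illustrative reading of the formula, not the reference implementation: the key serialization and the final EC addition T = P + f*G are assumed/omitted.

```python
import hashlib
import hmac

def lnpbp2_tweaking_factor(sum_pubkey: bytes, protocol_tag: bytes, msg: bytes) -> bytes:
    # HMAC_SHA256(SHA256("LNPBP2") || SHA256(<protocol-specific-tag>) || msg, S):
    # reading S (the serialized sum of the original public keys) as the HMAC key
    # and the concatenated tag hashes plus the message as the HMAC data.
    data = (hashlib.sha256(b"LNPBP2").digest()
            + hashlib.sha256(protocol_tag).digest()
            + msg)
    return hmac.new(sum_pubkey, data, hashlib.sha256).digest()

# Hypothetical inputs: a dummy 33-byte compressed key serialization and tag.
f = lnpbp2_tweaking_factor(b"\x02" * 33, b"example-protocol", b"message to commit")
assert len(f) == 32  # the real procedure then computes T = P + f*G on secp256k1
```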
Constructs and stores an extra-transaction proof (ETP), which structure depends on the generated scriptPubkey type:
to ->
Constructs and stores an extra-transaction proof (ETP), which is a structure that depends on the generated scriptPubkey type and consists of:
a) ...
b) ...
There is also a minor markup formatting issue here, making the resulting paragraph structure a bit confusing.
The revel protocol is usually run between the committing and verifying parties; however it may be used by the
committing party to publicaly revel the proofs of the commitment. These proofs include:
to ->
The reveal protocol is usually run between the committing and verifying parties; however, it may be used by the
committing party to publicly reveal the proofs of the commitment. These proofs include:
The proposed cryptographic commitment scheme is fully compatible with any other LNPBP1-based commitments for the
case of P2PK, P2PKH and P2WPH transaction outputs, since they always contain only a single public key and the original
to ->
The proposed cryptographic commitment scheme is fully compatible with any other LNPBP1-based commitments for the
case of P2PK, P2PKH and P2WPKH transaction outputs, since they always contain only a single public key and the original
The author does not aware of any P2(W)SH or non-OP_RETURN P2S cryptographic commitment schemes existing before this
to ->
The author is not aware of any P2(W)SH or non-OP_RETURN P2S cryptographic commitment schemes existing before this
Reference implementation
https://github.com/LNP-BP/rust-lnpbp/blob/master/src/cmt/txout.rs
It would be better to give https://github.com/LNP-BP/rust-lnpbp/blob/master/src/bp/dbc/lockscript.rs & https://github.com/LNP-BP/rust-lnpbp/blob/master/src/bp/dbc/keyset.rs as the reference implementation links, as this is where the majority of the LNPBP-02 logic lives.
Constructs necessary scripts and generates scriptPubkey of the required type. If the OP_RETURN scriptPubkey format is used, it MUST be serialized according to the following rules:
- only a single OP_RETURN code MUST be present in the scriptPubkey and it MUST be the first byte of it;
- it must be followed by a 32-byte push of the public key value P from the step 2 of the algorithm, serialized according to [15]; if the resulting public key containing the commitment is non-square, a new public key MUST be picked and the procedure from step 4 MUST BE repeated once more.
It would be nice to explain the rationale of requiring a squared public key specifically for OP_RETURN-type outputs, as it's not enforced in any other part of the protocol.
In the implementation here https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/scriptpubkey.rs#L239
this behavior is not reproduced as the pubkey serialization output is in compressed form.
TODO: Schnorr compatibility
If I understand correctly, in order to be Schnorr-compatible we need to enforce all the pubkeys to be squared. Do we need any other considerations as well?
The composition assignments for P2PK and P2PKH are reversed here:
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/scriptpubkey.rs#L111-L112
if the number of original public keys defined at step 1 exceeds one, LNPBP2 tag MUST BE used instead of LNPBP1
This behaviour is not reproduced in the code here
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/keyset.rs#L122
The tag is kept as LNPBP-1.
So these are some minor issues I observed in the specification and the implementation; they leave some scope for improvement. If consensus on the suggested changes is achieved, I can start making small PRs to fix them.
There are two options to enumerate all public keys within Bitcoin script (redeem script or custom script within scriptPubkey):
OP_CHECKSIG[VALIDATE]
It is proposed to stick to the latter option, since it:
There are multiple different types of API that can be used by the software stack
related to LNP/BP. Here we analyze the criteria for choosing the proper API
technologies and serialization standards for different cases.
In general, software might require API for:
API type | Sample use cases | Typical scenarios |
---|---|---|
IPC | c-lightning IPC | Microservice IPC for servers and daemons |
Non-web client-server | electrum protocol; bitcoind RPC | High-throughput or non-custodial solutions |
Web-based REST | esplora | Blockchain explorers |
Web-based RPC/RT | many web apps | Wallets |
Today, many different API description languages, serialization formats and
transport layers exist that may be used in the mentioned scenarios. However, in
most of the cases the choice of a particular format is nearly arbitrary or
related to historical reasons. Here I'd like to systematize the criteria for API
technique selection in LNP/BP for future apps, which may help avoid many bad
practices of the past.
The classical API consists of three main components:
Many existing API automation frameworks (see below) cover more than a single
API component.
Here we provide information only about modern and most recently used frameworks:
Framework name / protocol family | Layers | Transport protocol requirements | Best suited/designed for |
---|---|---|---|
Apache Thrift | 1 (many), 2 (RPC), 3 (custom) | HTTP(s), TCP | Microservice architectures (only Req/Rep however) |
GraphQL | 1 (JSON), 2 (custom) | HTTP(s) | Complex data-centric web applications with non-hierarchical data graphs |
gRPC/Protobuf | 1 (binary/custom), 2 (RPC) | HTTP(s), TCP, ? | Microservice architectures (only Req/Rep however) |
JSON-RPC | 1 (JSON), 2 (RPC) | HTTP | Legacy/insecure |
OpenAPI | 1 (JSON), 2 (REST) | HTTP(s) | REST web applications |
SOAP/WSDL | 1 (XML), 2 (RPC) | HTTP | Enterprise system bus-centered enterprise architectures |
WAMP | 1 (JSON or other), 2 (RPC) | Websockets, TCP, POSIX | Real-time web apps, socket-based apps |
XML-RPC | 1 (XML), 2 (RPC) | HTTP | Legacy/insecure |
ZeroMQ | 1 (binary/custom), 2 (RPC) | POSIX sockets, POSIX IPC, TCP, UDP | High throughput, Pub/Subs, IPCs, Microservice architectures |
The requirements for this are:
Much less important for the protocols:
ZeroMQ seems to be the tool of choice for the transport layer, which has to be
combined with a custom RPC API DSL and serialization protocol.
ZeroMQ seems to be the tool of choice here as well.
OpenAPI seems to be the tool of choice.
WAMP seems to be the tool of choice for apps that require live updates
(Websockets).
Another alternative to consider is GraphQL; however, it should be noted that it
usually has poor performance and is not suited for Websocket apps.
Protocol buffers or Apache Thrift serialization can't be used in all of the cases due to:
Original: https://github.com/dr-orlovsky/notes/blob/master/api_design.md
As was discussed during the dev call on 15th Jul 2020, use cases for different forms of tokens (stablecoins, shares) will benefit from having separate schemas. Here I propose to define the following set of schemas for all asset-related cases:
All of these schemas will be proxied by a single asset RGB contract daemon (inside the RGB node) with a single standard API, reducing the complexity of integration from a wallet perspective.
(updated)
I think the best way for RGB versioning is to have an LN-like set of features, with even/odd differentiation, and commit to it in a Schema.
This version will define not the Schema version, but the version of the RGB protocol as a whole, i.e. how the schema data and all smart contract data issued under this schema will be serialized and interpreted. This will allow RGB updates such as the addition of the Simplicity language etc.
With client-side validation we prohibit changing anything in terms of how the protocol works (commitment rules, validation rules etc.) once the contract is created. But RGB as a whole is a set of absolutely unrelated contracts, so as I understand it, while a single contract can't be "upgraded", nothing in a client-validated paradigm prevents having protocol versioning/features for different contracts, so new contracts can be created under RGBv2 for instance (providing Mimblewimble aggregation or whatever).
A dust UTXO is uneconomical to spend, meaning that the value in sats it contains is below the mining fee (at the required sat/vbyte rate) needed to spend it again. In order to spend the coin, it has to be consolidated with at least one other coin that pays for the left-over transaction fee deficit.
IIRC, Bitcoin Core / Knots nodes remove such a coin from their UTXO set in memory, as it is not expected [but still possible] that it will be spent.
However, RGB changes this dynamic.
A seal can be defined on ANY UTXO [even a non-existing / LN output]. Thus, a seal can potentially be defined on a dust UTXO too. This means, that the "real" economical value of spending this coin is in fact higher than only the transfer of the sats, as it also closes the seal and thus transfers ownership of the RGB asset.
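The economics above can be sketched numerically. The input size and feerate below are illustrative assumptions, not consensus values:

```python
# Hypothetical dust check: a coin is uneconomical when the fee required to
# spend it exceeds the sats it carries. 68 vbytes approximates a P2WPKH input.
def is_uneconomical(value_sats: int, feerate_sat_per_vb: float,
                    input_vbytes: int = 68) -> bool:
    return value_sats < feerate_sat_per_vb * input_vbytes

assert is_uneconomical(500, 10)        # 500 sats vs ~680 sats spend cost
assert not is_uneconomical(5000, 10)   # worth spending on its own
```

With an RGB seal defined on the output, this comparison understates the real value of spending it, since the spend also closes the seal and transfers the asset.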
Currently the spec is pretty vague and only says that a script is valid if it can be parsed by Miniscript. We should instead specify which implementation should be used and the exact commit. I think we could still leave it open for now, but we have to pin it down before "v1".
Right now RGB contracts may have a single genesis (non-committed), and the rest of the transitions must be committed to within the bitcoin transaction graph. For decentralized issuance, multiple non-committed issuances are required; this can be implemented by allowing RGB contracts to have multiple (sub)geneses under a particular genesis/schema.
Originally we wanted to support all possible output types; however, it may be improper to promote usage of non-SegWit output types. If some software does not support SegWit, it probably should be restricted from using RGB as well (from a political, not technical, perspective).
There are some possibilities to make asset pruning verifiable; here I try to summarize them.
The proofs can be made with a probabilistically checkable proof procedure, or potentially with bulletproofs, and these proofs can be included as binary state data attached to the prune seal inside the pruning state transition.
The issuer, during the pruning operation, does the usual verification process for the pruned assets (confidential amount verification and anchor verification). This process is then encoded as a Simplicity script with the inputs used at each of its steps. Next, the issuer computes a hash of this script with its data and uses it to construct a probabilistically checkable proof for 1 to 10% of the proof work (or a bulletproof). This part is serialized and supplied with the prune state transition, so any party having this data may verify that the issuer was honest during the pruning process and had not created asset inflation.
Another alternative may be that the issuer adds to the pruning transition signatures of independent auditors confirming the correctness of the pruning operation. These auditors verify the complete pruning process with all source data.
The auditors may be
In the latter case we may even use a future RGB reputation schema to define the set of auditors in a decentralized fashion.
Hi, in order to understand the full scope of RGB, I was parsing through the LNP/BP protocols and the rust-lnpbp library. Below are some of the framing nits and protocol documentation issues I encountered in LNPBP-1. Probably I am missing something, but it would be nice if they could be clarified:
Compute HMAC-SHA256 of the lnbp1_msg and P, named tweaking factor: f = HMAC_SHA256(s, P)
s is not defined here. As per the code, it means the lnbp1_msg. Maybe changing it here would be nice, or it can produce unnecessary confusion.
Same issue in the code documentation here
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/pubkey.rs#L116-L117
Utilization of duplicated protocol tag hash prefix guarantees randomness in the first 64 bytes of the resulting tweaking string s, reducing probability for these bytes to be interpreted as a correct message under one of the previous standards.
The same comment applies for s.
The purpose of the protocol specific tag is to avoid equivalent interpretation of the same preimage between different contexts. I am not seeing how the prefix is guaranteeing randomness. At least no more random than padding the preimage with a constant. Maybe a different framing of the sentence can help?
The "duplicated protocol tag hash" in the above sentence seems to suggest that there is double tagged hash append happening (like Taproot specification). But the protocol only suggests a single tagged hash.
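To make the ambiguity concrete, here is a stdlib Python sketch of the reading suggested by the quoted sentence (an assumption on my side, not the reference implementation): the protocol tag hash is prefixed twice, Taproot-style, and P is used as the HMAC key per f = HMAC_SHA256(s, P):

```python
import hashlib
import hmac

def tweaking_string(protocol_tag: bytes, msg: bytes) -> bytes:
    # "Duplicated protocol tag hash prefix": SHA256(tag) repeated twice,
    # followed by the message, giving 64 fixed bytes at the front of s.
    tag_hash = hashlib.sha256(protocol_tag).digest()
    return tag_hash + tag_hash + msg

def tweaking_factor(pubkey: bytes, protocol_tag: bytes, msg: bytes) -> bytes:
    # f = HMAC_SHA256(s, P), reading P (serialized) as the HMAC key.
    s = tweaking_string(protocol_tag, msg)
    return hmac.new(pubkey, s, hashlib.sha256).digest()

s = tweaking_string(b"LNPBP1", b"msg")
assert s[:32] == s[32:64]  # the duplicated 64-byte prefix
```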
Which brings me to my last query,
As per the comment in https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/pubkey.rs#L136-L137
the message should be pre-tagged with the same protocol-specific hash, which would explain the duplication part of the previous sentence, but LNPBP-1 doesn't specify anywhere that the message needs to be pre-tagged. It seems like an inconsistency between the doc and the implementation.
These are some of the minor issues I have found with LNPBP-1. It's also possible that I am interpreting these sentences wrongly. Nevertheless, I am opening this issue to initiate discussion on them.
In the coming few days I will be parsing through the rest of the specifications and implementations and will report if I encounter anything.
The pruning procedure is a proposed mechanism for truncating the amount of data clients need to store for their asset history (and pass between them during asset transfer). The currently considered procedure for the fungible asset schema allows an issuer to produce a new set of assets outside of a secondary issue process, and other parties should trust that these assets are backed by some previously burned issued assets. @afilini pointed out in #20 that this is actually a "trick" that may allow shadowed secondary issuance by dishonest issuers, and proposed to merge pruning and secondary issuance. Here I propose another workflow for asset history pruning with (eventual) provable protection from pruning misuse.
With the proposed procedure we define two types of seals: epoch and prune. Each issuance (primary and secondary) must have at most one epoch seal defined (with empty/no/void associated state); the absence of an epoch seal will mean that pruning for this issuance is not supported. An epoch seal may be closed over a state transition of a special type "epoch" (epoch transition), which may define a seal for the next epoch plus multiple prune seals, which will work as was planned before.
This will split the pruning process into pruning epochs, such that the amount of assets that was pruned may be tracked: any asset may be pruned within an epoch only once, and all users may track how many assets were pruned out of the issued supply. This will lead to eventual consistency: once all of the issue has been pruned, users will see that no new assets were produced by the issuer.
hello!
the two fields 32-47 and 49-63 use 0 to encode no value. This has a couple of drawbacks:
I think that the encoding would be simpler if N is encoded as N, and INT_MAX(bits) is used to encode UNSET.
I would define the constants
I would redefine the entity Block; On-chain to
First bit set to 0, bits 32-47 set to UNSET_TX_INDEX, bit 48 set to 0, bits 49-63 set to UNSET_IO_INDEX
and the entity Transaction; On-chain
to
First bit set to 0, bit 48 set to 0, bits 49-63 set to UNSET_IO_INDEX
Right now public key commitments (tweaks) under LNPBP-1 and 2 use different tags. However, with key homomorphism and Taproot we will never know whether some key is composed of multiple keys, and we can see single-key commitments as an edge case of multi-key commitments with N=1. So the proposal is to standardize the commitment procedure with an arbitrary number of keys in LNPBP-1, and use LNPBP-2 only for describing how these keys can be deterministically extracted from a given Bitcoin script.
LNPBP-1 says how to tweak (in a secure manner) a single public key; LNPBP-2 is about tweaking scripts, so when multiple public keys are found in a script, we commit to their sum (while still tweaking only one of them, pointed to by the user). The procedures use different tags; however, it seems that this is the same procedure: if we just have a single key, we may still follow it, given the simple fact that the "sum" of one key is the key itself. This will simplify code logic and looks like a more streamlined way of doing things.
Lightning network channel construction already lacks non-P2WSH outputs in its HTLC-success and HTLC-timeout transactions, making it impossible to utilize modern RGB and single-use-seal data for both multi-hop and direct payments/state updates. The specifics of the protocol are that it requires creation of an HTLC-based output and related HTLC-output spending transactions even for direct payments, so support for P2WSH commitments becomes required to operate RGB assets over the Lightning Network.
Moreover, the next changes to the BOLTs assume that commitment transactions will also be left without P2WPKH outputs, since the to_remote output will require a CSV script option in order to fix the existing misalignment of incentives during channel close. Thus, it is required to enable P2WSH pay-to-contract commitments.
This specification proposes a generalized way to make cryptographic commitments based on pay-to-contract-style public key tweaking for any kind of transaction output, namely:
To commit to a given message msg using elliptic-curve-based public key tweaking according to LNPBPS-0001, in a given transaction output a committing party MUST modify each of the public keys P within all bitcoin scripts (scriptPubkey, redeemScript and witnessScript).
For the OP_RETURN P2S variant, in a transaction output containing an OP_RETURN op-code the code must be followed by a 33-byte compressed tweaked public key TP computed with the algorithm described above. In this case the party can use any original public key for the tweaking procedure, which it can disclose later to the parties to which it aims to reveal the commitment.
Why we modify all public keys
It is impossible to introduce a standard deterministic commitment for all possible output types and script variants that can be reliably used without the risk of multiple concurrent commitments being placed into the same output.
Why the legacy P2PK and other non-standard script schemes are supported
The aim of this standard is to be as universal as possible. While P2PK outputs are considered legacy due to potentially poor resistance to quantum computing attacks and an arguably higher bitcoin blockchain footprint, we see no reason to create an exception from the standard for any legacy use case.
Right now only parties owning some RGB contract state (i.e. able to spend a UTXO with some assigned RGB state data, i.e. close a seal over the next state) can update the state with state transitions. While this is a very strong "anarcho-capitalistic" smart contract system, there are cases when independent third parties should be able to modify/interact with a smart contract, for instance "decentralized issuance" (which can be used in bitcoin-backed RGB bitcoin derivatives, where anyone in the world can lock some bitcoins to a pre-defined Miniscript template-based UTXO and produce an RGB asset). However, in order to have these assets interoperable (and issuance decentralized), we need all these actions to happen under a single genesis, whence anyone can extend the smart contract history after genesis with this issuance procedure.
Here I propose how this can be achieved with a simple RGB modification, which adds a lot of new use cases to the core of RGB (like "call a method of RGB smart contract").
First, we introduce that genesis and state transitions, in addition to state assignments with single-use seals, may contain (if the Schema allows) a vector of so-called opened valencies. Each opened valency is a "public extension point" of some Schema-defined type: it is not linked to any UTXO, and anyone can create an RGB node of special structure (a public extension of RGB state) connected to such valencies. Moreover, there is no limit on how many public extensions can connect to a single opened valency type (and such limits can't be enforced).
Public extensions have a structure similar to the genesis and state transition structures, with the following differences:
Like any other type of RGB state history node (genesis and state transitions), they can have new state assignments with seals defined (thus creating new state rather than updating existing state, like done in genesis), metadata and Simplicity scripts; their structure and validation rules are defined by the Schema and Schema-provided Simplicity scripts.
Similar to genesis (and unlike owned transitions), public extensions are not committed into bitcoin transactions, thus being ephemeral until some other owned transition closes one of their seals/updates their state, after which they become committed into the bitcoin transaction graph through that transition. In practice this means that the public can update RGB contract history only if some state owner is willing to accept those updates, including the new state into the future subgraph starting from such an RGB state history node.
The proposed split between burn and replacement procedures in RGB-WG/rgb-node#19 allows a nice additional feature: the ability for the issuer to replace lost assets.
Implementation tracking issue: RGB-WG/rgb-node#69
I created rgb-archive/spec#117 in the old repo to add a reference and link to this repo. Between the misleading name (#9) and the old repo not providing forwarding links, I initially didn't give this repo the attention it deserves.
Single-use seal is a generic concept preventing double-definition or double-action (like double-spend), originally proposed by Peter Todd [1]. This specification defines how single-use seals can be constructed on top of bitcoin blockchain.
A single-use seal is a non-mathematical commitment primitive that allows one to pick the fact after the commitment.
A single-use seal (or seal) is a replicative state (or consensus?) primitive that allows one to define an option for a future commitment (lazy commitment) to some value (which may even be unknown now), such as:
A single-use seal may be used to allow some value (or, with the usage of cryptographic digests, a message) to be defined in the future only once, and it prevents an inconsistent view of historical state. A party which would like to prove the event of the commitment must present a witness of the seal being closed over the message.
[implementation-specific]
The parties sharing the same single-use seal before its creation must agree on a set of global parameters, namely:
A single-use seal is defined as a tuple of parameters S_opened=(txid, vout, commitment_scheme, deterministic_commitment_txout_locator). When the seal is closed, it is closed over a value V (which may be just an integer value or a value of a cryptographic hash function), so S_closed=(S_opened, V).
struct Seal {
    txid: Txid,
    vout: u32,
}
trait Sealer<V> {
    /// Closes the seal over `value` by embedding the commitment into the witness transaction
    fn close(&self, value: V, witness: &mut Transaction);
    /// Verifies that the seal was closed over `value` using blockchain data
    fn verify(&self, value: V, bitcoin_blockchain: BitcoinFullNodeAPI) -> Result<V, Error>;
    // Used by the close function internally:
    fn commitment_applicator(&self, committing_txout: &mut TxOut);
    fn commitment_locator<'a>(&self, witness: &'a Transaction) -> &'a TxOut;
}
1: https://petertodd.org/2017/scalable-single-use-seal-asset-transfer
Use a metadata field to define a minimum division for asset payments inside LN channels (equivalent to msat)
Add to this number a previously-agreed values of s and c, or, if c was not defined, use 0 for c value by default. This will give a commitment-factor x = a + s + c. Since s and c is a 8-bit numbers and a is a 32-bit number, the result MUST BE a 64-bit number, which will prevent any possible number overflows.
to ->
Add to this number the previously-agreed values of s and c (if c was not defined, use 0 as the default value). This will give a commitment-factor x = a + s + c. Since s and c are 8-bit numbers and a is a 32-bit number, as a result max(x) will be a 48-bit number which can be represented in a 64-bit integer without causing integer overflow.
Compute d as d = x mod n. The d will represent a number of transaction output which MUST contain a cryptographic commitment. All other transaction outputs under this protocol MUST NOT contain cryptographic commitments.
Let me know if I am missing something, but as per the implementation here, d is the output index within the transaction, not the number of outputs.
https://github.com/LNP-BP/rust-lnpbp/blob/2f6fee732417aad5c71fcd0120ed0db4b1e61061/src/bp/dbc/tx.rs#L35-L38
A possible restructuring of the statement:
Compute d as d = x mod n. The d will represent the index of the transaction output which MUST contain a cryptographic commitment. All other transaction outputs under this protocol MUST NOT contain any cryptographic commitments.
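The restated rule in miniature (illustrative only; how x is derived from a, s and c is covered by the earlier steps):

```python
def commitment_output_index(x: int, n: int) -> int:
    # d = x mod n: the index of the single transaction output that MUST
    # contain the cryptographic commitment, out of n outputs.
    return x % n

# e.g. a commitment factor x over a 4-output transaction:
assert commitment_output_index(x=10, n=4) == 2
```

Since d = x mod n always lands in [0, n-1], it is well-formed as an output index regardless of how large the commitment factor x is.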
Should we add a right to change the metadata/identity related right?
The current schema for the RGB1 and 2 protocols treats pruning and secondary issuances differently: in order to make a secondary issuance, the issuer would have to spend some "special" seals that would make it obvious to any external observer that a secondary issuance had happened, to prevent the issuer from inflating the supply of an asset undetected.
Pruning, which is essentially a burn-and-then-secondary-issuance, is instead implemented in a different way that doesn't require the issuer to spend the "secondary issuance" seal, probably to avoid "alerting" all the external observers every time pruning occurs.
However, since there's no cryptographic "constraint" that forces the issuer to burn and then re-issue the exact same amount of tokens, the issuer could use a pruning operation to inflate the supply of the asset, with the added bonus that it would be able to claim, and cryptographically prove, that it had never spent the secondary issuance seals.
Looking at RGB, I see much untapped potential for truly private digital identities and reputation infrastructure. RGB allows voluntary disclosure of provable facts. The disclosure can also be selective (and RGB allows to know whether the disclosure was full or selective). Hence, the whole history of somebody’s interaction with the RGB ecosystem can become a collection of cryptographic proofs on various topics of one’s daily life.
Some of my early thoughts on what can be implemented:
So far, loyalty & customer retention programs have been associated with much hassle on both the buyer side (registration & key management) and the seller side (setting up & managing the infrastructure). Now they can work seamlessly without registration, and even include "alliances of shops" without the need to exchange customer data between them. You can forget about GDPR too.
Example claim request: “prove that you had shopped with us or our partners at least once during the last month and receive a cashback”.
Your data collected by e.g. a fitness program will come in a universally transferable format. You can use this provable data with a different fitness program, with a third-party supplemental nutrition program, with your doctor, your personal trainer, your health insurance company, or even sell your data to a sports equipment company, or donate it to health institutions to promote research.
Databases (and especially the whole history of transactions) for good, paying customers of third parties are valuable. They allow targeted promotions (either through communication channels or through discount policies). They show patterns of behaviour that may have value for market research and product design. Certain aggregate data may have scientific or commercial value, like health monitoring or geolocation information. Now all data, generated by various sources, can be bought by interested parties directly from the customer.
Example claim request: “show all transactions that you have made with dentists / computer shops / restaurants”, “show all data collected about your heart rate by the third-party application HeartMonitor”, “show your full transactional history on the XYZ exchange”.
There can be various user-defined standards, taking into account the quantity and the quality of provable past experiences.
Example claim request: “show that you have participated in at least three similar events in the last two years”.
Overall, developed ID and reputation systems on RGB will make the hoarding of data by businesses less relevant. Information will be stored privately with its owners, the sovereign individuals, and will be provided or sold to businesses on an "on demand" basis. The role of government-affiliated certifiers will become limited, since provable data will be generated through normal, daily interactions. For example, the provable fact of participating successfully in a series of hackathons can become a more meaningful proof of "sufficient age" than a passport certificate, and a better proof of "sufficient education" than a certified university degree.
As a part of the security model, the UTXOs for all closed single-use seals for the past bound states must be known, i.e. all future owners of some state data will know the UTXOs of past owners. However, such a leak is not required for the state transfer process: a party transferring some state to the future owner may assign it to a UTXO commitment, which will be disclosed by the new owner to all future owners, but will keep the UTXO hidden from the previous owner, increasing security.
Maybe it makes sense to talk to Chris Stewart from Suredbits, who has just started this repo: https://github.com/discreetlogcontracts/dlcspecs I think DLCs, especially with respect to this issue discreetlogcontracts/dlcspecs#3, are very relevant to RGB stuff.
With support for LNURL and multiple fallback payment methods
It is proposed instead of tweaking all public keys within a scriptPubkey containing multiple public keys (like in P2(W)SH or custom scripts) to utilize homomorphic properties of the public keys and tweak any of them such that the commitment is verified against the sum of all public keys.
This procedure is protected from double-commitment attacks and, compared to the previous version of LNPBP-2 in which all public keys were tweaked, is:
It should be noted that the party performing the commitment must know the public keys of all other parties, so while the commitment can be done only once for a given output (no double commitment), this fact can be hidden from the other participants, which may be desirable in certain cases. When it is undesirable, participants may run a protocol similar to Taproot intermediate key preparation, ensuring that none of them does the key tweaking without informing the others.
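As an illustrative sketch of the commitment factor discussed above (the byte serialization of the key sum `S` and the message layout are assumptions here, not normative), the tweaking factor `HMAC_SHA256(SHA256("LNPBP2") || SHA256(<protocol-specific-tag>) || msg, S)` can be computed as:

```python
import hashlib
import hmac

def lnpbp2_tweaking_factor(protocol_tag: bytes, msg: bytes, sum_pubkey: bytes) -> bytes:
    """Sketch of HMAC_SHA256(SHA256("LNPBP2") || SHA256(tag) || msg, S),
    where sum_pubkey is the (assumed serialized) sum S of all original public keys."""
    data = (hashlib.sha256(b"LNPBP2").digest()
            + hashlib.sha256(protocol_tag).digest()
            + msg)
    return hmac.new(sum_pubkey, data, hashlib.sha256).digest()
```

The resulting 32-byte factor would then be multiplied by `G` and added to the single chosen key `P` to obtain the tweaked key `T`.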
A witness transaction, closing a single-use seal with some assigned RGB state, must contain a commitment to the new state. This is done by tweaking the public key(s) in one of its outputs according to LNPBP-3; the specific output is defined by some constant genesis-based factor (hash of genesis) and the transaction fee. However, this collides with deterministic transaction output ordering (BIP-69): if the transaction contains two or more outputs with the same amount (as in CoinJoin), public key modification may change their order with probability 1 - 1/no_of_same_amount_outputs. Re-ordering afterwards is impossible: untweaking the key in one output and tweaking the other will change the order back, leading to potential deadlocks.
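A minimal sketch of the ordering problem, with outputs simplified to (amount, scriptPubkey) pairs and made-up hex values: BIP-69 sorts outputs by amount, then lexicographically by scriptPubkey, so tweaking a key in one of two equal-amount outputs can flip their positions.

```python
def bip69_sort(outputs):
    # BIP-69 orders outputs by (amount, scriptPubkey), lexicographically.
    return sorted(outputs, key=lambda o: (o[0], o[1]))

# Two equal-amount outputs (as in a CoinJoin); scriptPubkeys are dummies.
outputs = [(1000, bytes.fromhex("0014aa")), (1000, bytes.fromhex("0014bb"))]
# Tweaking the key of the first output changes its scriptPubkey bytes...
tweaked = [(1000, bytes.fromhex("0014cc")), (1000, bytes.fromhex("0014bb"))]

print(bip69_sort(outputs))  # the "aa" output sorts first
print(bip69_sort(tweaked))  # now the "bb" output sorts first: order flipped
```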
I suggest some renaming/rebranding of this project.
I really like the idea of a Bitcoin protocol stack modeled after TCP/IP. It is cool that LNP/BP has a very similar "ring" (sound) to TCP/IP.
But I think it is very confusing and misleading to name the set of specifications LNP/BP when the criteria for proposals is that they not be LNP or BP specifications.
I think the key term we are looking for might be protocol stack or protocol suite. Maybe something like BPStack or BPSuite?
At RGBCon0 in Milano, @renepickhardt proposed using single-asset LN channels, which:
Nevertheless, using single-asset LN channels for RGB has its own trade-offs: the addition of each asset will require a new channel funding transaction, which limits scalability. Here I'd like to discuss ways to mitigate this issue.
Use channel splitting: create a new transaction spending the funding output, which will contain a new version of the funding outputs, one per asset. Don't publish this information on-chain.
Pros:
Use channel factories: the same as above, but instead of splitting within the existing channel we use a channel factory to create a new channel. The pros and cons are exactly as above.
Channel "superposition"/virtualization within the same commitment transaction: nodes operate multiple channels sharing single commitment transaction.
Cons:
Proposed initially by @prdn
The UDP hole punching protocol allows you to bypass firewalls and open port connections from behind them, as in a home environment or behind an ISP's firewall. The protocol is used by many projects; for example, if you have a Raspberry Pi at home and would like to run a bitcoin or any other node on it, you can use this technology to do so.
This is important since RGB and other LNP/BP projects that require P2P communications outside of LN node connectivity will leverage the Lightning Network protocol instead of building their own custom P2P protocol. We are planning to use BOLT-8 and BOLT-1 for the transport, framing, authentication and other layers; it's important to note that they will be used outside of the Lightning Network scope, with different port numbers. In this regard, it will be important to make a self-hosted RGB server used by an RGB wallet accessible in cases where you have a home node on a Raspberry Pi. And that is where UDP hole punching can be useful.
We need to understand more about the technology and whether it can be combined with Tor: are they complementary, or is the problem already solved by Tor itself?
A recent PR in c-lightning by Christian Decker introduced a new way of extending lightning node functionality which may be very beneficial for Layer 3 solutions. More details: ElementsProject/lightning#3315
Issue tracking LNPBP-1 work progress and discussions
This work proposes the concept of a fast and reliable micropayments protocol
("Lightspeed micropayments for Lightning Network") which allows millions of
transactions per channel without any additional network traffic or
per-transaction signature generation. It requires a generalization of the
Lightning Network (such that parties are able to add additional outputs to
the commitment transactions) and RGB.
One of the main design goals for the Lightning Network was an idea of
micropayments: fast repeating payments for small amounts (sometimes <1 satoshi)
between two parties.
However, the current Lightning Network design prevents effective micropayments
due to a number of factors, which we will examine on a model setup where
Party 1 (Client) has to pay Party 2 (Server) for each API call.
The configuration requires two Lightning nodes; we will examine
the simplest case, when there exists a direct channel between them:
+------------+ +------------+
| Client app | <----------------------------> | Server app |
+------------+ +------------+
| |
+-------------+ +-------------+
| Client's LN | <--------------------------> | Server's LN |
+-------------+ +-------------+
Fig. 1. Setup of micropayments on arbitrary server API calls.
A possible payment workflow involves the exchange of `update_add_htlc`, `commitment_signed` and `revoke_and_ack` messages.
So, the procedure takes at least:
2 * (pending_htlc_count * 2 + 2 + 1)
As a result, each API call gets a significant delay required to perform all
network and interprocess communications and signatures. In fact, this renders
the current Lightning Network implementation practically unusable for
high-frequency micropayments: they will require at least an order of magnitude
more time to perform than the original API call without an attached payment.
The Generalized Lightning Network protocol (gLNP) is a
concept for establishing and managing arbitrary
channels (bi-directional, multiparty) with arbitrary extensions to the
transaction structure that are negotiated during channel setup.
With gLNP we may extend the commitment transaction by adding a new "funding"
output to which the Client will allocate some large multiple of the per-API-call
payment aside from the main channel funds. This allocation (Lightspeed funding,
or LSF) will be controlled by a script in the same manner as the initial channel
funding transaction.
In addition to the LSF, the parties agree on and sign another transaction
spending the LSF output, named the micropayment transaction. This transaction
contains multiple equal outputs, one output per payment for a single API call.
Each of these outputs is controlled by a script locked to either a
Client-generated hash preimage or the Client's time-locked private key. It is
important to note that the hash-spending script branch does not require a
signature by a private key and requires only the hash preimage, at this stage
known only to the Client; at the same time, each of the outputs MUST have a
different preimage.
Funding output: Commitment tx:
+-----------------+ +--------------+
| 2-of-2 multisig | <--- | to_local |
+-----------------+ +--------------+
| to_remote | Micropayment tx:
+--------------+ +--------------------------+
| **LSF** | <--- | Hash1-or-pubkey_timelock |
+--------------+ +--------------------------+
| HTLC's | | Hash2-or-pubkey_timelock |
+--------------+ +--------------------------+
| Hash3-or-pubkey_timelock |
+--------------------------+
| Hash4-or-pubkey_timelock |
+--------------------------+
| ... |
+--------------------------+
Fig. 2. Version 1 of Lightspeed-enabled channel structure (based on pure gLNP).
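The client-side setup behind Fig. 2 can be sketched as follows (a non-normative illustration; SHA256 stands in for whatever hash function the actual output scripts would use):

```python
import hashlib
import os

def make_micropayment_slots(n: int):
    # One distinct random preimage per micropayment output; the script of
    # output i releases funds to whoever reveals preimage i (no signature
    # needed), or back to the Client's key after the timelock expires.
    preimages = [os.urandom(32) for _ in range(n)]
    hashes = [hashlib.sha256(p).digest() for p in preimages]
    return preimages, hashes
```

The hashes go into the micropayment transaction outputs; the preimages stay with the Client until spent, one per API call.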
With this setup the Client app will have a list of `N` hash preimages, and the
Server can be aware of the list of channel ids and the corresponding hashes
associated with each client.
For each API call `i < N` the Client adds the `i`-th preimage to the call
parameters (or metadata, like a custom HTTP header). The Server verifies that
the preimage corresponds to a known hash and, if it does, provides the Client
with the requested data.
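On the Server side, the per-call check can be sketched like this (a hypothetical helper, not part of any existing implementation; each preimage is accepted once and then marked as spent):

```python
import hashlib

class PreimageVerifier:
    """Tracks the not-yet-used hashes of one client's micropayment outputs."""

    def __init__(self, known_hashes):
        self.unspent = set(known_hashes)

    def accept(self, preimage: bytes) -> bool:
        # Serve the API call only if the preimage matches a known,
        # not-yet-used hash; consume it so it cannot be replayed.
        h = hashlib.sha256(preimage).digest()
        if h in self.unspent:
            self.unspent.remove(h)
            return True
        return False
```

Presenting the same preimage twice fails, mirroring the fact that each micropayment output can be claimed only once.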
This reduces the computational and network load for each API call:
I.e. the proposed scheme removes all the additional network and
signature-generation load from Lightning micropayments, enabling them in a
high-performance manner.
For all its advantages, the scheme still has some significant drawbacks:
These problems may be solved if payments shift from a transaction-output
satoshi-based model to a sealed-state RGB token-based model.
Let's assume that the Server app owner issues some "service tokens" using RGB,
which have a fixed value (in terms of either its services or satoshis), or
are equivalent in volume to the number of API calls it may serve.
The Client buys on the market the amount of tokens required for performing the
desired number of server API calls and allocates them into the LSF output of
the Lightning channel opened with the Server. (NB: you may think of this as a
redeemable "pre-paid", since unlike most "pre-paid" services, in our case
unspent tokens can always be removed from the channel and sold on the market).
Now, instead of the micropayment transaction spending the LSF into thousands of
outputs, in the RGB-based version the micropayment transaction will have a
single output controlled by a script releasing funds either to the Server's
public key (instantly) or to the Client's public key with some timelock. At the
same time, the Client constructs a client-validated RGB data structure that
defines `N` seals (and `N` can be millions or more) assigning a per-request
amount of tokens to the same single micropayment transaction output. It is
important to note that each of these sealed allocations has some unknown secret
value (stored in RGB metadata), some entropy, that makes it impossible for the
Server to guess the data for the client-validated RGB proof without getting
this secret from the Client.
With each request the Client provides the Server with a single secret value, so
the Server is able to generate the corresponding RGB proof and use a part of
the tokens sealed to the micropayment transaction output.
Funding output: Commitment tx:
+-----------------+ +--------------+
| 2-of-2 multisig | <--- | to_local |
+-----------------+ +--------------+
| to_remote | Micropayment tx:
+--------------+ +--------------------------+
| **LSF** | <--- | 2-of-2+timelock multisig | <-+
+--------------+ +--------------------------+ |
| HTLC's | Single-use
+--------------+ seals
Client-validated RGB data: |
+--------------------------+ |
| Secret1-locked tokens | --+
+--------------------------+ |
| Secret2-locked tokens | --+
+--------------------------+ |
| Secret3-locked tokens | --+
+--------------------------+ |
| Secret4-locked tokens | --+
+--------------------------+ |
| ... | --+
+--------------------------+
Fig. 3. Version 2 of Lightspeed-enabled channel structure (based on gLNP+RGB).
The proposed final schema, utilizing both gLNP and RGB-based tokens, allows fast
and reliable micropayments ("Lightspeed") without counterparty risk within the
payment protocol. It still has the counterparty risk of a central token issuer
withholding the full pre-paid amount; however, this risk can be substantially
reduced in cases when the utility tokens used for API calls are freely tradeable
on the market and/or have a fixed value.
For non-cooperative channel closings the Server has the ability to leave the
Client without any of the remaining tokens; however, even in this case the
Server will not have a direct economic benefit: the Client's tokens which were
not already paid to the Server will simply be lost forever. At the same time,
the Server owner will risk its reputation, so this kind of attack can be used
for an exit scam only and does not introduce additional risk: an exit scam can
be done in a simpler way by the Server seizing all pre-payment funds and ceasing
to provide the API services, which leaves us with the case of non-protocol
counterparty risk already described in the previous paragraph.
Interestingly, the non-cooperative case of the Client's token loss can also be
technically mitigated with more complex designs providing two symmetric outputs
for the micropayment transaction; however, we believe that while this increases
the complexity of the scheme, it does not in fact add to safety, due to the
arguments given above.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Discussing https://github.com/LNP-BP/lnpbps/blob/master/lnpbp-0003.md; please post your comments below
The fifth step of LNPBP-0004 says that:
For each of the slots that remain empty (the slot number is represented by j):
- tweak the public key R with its own hash H(R) j times: Rj = R + j * H(R) * G
- compute a 256-bit bitcoin hash of Rj and serialize it into slot j using the bitcoin-style hash serialization format.
Here `R = entropy * G`.
I think we could replace that with a simple SHA256(entropy || j).
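The suggested simplification could look like this (the 8-byte little-endian serialization of `j` is an assumption for illustration; the issue does not specify one):

```python
import hashlib

def empty_slot_fill(entropy: bytes, j: int) -> bytes:
    # Proposed replacement for the iterative key-tweaking step of LNPBP-4:
    # fill empty slot j with SHA256(entropy || j) directly.
    return hashlib.sha256(entropy + j.to_bytes(8, "little")).digest()
```

This avoids the repeated EC point tweaking while still producing a distinct, deterministic 32-byte filler per slot.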
With 4 invoice formats from LNP-BP/devcalls#25 + compatibility/interoperability with
Pay special attention to design criteria and points in LNP-BP/devcalls#25 (comment)
Witness transaction - a txn that contains the commitment; it is a part of the single-use-seal witness.
RGB uses bitcoin transactions in 2 ways:
1st - to put commitments to what happens with RGB off-chain (a commitment to each transfer, which is also called a 'state transition') into a bitcoin transaction.
2nd - to use a UTXO to allocate some state to, in particular to allocate some assets to this output. When this output is spent, it means that Alice actually did a transfer of that asset, and the txn that spends this output must contain the commitment (meaning it should be a witness txn).
Basically, Alice is using 2 bitcoin transactions: one she needs to assign the asset to, the other being the witness transaction. A key idea from the early days was to combine these 2 transactions into 1, meaning that the witness transaction may contain the output holding the asset that Alice is transferring to Bob; i.e., Alice assigns the asset not to some other existing transaction output, but to an output that will be created by Alice herself.
Raised during the dev call on June 24th, 2020