
brcs's Issues

[DISCUSSION] - "Standard"

BRC ID

0

Discussion

The use of the word "standard" was discussed with the team this week and I'd like to propose that we rephrase some of the overview / intro readme to ensure this is not confusing to others.

The intention was to propose ideas, which can later become standards. Defining them as standards within their own context is fine, as they aspire to be. However, they are not yet standard in any meaningful way (with some obvious exceptions).

The language may put some off from putting forward their own novel ideas. We would do better at encouraging people to come forward and share their thoughts on how things ought to be done if we rephrased to something more like "technical proposals" or "initial thoughts", or, perhaps most appropriately, "Bitcoin Requests for Comment".

Is IPv6 multicast adoption by Internet exchanges and ISPs likely for Bitcoin?

BRC ID

BRC 80

Discussion

This is a commercial challenge, not a technical one.

The Internet has never turned on multicast for interdomain traffic, because there was never a viable business case for it. It has been requested many times over the past 30 years, but the case was never convincing. Until, perhaps, now.

So what will it take to convince them this time?

  1. Is the business case big enough?
  2. Will Bitcoin actually get adopted at scale?
  3. A new IETF RFC, or an update to an existing one, looks necessary for multicast protocol adaptations. Who will pay for this?

Perhaps starting out with unicast transit as it is today, while host and network load is still relatively low, and ramping up over time will create the demand needed to activate the resources and do the convincing.

Isn't this leaving things rather up in the air, though? Leaving this question virtually unattended while forging ahead with Teranode in a vacuum risks having to roll back a hell of a lot of costly hard work at a later stage, due to unintended consequences.

See here for more detail: https://bit.ly/3KjdiV7

Purpose of the standard and the EF Marker?

BRC ID

30

Discussion

Why include the EF Marker?

The fact that a library that does not support EF cannot recognise an EF transaction without this marker is not a valid reason. A well-written library would raise an error on attempting to decode an EF transaction even if it did not have this marker.

In existing Bitcoin there are no type markers in the serialized form of Bitcoin objects. Instead, an INV type is used when requesting data, and a separate "command" is used when sending data between peers. It would be more in keeping with the existing design to extend the INV type with a new type and use a separate P2P command for Extended Transactions.
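For illustration, a minimal sketch of what such an extension could look like, using the familiar inv vector type values from the P2P protocol; the name and value of the new entry are hypothetical, not assigned anywhere:

// Existing inv vector types in the Bitcoin P2P protocol.
enum InvType {
  MSG_TX = 1,
  MSG_BLOCK = 2,
  MSG_FILTERED_BLOCK = 3,
  MSG_CMPCT_BLOCK = 4,
  MSG_EXTENDED_TX = 5, // hypothetical: a new type for Extended Format transactions
}

// A peer that does not understand MSG_EXTENDED_TX simply never requests it,
// so no in-band marker is needed in the serialized transaction itself.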

[DISCUSSION] - BIP 239

BRC ID

30

Discussion

I have just 2 points:

  1. Knowing the previous amount and script is not actually sufficient to validate: you also need to know whether the UTXO is from before the genesis activation, as the rules differ. Since the number of such UTXOs will only decline as time passes, and since I suspect very few of them are actually affected by the validation differences, I suggest that we ignore this. Perhaps a tool that wants to validate should test the post-genesis assumption first and the pre-genesis assumption second (see the sketch after this list).
  2. As written, BIP 239 does not address coinbase transactions. I think for completeness, and to keep implementations behaving consistently, this should be addressed.

Also, one or two example extended format transactions should be provided in hex for testing purposes.
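As a minimal sketch of that fallback order, assuming the host library exposes a per-rule-set verifier for an input (all names here are illustrative, not from any real library):

type RuleSet = 'post-genesis' | 'pre-genesis';
type InputVerifier = (rules: RuleSet) => boolean;

function validateWithFallback(verify: InputVerifier): boolean {
  // Most UTXOs are post-genesis, and the pre-genesis set only shrinks over
  // time, so test the common case first and fall back to the old rules.
  return verify('post-genesis') || verify('pre-genesis');
}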

💡 [IDEA] - Unified, Unambiguous, Transport-Independent TPN Broadcast Protocol

Summary

We need to align the ecosystem on a single, unified, transport-independent protocol for transaction broadcast to the Transaction Processing Network, including proof acquisition, double-spend handling, and clear procedures (such as retrying or dropping) for all parties in case of every possible error.

Currently, the ARC API is the closest we have. But having ANY ambiguity whatsoever when engaging with the Transaction Processing Network is fundamentally problematic.

Example

Sending transactions to the Transaction Processing Network is currently an ambiguous process, with undefined failure modes as well as undefined partial and full success cases. The alignment proposed here would mitigate these issues.
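For illustration only, a hypothetical sketch of what an exhaustive, unambiguous set of broadcast outcomes could look like in TypeScript (the names and statuses are mine, not taken from the ARC API):

// Every possible outcome is an explicit, typed case; a client can
// exhaustively switch over `kind` and is forced to handle each one.
type BroadcastResult =
  | { kind: 'accepted'; txid: string }
  | { kind: 'already-known'; txid: string }
  | { kind: 'double-spend'; txid: string; conflictingTxid: string }
  | { kind: 'rejected'; txid: string; reason: string; retryable: boolean };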

Other information

No response

Relevance to BSV

  • This proposal is relevant to the Bitcoin SV network

[DISCUSSION] - BUMP JSON format consistency, and making it easier for statically typed languages

BRC ID

74

Discussion

In short, update the BUMP format to encode the path as an array of arrays rather than an array of objects, and use an explicit "offset" key rather than using the value itself as a key.

Update this:

path: [
  {
    "20": { hash: '0dc75b4efeeddb95d8ee98ded75d781fcf95d35f9d88f7f1ce54a77a0c7c50fe' },
    "21": { txid: true, hash: '3ecead27a44d013ad1aae40038acbb1883ac9242406808bb4667c15b4f164eac' }
  },
  // ...etc.
]

To this:

path: [
  [
    {
      offset: 20,
      hash: '0dc75b4efeeddb95d8ee98ded75d781fcf95d35f9d88f7f1ce54a77a0c7c50fe'
    },
    {
      offset: 21,
      txid: true,
      hash: '3ecead27a44d013ad1aae40038acbb1883ac9242406808bb4667c15b4f164eac'
    }
  ],
  // ...etc.
]

Thanks to @shruggr for the feedback and suggestion.

Motivation fleshed out:
Using values as keys in JSON makes serialization and deserialization difficult in statically typed programming languages, because those languages rely on compile-time type checking: the shape of a JSON object, including its key names, must be known when the code is compiled.

When values are used as keys, the set of keys is not known until runtime, so the object cannot be mapped onto a fixed struct or class. Deserializers must instead fall back to generic map types, losing type safety, and this ambiguity can produce errors during serialization or deserialization.

For example, if a statically typed language expects a fixed shape for an object but encounters dynamic keys during deserialization, it may throw an error or fail to deserialize the object properly. Similarly, during serialization it may have trouble producing the dynamic keys, leading to incorrect JSON output.

To avoid this, statically typed languages work best when keys are fixed, known names and the variable data lives in values. That way the type of every field can be inferred and validated at compile time, making serialization and deserialization straightforward and reliable.
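As a sketch, the revised shape maps directly onto a static type (field names follow the example above; treat this as illustrative rather than a normative definition):

interface BumpLeaf {
  offset: number;
  hash: string;
  txid?: boolean;
}

type BumpPath = BumpLeaf[][]; // one inner array per level of the merkle tree

const path: BumpPath = [
  [
    { offset: 20, hash: '0dc75b4efeeddb95d8ee98ded75d781fcf95d35f9d88f7f1ce54a77a0c7c50fe' },
    { offset: 21, txid: true, hash: '3ecead27a44d013ad1aae40038acbb1883ac9242406808bb4667c15b4f164eac' }
  ]
];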

[DISCUSSION] - Comments on BUMP

BRC ID

74

Discussion

Thanks for creating this spec; I found the TSC spec quite dissatisfactory, and it seems others did too.

My understanding is that the BUMP format is intended for use when SPV wallets share merkle proofs with each other during transaction negotiation, and also when SPV wallets receive proofs from miners or third-party services. It is also likely a useful format for a wallet to save proofs in its own database for later use. My comments below are with this in mind.

After digesting the spec, my first impression was that there should be no need for the flags parameter if things are approached a little differently. Further, the spec does not cover ambiguities, such as an offset appearing twice in a level, or an unknown flag. The included sample code, and the reference implementation at https://github.com/libsv/go-bc, do not appear to handle some problematic scenarios. I haven't learnt the Go language, so please be kind if I have misunderstood the code!

First, the flags parameter. I cannot understand the distinction between client and non-client txids. A valid BUMP should be considered a proof for all transaction hashes in the first level; the hashes in other levels are obviously not transaction hashes, so why distinguish? Secondly, if the block's transaction count is provided (similarly to the height), then it is always known whether a given hash should be duplicated, removing the need for that flag too. Specifying the tx count makes the "tree height" entry redundant, so perhaps it should replace it? That was my initial idea. I believe in all cases this would make a more compact format (certainly in JSON, and almost certainly in binary too). The block transaction count can be useful metadata for a wallet too.
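To illustrate why the duplication flag is fully determined once the transaction count is known, a small sketch (the function names are mine):

function levelWidth(txCount: number, level: number): number {
  // Number of hashes at a given level of the merkle tree (level 0 = txids).
  return Math.ceil(txCount / 2 ** level);
}

function isDuplicated(offset: number, txCount: number, level: number): boolean {
  // The last hash in a level is paired with itself exactly when that level
  // contains an odd number of hashes, so no per-leaf flag is required.
  const width = levelWidth(txCount, level);
  return width % 2 === 1 && offset === width - 1;
}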

Next, as BUMPs are provided to wallets by untrusted third parties, an implementation should be able to detect malicious or erroneous data; indeed, the entire BUMP data structure should be verifiable. Implementations are helped if the spec indicates how ambiguities and corner cases should be handled (I give a list of these, with my suggested handling, below). The sample code does not appear to handle leaves in the BUMP that are unnecessary for the proof (and are therefore unchecked), or the same offset being present more than once with different hashes.

The spec discusses a merge operation; this is useful for a wallet merging BUMPs for different groups of transactions in the same block. Consider the combinePaths function shown in the spec, merging two BUMPs, A and B. Suppose at least one of them is malicious: it proves what it was supposed to prove, but has ambiguous or extraneous leaves included, so its malicious nature is not initially apparent. I believe that merging the BUMPs as shown, because ambiguous and/or conflicting leaves override each other, is likely to result in a BUMP that no longer correctly proves the full set of transaction hashes that A and B originally proved separately. Further, resolving or detecting this during the merge operation is non-trivial.
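Detecting outright conflicts during a merge is the easy part; here is a sketch of a per-level merge that refuses them rather than letting one BUMP silently override the other (the names are mine, not the spec's combinePaths):

function mergeLevel(a: Map<number, string>, b: Map<number, string>): Map<number, string> {
  // Each map goes from offset to hash for one level of a BUMP being merged.
  const merged = new Map(a);
  for (const [offset, hash] of b) {
    const existing = merged.get(offset);
    if (existing !== undefined && existing !== hash) {
      throw new Error(`conflicting hashes at offset ${offset}`);
    }
    merged.set(offset, hash);
  }
  return merged;
}

The harder part is extraneous leaves that are internally consistent; those can only be caught by recomputing the combined hashes up the tree.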

A final thing to worry about is "phantom" merkle branches, which are possible because of Satoshi's unfortunate decision to duplicate the last hash when there is an odd number in a level. For example, a block with 6 transactions, and a wallet asking for proof of inclusion of the last transaction, can be fooled by someone creating a BUMP for an 8-transaction block with the same first 6 transactions, where tx 7 is a duplicate of tx 5 and tx 8 a duplicate of tx 6. This fake BUMP would give the correct merkle root, and therefore, I believe, be accepted by the sample code. An implementation should be able to detect if it is being lied to like this. The TSC spec discusses this point and how to detect it.
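To make the attack concrete, a self-contained sketch (using Node's crypto module; the leaf values are arbitrary stand-ins for txids) showing that the honest 6-leaf tree and the phantom 8-leaf tree share a root:

import { createHash } from 'crypto';

// Bitcoin's double-SHA256.
function sha256d(data: Buffer): Buffer {
  const once = createHash('sha256').update(data).digest();
  return createHash('sha256').update(once).digest();
}

// Merkle root with Satoshi's rule: an odd-length level duplicates its last hash.
function merkleRoot(hashes: Buffer[]): Buffer {
  if (hashes.length === 1) return hashes[0];
  const parents: Buffer[] = [];
  for (let i = 0; i < hashes.length; i += 2) {
    const left = hashes[i];
    const right = i + 1 < hashes.length ? hashes[i + 1] : left; // duplication
    parents.push(sha256d(Buffer.concat([left, right])));
  }
  return merkleRoot(parents);
}

// Six arbitrary leaves standing in for the txids of an honest 6-tx block.
const txs = Array.from({ length: 6 }, (_, i) => sha256d(Buffer.from([i])));
const honestRoot = merkleRoot(txs);

// Phantom 8-tx block: the same 6 txs, with tx 7 = tx 5 and tx 8 = tx 6.
const fakeRoot = merkleRoot([...txs, txs[4], txs[5]]);

console.log(honestRoot.equals(fakeRoot)); // true: same root, wrong tx count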

Here are scenarios I think a wallet should detect, and my suggested handling:

  1. Offsets out of range in a level.  REJECT
  2. An offset appearing more than once in a level.  REJECT if the hashes differ, ACCEPT redundant duplicates but remove them.
  3. Presence of a phantom merkle branch. REJECT
  4. Leaves in the path that do not contribute to proving any of the tx hashes in the base level. REJECT the bump, unless the leaves are redundant in that they match the combined hash of the two hashes in the level below, in which case ACCEPT them but remove them from the bump.
  5. BUMP path of the wrong depth. REJECT

In summary, I think it is best, and arguably necessary, for an implementation to fully validate the entirety of the data in the BUMP. I do not know whether that is possible with the current spec without a transaction count, but I doubt it. If the transaction count is included, then it is possible to verify everything in the BUMP fairly easily and implement the handling I describe above.

In passing, as the second point is sometimes missed, note that a BUMP only proves inclusion of a tx in a block if:

  1. the merkle root of the proof matches the merkle root of a block header (at the claimed height if it helps lookup)
  2. the tx is a valid bitcoin tx (so the hash isn't arbitrary), which proves the tree is not truncated

That leaves a final problem: how to prove the transaction count? It is proven (if the phantom-branch check is implemented) by including the final transaction of the block in every BUMP. This renders an explicit tx count unnecessary in the format, as it is one plus the largest offset in the first row of the path, saving a few more bytes. However, it adds more hashes to the BUMP when the final transaction is not one that was originally wanted (though likely only for a few levels). Considering the elimination of the flags, I suspect a BUMP in this format is not much different in size, on average, to a BUMP in the spec format (binary and JSON), and will compare increasingly favourably as the number of transactions in the BUMP rises.

To summarise, this is my suggested format - not too different to the current one:

  1. block height (as now), followed by
  2. the path: a list of levels.  Each level: a varint count, followed by (offset, hash) pairs.

with the understanding that the last tx in the block is always included. Then the tx count is determined as 1 plus the maximum offset in the first level of the path, which in turn determines the length of the path.
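A sketch of the suggested format and its derived values (a TypeScript rendering of the layout above; the names are mine):

interface SuggestedBump {
  blockHeight: number; // a varint in the binary encoding
  // One inner array per level, each entry an (offset, hash) pair; the varint
  // count in the binary encoding is just the inner array's length.
  path: Array<Array<{ offset: number; hash: string }>>;
}

// Derived rather than stored: the last tx in the block is always present,
// so the tx count is one plus the largest offset in the first level.
function txCount(bump: SuggestedBump): number {
  return 1 + Math.max(...bump.path[0].map(leaf => leaf.offset));
}

// The tx count in turn fixes the expected number of levels in the path.
function expectedDepth(bump: SuggestedBump): number {
  return Math.ceil(Math.log2(txCount(bump)));
}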

This suggestion has the benefit of being a simpler format, and therefore likely a simpler implementation, while being robust to various attacks. Its main downside is that it's not the existing spec, so it's not implemented in the reference wallet, Arc, or the TypeScript library 😃

I have written a fully-tested implementation in Python in the bitcoinX repo; the entire code is quite tight, at about 270 lines. It includes all the verifications and checks I mention above, and can read and write both binary and JSON. It also includes code to create a BUMP given the tx hashes of a block and a desired subset. The merge operation is also implemented. See https://github.com/kyuupichan/bitcoinX/blob/master/bitcoinx/merkle.py

I will implement the spec as-is so I can compare sizes properly, but Daniel was pressuring me to comment more quickly :)
