Onyx Core

Onyx Core is software designed to operate and connect to highly scalable permissioned blockchain networks conforming to the Onyx Protocol. Each network maintains a cryptographically-secured transaction log, known as a blockchain, which allows participants to define, issue, and transfer digital assets on a multi-asset shared ledger. Digital assets share a common, interoperable format and can represent any units of value that are guaranteed by a trusted issuer — such as currencies, bonds, securities, IOUs, or loyalty points. Each Onyx Core holds a copy of the ledger and independently validates each update, or “block,” while a federation of block signers ensures global consistency of the ledger.

Onyx Core Developer Edition is a free, downloadable version of Onyx Core that is open source and licensed under the AGPL. Individuals and organizations use Onyx Core Developer Edition to learn, experiment, and build prototypes.

Onyx Core Developer Edition can be run locally on Mac, Windows, or Linux to create a new blockchain network, connect to an existing blockchain network, or connect to the public Onyx testnet, operated by Onyx, Microsoft, and Cornell University’s IC3.

For more information about how to use Onyx Core Developer Edition, see the docs: https://Onyx.org/docs

Download

To install Onyx Core Developer Edition on Mac, Windows, or Linux, please visit our downloads page.

Contributing

Onyx has adopted the code of conduct defined by the Contributor Covenant. It can be read in full here. This repository is the canonical source for Onyx Core Developer Edition. Consequently, Onyx engineers actively maintain this repository. If you are interested in contributing to this code base, please read our issue and pull request templates first.

Building from source

Environment

Set the ONYX environment variable, in .profile in your home directory, to point to the root of the Onyx source code repo:

export ONYX=$(go env GOPATH)/src/Onyx

You should also add $ONYX/bin to your path (as well as $(go env GOPATH)/bin, if it isn’t already):

PATH=$(go env GOPATH)/bin:$ONYX/bin:$PATH

You might want to open a new terminal window to pick up the change.

Installation

Clone this repository to $ONYX:

$ git clone https://github.com/Onyx-Protocol/Onyx $ONYX
$ cd $ONYX

You can build Onyx Core using the build-cored-release script. The resulting build allows connections over HTTP, accepts unauthenticated requests from localhost, and supports resetting the Onyx Core.

build-cored-release accepts a Git ref (branch, tag, or commit SHA) from the Onyx repository and an output directory:

$ ./bin/build-cored-release Onyx-core-server-1.2.0 .

This will create two binaries in the current directory:

  • cored: the Onyx Core daemon and API server
  • corectl: control functions for an Onyx Core

Set up the database:

$ createdb core

Start Onyx Core:

$ ./cored

Access the dashboard:

$ open http://localhost:1999/

Run tests:

$ go test $(go list ./... | grep -v vendor)

Build tags

There are five build tags that change the behavior of the resulting binary:

  • reset: allows the core database to be reset through the API
  • localhost_auth: allows unauthenticated requests on the loopback device (localhost)
  • no_mockhsm: disables the MockHSM provided for development
  • http_ok: allows plain HTTP requests
  • init_cluster: automatically creates a single process cluster

The default build process creates a binary with three build tags enabled for a friendlier experience. To build from source with build tags, use the following command:

NOTE: when building from source, make sure to check out a specific tag to build. The main branch is not considered stable and may contain in-progress features or an inconsistent experience.

$ go build -tags 'http_ok localhost_auth init_cluster' Onyx/cmd/cored
$ go build Onyx/cmd/corectl

Developing Onyx Core

Updating the schema with migrations

$ go run cmd/dumpschema/main.go

Dependencies

To add or update a Go dependency at import path x, do the following:

Copy the code from the package's directory to $ONYX/vendor/x. For example, to vendor the package github.com/kr/pretty, run

$ mkdir -p $ONYX/vendor/github.com/kr
$ rm -r $ONYX/vendor/github.com/kr/pretty
$ cp -r $(go list -f {{.Dir}} github.com/kr/pretty) $ONYX/vendor/github.com/kr/pretty
$ rm -rf $ONYX/vendor/github.com/kr/pretty/.git

(Note: don’t put a trailing slash (/) on these paths. It can change the behavior of cp and put the files in the wrong place.)

In your commit message, include the commit hash of the upstream repo for the dependency. (You can find this with git rev-parse HEAD in the upstream repo.) Also, make sure the upstream working tree is clean. (Check with git status.)

License

Onyx Core Developer Edition is licensed under the terms of the GNU Affero General Public License Version 3 (AGPL).

The Onyx Java SDK (/sdk/java) is licensed under the terms of the Apache License Version 2.0.

Issues

Commitment to UTXO state should not be updated every block?

We should probably allow block signers to commit to the UTXO state only every N blocks. This would give some flexibility in optimizing the recalculation process. Nodes would need to keep the diff set of UTXO modifications in between these commitments.

@kr, I wonder if the patricia trie/set recomputation requires some optimization like that?
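
Here is a minimal sketch of the diff-set idea described above, with hypothetical types and field names standing in for the real Onyx data structures:

type OutputID [32]byte

type Output struct {
    // asset ID, amount, control program, etc.
}

type Block struct {
    Height         uint64
    SpentOutputIDs []OutputID
    NewOutputs     map[OutputID]Output
}

// UTXODiff accumulates UTXO changes since the last committed snapshot.
type UTXODiff struct {
    Added   map[OutputID]Output
    Removed map[OutputID]struct{}
}

func (d *UTXODiff) Apply(b *Block) {
    for _, id := range b.SpentOutputIDs {
        if _, ok := d.Added[id]; ok {
            delete(d.Added, id) // created and spent between two commitments
        } else {
            d.Removed[id] = struct{}{}
        }
    }
    for id, out := range b.NewOutputs {
        d.Added[id] = out
    }
}

// Every N blocks the diff would be folded into the patricia tree, a new
// commitment produced, and the diff reset; in between, only the diff is kept.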

Transaction input API response has superfluous control_program field

Currently, the API returns inputs in a transaction with a control_program field. This has never been a documented property of the API, although it appears in response objects returned by the Node JS SDK (which performs no filtering on the client side), and by the Ruby SDK (a historical mistake; see #542).

Internally, the control_program annotation appears to be important for adding account information from the spent output into the spending input's annotations (see issue 1754 in our internal archival repo), but it's also not something we need or necessarily want to reveal in an API response itself.

It would be simple to just leave this in the API, but I would prefer to either 1) document it, or 2) even better, figure out a way to ensure that it's not included in API responses.

Inconsistent errors with large amounts

Trying to build an action with amount

99999999999999999999

results in the following error:

Action 1 amount must be an integer.

However, attempting to build an action with amount

10000000000000000000

returns an action-specific error of:

amount 10000000000000000000 exceeds maximum value 2^63

sdk/java: add details to SubmitResponse class

In the web console each transaction has several detail fields (ID, TIMESTAMP, BLOCK ID, BLOCK HEIGHT, POSITION, LOCAL, REFERENCE DATA); it would be nice to have some of these details in the com.chain.api.Transaction.SubmitResponse object.

core/account: data race in test

Pretty sure that this is just a race specific to the TestCancelReservation test. We alter the protocol.Chain's state tree as test setup, which violates the protocol.Chain's contract.

==================
WARNING: DATA RACE
Write at 0x00c4201421d0 by goroutine 35:
  chain/protocol/patricia.(*Tree).Insert()
      /Users/jackson/src/chain/protocol/patricia/patricia.go:133 +0x3e4
  chain/core/account.TestCancelReservation()
      /Users/jackson/src/chain/core/account/reserve_test.go:52 +0x694
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:657 +0x107

Previous read at 0x00c4201421d0 by goroutine 36:
  chain/protocol/state.Copy()
      /Users/jackson/src/chain/protocol/state/snapshot.go:38 +0x9d
  chain/protocol/memstore.(*MemStore).SaveSnapshot()
      /Users/jackson/src/chain/protocol/memstore/memstore.go:52 +0x8a
  chain/protocol.NewChain.func2()
      /Users/jackson/src/chain/protocol/protocol.go:142 +0x195

Goroutine 35 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:697 +0x543
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:881 +0xaa
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:657 +0x107
  testing.runTests()
      /usr/local/go/src/testing/testing.go:887 +0x4e0
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:822 +0x1c3
  main.main()
      chain/core/account/_test/_testmain.go:70 +0x20f

Goroutine 36 (running) created at:
  chain/protocol.NewChain()
      /Users/jackson/src/chain/protocol/protocol.go:148 +0x51b
  chain/protocol/prottest.NewChainWithStorage()
      /Users/jackson/src/chain/protocol/prottest/block.go:37 +0x14b
  chain/protocol/prottest.NewChain()
      /Users/jackson/src/chain/protocol/prottest/block.go:25 +0x10e
  chain/core/account.TestCancelReservation()
      /Users/jackson/src/chain/core/account/reserve_test.go:26 +0x7e
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:657 +0x107
==================
--- FAIL: TestCancelReservation (0.56s)
	testing.go:610: race detected during execution of test
FAIL

proposal: block processors

[Note: this document was originally written a few months ago, and it's slightly out of date in a few places where it talks about "current" behavior.]

Currently the protocol.Chain calls a set of callbacks whenever a block is applied to the chain. Let's call these block callbacks processors. Committing a block and running the processors happen in lockstep: the next block will be validated once all the processors for the previous block have finished.

There are some problems with the current situation.

Problem 1: Synchronous

It’s possible for a processor to encounter an error. Maybe the database’s disk has filled up, or we have a syntax error in one of the SQL statements. These sorts of errors are different from validation errors. If a block fails validation, we need to reject it and wait for a valid block. The system halts entirely. But once the block is validated, we are obligated to process it. If a processor fails for any reason, it’s our fault, and we need to alert a human and fix the error, so we can resume processing.

Currently, we handle processor failure (when we handle it at all) by crashing the process, meaning we stop everything, even validating new blocks and serving non-blockchain requests such as “list account managers”. This is bad for availability.

Solution 1a: Async

While a processor is failing, we can certainly continue to validate new blocks, as long as we keep track of which ones haven’t yet been processed. The system as a whole is already asynchronous, so this won’t change our basic assumptions. It will allow us to provide partial availability (validating blocks, serving requests) even while processors are failing.

This means we have, notionally, two pointers: one pointing to the last block that was committed and another one pointing to the last block that was processed. The processor pointer necessarily trails the commit pointer. They can be equal, but processors always run after a block is committed. (For crash recovery, these pointers also need to be put somewhere persistent, but for our purposes here, that is an implementation detail.)

The processor pointer also serves a second function: it “pins” all subsequent blocks to persistent storage. Processing is the last step in the pipeline for ingesting each block. Once processing has finished, that block can be released—it’s okay to prune.

Solution 1b: Concurrent

We can take the idea of partial availability further. We’ve decoupled failure of validation from failure of the processors, and we can also decouple failure of one processor from another. Giving each processor a name and its own pointer lets us continue running most of them even when one has failed. This implies they will run concurrently.

With multiple processor pointers, it’s safe to prune the min.

Solution 1c: Distributed Concurrency

The core leader is responsible for launching processors but shouldn't have sole responsibility for executing them. If other servers in the same core have cycles to spare, the leader should be able to farm out the execution of a processor to another server.

Problem 2: Transitions

Imagine a Chain Core with all optional features turned off: no account manager, no issuer, no explorer, no long-term block archiving. All it does is ingest blocks and validate them. It’s been running this way for months. Now let’s say the operator wants to turn on all services and optional features. In theory, is this easy? Is it even possible?

For some services, no problem. Creating a new account manager is easy. Generate an xpriv and begin indexing UTXOs for known public key hashes. By construction, the set of UTXOs held by the new account manager is empty at the time it’s created.

For other services, impossible. Indexing historical UTXOs requires knowing the history. In this example, the history has been thrown away.

We don’t have a coherent way to handle these transitions, or to express which ones are safe and which aren’t.

Solution 2a: DB Config

The Chain Core shouldn’t use environment variables to control which features and services are enabled. Instead, it should store the configuration in database tables and mediate access with logic that can prevent unsafe configurations and make safe transitions more convenient.

If block archiving is enabled, then essentially any service can safely be enabled (even though it might take a long time to catch up). If block archiving is disabled, it can in principle be enabled if at least one other node on the network has a complete archive. Enabling block archiving will cause the system to begin fetching all blocks from other nodes, and then other features can be enabled.

There are even some features that don’t need to start from the very beginning, but do need at least a complete copy of the current state. Historical UTXOs is one such feature. We can support this style of transition using the same principles.

Solution 2b: Pull (not Push)

Finally, considering all these things, I think callbacks are inside out. It might be better instead to have processors request blocks from the protocol.Chain and explicitly update pointers with GetBlock(height), Pin(pointerName, height), and Release(pointerName, height) calls.

Then, we have a somewhat more straightforward path to enable a feature F that requires indexing the entire blockchain. The first step is to pin block 1 (which guarantees that all subsequent blocks are stored locally). If this operation fails, it’s not safe to enable F, and block archiving needs to be turned on first. Once that’s done, F can proceed at its own pace, fetching blocks and processing them until it’s all caught up. (It’s even safe to turn block archiving off again, any time after F has been enabled, because it’s pinned the necessary blocks. In that case, the blocks will be automatically discarded as they’re no longer needed.)

A similar process applies if the feature can start from a more recent state snapshot.
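
A minimal sketch of the pull-style processor loop described above; BlockSource, runProcessor, and the placeholder Block type are hypothetical names, not the actual protocol.Chain API:

import "context"

type Block struct{ Height uint64 } // placeholder for the real block type

type BlockSource interface {
    GetBlock(ctx context.Context, height uint64) (*Block, error)   // blocks until the given height exists
    Pin(ctx context.Context, name string, height uint64) error     // keep blocks >= height in local storage
    Release(ctx context.Context, name string, height uint64) error // allow blocks < height to be pruned
}

func runProcessor(ctx context.Context, name string, src BlockSource, start uint64, process func(*Block) error) error {
    // Pinning the starting height fails if those blocks are no longer stored,
    // which is exactly the "not safe to enable this feature yet" signal.
    if err := src.Pin(ctx, name, start); err != nil {
        return err
    }
    for h := start; ; h++ {
        b, err := src.GetBlock(ctx, h)
        if err != nil {
            return err
        }
        if err := process(b); err != nil {
            return err // this processor stalls; other processors keep running
        }
        if err := src.Release(ctx, name, h); err != nil {
            return err // the pin advances; earlier blocks may now be pruned
        }
    }
}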

Need process for dealing with SDK->API breaking changes

We want to commit changes to the monorepo such that all affected packages can be updated simultaneously. Typically, this means updates to the API are accompanied by corresponding changes in the SDK. However, with commits such as these, it may be impossible to maintain backward compatibility in both of the following situations:

  • pre-update SDK -> post-update cored
  • post-update SDK -> pre-update cored

This is best explained by example: #306 lays the groundwork for an explicit "retire" action, which is anticipated by various experimental features (entry-based tx structures, etc.). It adds an explicit "retire" action to the API and updates the SDKs accordingly. The change to the API is backward compatible with respect to older versions of the SDK. The change to the SDK is not, however, backward compatible with respect to older versions of the API; it requires the corresponding update to the API.

Unfortunately, the monorepo doesn't live in a vacuum: we have a versioning policy (nascent, but a policy nonetheless) stating that version x.y.foo of cored should be compatible with version x.y.bar of the SDK. If we adhere to this policy, #306 would motivate an uptick of the minor version. But minor version updates are a grey area: on more than one occasion, we have attached extra political and marketing baggage to the prospect of a minor version update, with the implication that minor version updates cannot be performed casually.

Given the above, minor engineering improvements that happen to break backward compatibility between packages (in some direction, if not bi-directionally) now produce a conundrum: new releases are blocked until we deem the main branch worthy of a new minor version. If build version updates are always made on recent commits on the main branch, that means we are also blocked on releasing build version updates (i.e., those that have no backward-compatibility issues whatsoever), simply because they arrived on main after a breaking change.

Some initial ideas that could break the logjam here:

  • Don't commit changes to different packages simultaneously if total backward-compatibility is broken, AND we are not ready for a minor version uptick. This moves away from the monorepo philosophy of updating everything at once, and requires us to have some discipline in remembering to merge deferred updates once a minor version uptick rolls around.
  • Relax the version scheme requirement of full-backward compatibility between packages at identical minor versions. For example, version x.y.foo of cored must work with any version x.y.bar of the SDK, or vice versa, but not both. This puts some cognitive load on external developers; we can't reasonably demand that devs remember our version conventions, so we'd have to be super explicit about version requirements in release notes/changelogs. For example, any arbitrary release of the SDK may contain the note "requires version x.y.foo of cored". There is a burden on external devs to pay very close attention to release notes.
  • Use branches for individual package releases, and backport small updates from main if it is not feasible to uptick the minor version and the updates occur after a change that breaks backward compatibility for the package in question. This requires some discipline for the branch maintainers to be very aware of the current state of each branch, and to be aware of which features from main ought to be included, and which should be deferred.
  • Let's just be casual about minor version upticks, and do it whenever we want. This puts the burden on partner leads (or whomever) to communicate this effectively to external devs. There is at least some precedent for external dev teams, in particular of the large enterprise ilk, being nervous (with good reason or not) about minor version instability.

At first glance, I can't say I'm in love with any of the above, so I'm hoping someone with more imagination might have some insight. Thoughts welcome!

How to add signers in a running chain?

How to add signers in a running chain and update the generator's quorum property?

The corectl config and config-generator commands can only initialize the chain; are there any tools that can do this?

core/leader: lead func data race

There's a race condition when a cored is leader, gets demoted and becomes leader again. leader.Run will spawn several goroutines when cored becomes leader. These goroutines are expected to exit whenever the context is cancelled. If for whatever reason they don't exit immediately (maybe they're stuck in a slow DB call), it's possible for the old leadership and the new leadership goroutines to be running concurrently.

For most of the leadership goroutines, that's okay, but some rely on the fact that only one leadership goroutine should be running when they access data structures. For example, core/generator.Generator.Generate will read/write the latestBlock and latestSnapshot fields without using a mutex. This is okay as long as there's only one Generator goroutine running at a time.

We can either:

  1. fix each leadership goroutine to use mutexes instead of relying on it being the only goroutine of its kind
  2. when demoted, use a sync.WaitGroup and wait for all of the leadership goroutines to recognize that they've been cancelled and exit before moving on and trying to become leader again (see the sketch below)
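
A minimal sketch of option 2, using hypothetical wrapper names; the real leader.Run would decide which goroutines to launch:

import (
    "context"
    "sync"
)

type leadership struct {
    cancel context.CancelFunc
    wg     sync.WaitGroup
}

// start launches the leadership goroutines under a cancellable context.
func (l *leadership) start(ctx context.Context, fns ...func(context.Context)) {
    ctx, l.cancel = context.WithCancel(ctx)
    for _, fn := range fns {
        fn := fn
        l.wg.Add(1)
        go func() {
            defer l.wg.Done()
            fn(ctx) // e.g. the generator loop; must return once ctx is done
        }()
    }
}

// stop is called on demotion: cancel, then wait for every leadership
// goroutine to exit before this cored campaigns for leadership again.
func (l *leadership) stop() {
    l.cancel()
    l.wg.Wait()
}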

/create-transaction-feed returns 500 error on null client_token param

Sending a null or undefined client_token param to the /create-transaction-feed endpoint will yield the following:

app=cored ... at=http.go:51 t=2016-11-02T23:07:29.398290512Z
status=500 chaincode=CH000 path=/create-transaction-feed
error="pq: null value in column \"client_token\" violates not-null constraint"

This is the client's problem, not the server's, so CH000 isn't quite right here. The endpoint should either tolerate a blank idempotency token (recommended), or return a 400-class error.
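
A minimal sketch of the recommended behavior, as a hypothetical helper rather than the actual cored handler: a missing client_token is treated as "no idempotency requested" by generating a token server-side, so the insert never trips the NOT NULL constraint (returning a 400-class error instead would also be acceptable):

import (
    "crypto/rand"
    "encoding/hex"
)

// normalizeClientToken returns the supplied token, or a random one when the
// client sent null/"". The request is then simply non-idempotent.
func normalizeClientToken(tok *string) (string, error) {
    if tok != nil && *tok != "" {
        return *tok, nil
    }
    var b [16]byte
    if _, err := rand.Read(b[:]); err != nil {
        return "", err
    }
    return hex.EncodeToString(b[:]), nil
}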

[Java SDK] Upload SDK to Maven Central

Please upload the Java SDK to Maven Central. It would make it much easier to integrate the SDK in a project. At the moment it is possible to run the application locally by using a dependency with scope system. However, this does not include the jar in the build, and it is cumbersome to set up a local repository just for the chain SDK.

Signing optimization: group inputs by root xpub

Debugging an issue earlier, we encountered a transaction with very many inputs - "dust," or numerous small utxos, that had accumulated in an account, being assembled into one big spend. Each of those inputs required a signature from the same root xprv, but with different derivation paths. Rather than a separate roundtrip to the HSM for each input, we could batch HSM requests on a per-xprv basis.
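
A minimal sketch of the batching idea, with hypothetical types; hsmSignBatch stands in for whatever the real per-xpub HSM call would look like:

type SigningInstruction struct {
    RootXPub string
    Path     [][]byte
    Msg      []byte
}

// signAll groups signing instructions by root xpub and makes one HSM round
// trip per xpub instead of one per input.
func signAll(hsmSignBatch func(xpub string, batch []SigningInstruction) ([][]byte, error), ins []SigningInstruction) ([][]byte, error) {
    byXPub := make(map[string][]int) // root xpub -> indexes into ins
    for i, in := range ins {
        byXPub[in.RootXPub] = append(byXPub[in.RootXPub], i)
    }
    sigs := make([][]byte, len(ins))
    for xpub, idxs := range byXPub {
        batch := make([]SigningInstruction, len(idxs))
        for j, i := range idxs {
            batch[j] = ins[i]
        }
        out, err := hsmSignBatch(xpub, batch)
        if err != nil {
            return nil, err
        }
        for j, i := range idxs {
            sigs[i] = out[j]
        }
    }
    return sigs, nil
}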

Unnecessary critical dependency on stability of JSON encoding function

I noticed there's a consensus-critical dependency on the stability of the JSON encoding function serializeAssetDef: https://github.com/chain/chain/blob/main/core/asset/asset.go#L325-L333

The DB stores the definition as a map, and when one issues, we rely on this function to always produce the same JSON encoding that matched the original one when the asset ID was first defined. Till the end of time.

I don't think it's the source of the bug that Chris experienced (see #382), but it will bite us in the future when Go 2.3 introduces a slightly different JSON encoding and half the network produces different asset IDs.

We should store the binary string representing asset def and decode from JSON when needed, not the other way around.
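
A minimal sketch of that direction, with hypothetical names: the bytes the asset ID committed to are stored verbatim, and everything else is derived from them rather than re-encoded:

import (
    "encoding/json"

    "golang.org/x/crypto/sha3"
)

// AssetDefinition keeps the original serialized definition. These are the
// bytes the asset ID commits to, so they are stored in the DB verbatim.
type AssetDefinition struct {
    Raw []byte
}

// Map decodes the stored bytes only when a map view is needed (display,
// annotation, queries); it never re-encodes.
func (d *AssetDefinition) Map() (map[string]interface{}, error) {
    var m map[string]interface{}
    err := json.Unmarshal(d.Raw, &m)
    return m, err
}

// Hash is always recomputed from the stored bytes, so a change in Go's JSON
// encoder can never change the asset ID.
func (d *AssetDefinition) Hash() [32]byte {
    return sha3.Sum256(d.Raw)
}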

1.0 to 1.1 user-facing updates

This is a preliminary roadmap of external feature updates for the upcoming 1.1 release.

  • Compatibility
  • Receivers
  • Output ID
  • Network version change

Compatibility

Chain Core 1.1.x is compatible with client applications written for Chain Core 1.0.x, but you should review the changes in this document and make sure to migrate away from deprecated features, which are likely to be removed in 1.2.x.

Due to changes in the network-level protocol, old blockchain data created on Chain Core 1.0.x is not compatible with Chain Core 1.1.x. When upgrading to 1.1.x, you'll need to re-create your assets and accounts.

Receivers

Chain 1.1.x introduces the concept of a receiver, a cross-core payment primitive that supersedes the Chain 1.0.x pattern of creating and paying to control programs. Control programs still exist in the Chain protocol, but are no longer used directly to facilitate cross-core payments.

A receiver wraps a control program with other pieces of payment-related metadata, such as expiration dates. Receivers provide the basis for future payment features, such as the transfer of blinding factors for encrypted outputs, as well as off-chain proof of payment via X.509 certificates or some other crypto-based authentication scheme.

Initially, receivers consist of a control program and an expiration date. Transactions that pay to a receiver after the expiration date may not be tracked by Chain Core, and application logic should regard such payments as invalid. As long as both the payer and payee do not tamper with receiver objects, the Chain Core API will ensure that transactions that pay to expired receivers will fail to validate.

Creating receivers

The create-receiver API call supersedes and deprecates the create-control-program API call.

Deprecated (1.0.x)
// Java
ControlProgram controlProgram = new ControlProgram.Builder()
  .controlWithAccountByAlias("alice")
  .create(client);

// Node
controlProgramPromise = client.accounts.createControlProgram({
  accountAlias: 'alice'
})

# Ruby
cp = client.accounts.create_control_program(
  account_alias: 'alice'
)

Creating control programs in this fashion is deprecated.

New (1.1.x)

You can create receivers with an expiration time, which defaults to 30 days into the future.

// Java
Receiver receiver = new Account.ReceiverBuilder()
  .setAccountAlias("alice")
  .setExpiresAt("2017-01-01T00:00:00Z")
  .create(client);

// Node
receiverPromise = client.accounts.createReceiver({
  accountAlias: 'alice',
  expiresAt: '2017-01-01T00:00:00Z'
})

# Ruby
receiver = client.accounts.create_receiver(
  account_alias: 'alice',
  expires_at: '2017-01-01T00:00:00Z'
)

Using receivers in transactions

The control-with-receiver transaction builder action supersedes and deprecates the control-with-control-program action.

Deprecated (1.0.x)
// Java
Transaction.Template template = new Transaction.Builder()
  .addAction(
    new Transaction.Action.ControlWithProgram()
      .setControlProgram(controlProgram.controlProgram)
      .setAssetAlias("gold")
      .setAmount(1)
  ).addAction(
    ...
  ).build(client);

// Node
templatePromise = client.transactions.build(builder => {
  builder.controlWithProgram({
    controlProgram: controlProgram.controlProgram,
    assetAlias: 'gold',
    amount: 1
  })
  ...
})

// Ruby
template = client.transactions.build do |builder|
  builder.control_with_program(
    control_program: control_program.control_program,
    asset_alias: 'gold',
    amount: 1
  )
  ...
end

New (1.1.x)
// Java
Transaction.Template template = new Transaction.Builder()
  .addAction(
    new Transaction.Action.ControlWithReceiver()
      .setReceiver(receiver)
      .setAssetAlias("gold")
      .setAmount(1)
  ).addAction(
    ...
  ).build(client);

// Node
templatePromise = client.transactions.build(builder => {
  builder.controlWithReceiver({
    receiver: receiver,
    assetAlias: 'gold',
    amount: 1
  })
  ...
})

// Ruby
template = client.transactions.build do |builder|
  builder.control_with_receiver(
    receiver: receiver,
    asset_alias: 'gold',
    amount: 1
  )
  ...
end

Output ID

In Chain Core 1.0.x, transaction inputs use a compound value called spent_output (consisting of a transaction ID and position) to refer to the output consumed by a particular input. Chain 1.1.x deprecates this scheme in favor of a single value, output_id.

Updates to data structures

Transaction outputs and unspent outputs

Transaction output objects and unspent outputs now have an id property, which is unique for that output across the history of the blockchain.

// Java
Transaction tx;
UnspentOutput utxo;
System.out.println(tx.outputs.get(0).id);
System.out.println(utxo.id);

// Node
console.log(tx.outputs[0].id)
console.log(utxo.id)

// Ruby
puts tx.outputs.first.id
puts utxo.id

Transaction inputs

Transaction inputs now reference a spent output ID. The spent output property is deprecated.

// Java
Transaction tx;
System.out.println(tx.inputs.get(0).spentOutputId);

// Node
console.log(tx.inputs[0].spentOutputId)

// Ruby
puts tx.inputs.first.spent_output_id

Spending unspent outputs in transactions

The spend-account-unspent-output transaction builder action now accepts an output ID parameter. It still accepts a pair of transaction ID and position, but this usage pattern is deprecated.

Deprecated (1.0.x)
// Java
Transaction.Template template = new Transaction.Builder()
  .addAction(
    new Transaction.Action.SpendAccountUnspentOutput()
      .setTransactionId("abc123")
      .setPosition(0)
  ).addAction(
    ...
  ).build(client);

// Node
templatePromise = client.transactions.build(builder => {
  builder.spendAccountUnspentOutput({
    transactionId: 'abc123',
    position: 0
  })
  ...
})

// Ruby
template = client.transactions.build do |builder|
  builder.spend_account_unspent_output(
    transaction_id: 'abc123',
    position: 0
  )
  ...
end

New (1.1.x)
// Java
Transaction.Template template = new Transaction.Builder()
  .addAction(
    new Transaction.Action.SpendAccountUnspentOutput()
      .setOutputId("xyz789")
  ).addAction(
    ...
  ).build(client);

// Node
templatePromise = client.transactions.build(builder => {
  builder.spendAccountUnspentOutput({
    outputId: 'xyz789'
  })
  ...
})

// Ruby
template = client.transactions.build do |builder|
  builder.spend_account_unspent_output(
    output_id: 'xyz789'
  )
  ...
end

Querying previous transactions

To retrieve transactions that were partially consumed by a given transaction input, you can query against a specific output ID.

Deprecated (1.0.x)
// Java
Transaction.Items results = new Transaction.QueryBuilder()
  .setFilter("id=$1")
  .setFilterParameter(spendingTx.inputs.get(0).spentOutput.transactionId)
  .execute(client);

// Node
client.transactions.queryAll({
  filter: 'id=$1',
  filterParameters: [spendingTx.inputs[0].spentOutput.transactionId]
}, (tx, next, done, fail) => {
  ...
})

// Ruby
client.transactions.query(
  filter: 'id=$1',
  filter_parameters: [spending_tx.inputs.first.spent_output.transaction_id]
) do |tx|
  ...
end

New (1.1.x)
// Java
Transaction.Items results = new Transaction.QueryBuilder()
  .setFilter("outputs(id=$1)")
  .setFilterParameter(spendingTx.inputs.get(0).spentOutputId)
  .execute(client);

// Node
client.transactions.queryAll({
  filter: 'outputs(id=$1)',
  filterParameters: [spendingTx.inputs[0].spentOutputId]
}, (tx, next, done, fail) => {
  ...
})

// Ruby
client.transactions.query(
  filter: 'outputs(id=$1)',
  filter_parameters: [spending_tx.inputs.first.spent_output_id]
) do |tx|
  ...
end

Network version change

The network version will increment to 2 (or 3, depending on the fate of the intermediate 1.1rc release), due to various updates to low-level data structures and serialization/hashing schemes.

core: nil pointer dereference in /build-transaction

I noticed this error when failing over between two cored processes with high transaction throughput. Shortly after the secondary becomes leader, these nil pointer dereferences happen. A little while later, all build-transaction calls succeed. It doesn't always happen on a failover. It seems like it's probably a race condition surrounding some leader-only state (maybe the state tree?).

I wish we reported a stack trace of where the panic was.

app=cored buildtag=dev processID=chain-mba.local-28703-d06d338b073853927021 reqid=c9adf2bf2996e67cc980 at=http.go:52 t=2017-02-08T01:33:48.007350845Z subreqid=782d7ec96a950d4a654f status=500 chaincode=CH000 path=/build-transaction error="runtime error: invalid memory address or nil pointer dereference"
app=cored buildtag=dev processID=chain-mba.local-28703-d06d338b073853927021 reqid=f47f7464280f03e67a68 at=http.go:52 t=2017-02-08T01:33:48.014702926Z subreqid=3d069c6bd5ea0fdd782e status=500 chaincode=CH000 path=/build-transaction error="runtime error: invalid memory address or nil pointer dereference"

bin/md2html does not run to completion when node_modules is present

md2html fails upon encountering node_modules directories in the documentation source. I haven't pinned down the precise reason in the tool source code, but this would be a nice thing to clean up eventually.

To repro, you can add a valid node_modules directory (from npm install) somewhere in the documentation path; core/examples/node is a sensible spot, since the examples cannot be run without access to the Chain SDK.

Ideal behavior would be to completely ignore directories named "node_modules" (and maybe other things, a la gitignore); their contents would be neither parsed nor copied to the destination directory.

sdk/java: review exception usage

Environment

  1. latest
  2. Java latest from git
  3. any

Most of the exceptions used in the Java SDK are masked by ChainException; it would be better to expose some of the exception "flavors" so that different situations can be handled differently.

E.g.: Transaction.Builder.build throws ChainException, but it's just a mask for: JsonParseException, BadURLException, JSONException, HTTPException, ConnectivityException, APIException.

I'd suggest deleting ChainException and using proper exception handling.

Code to review:

  • Transaction.Builder.build(Client)
  • BatchResponse.BatchResponse(Response, Gson, Type, Type)
  • Client (various methods)

Spec deviation: asset version 1 output loses suffix data during serialization/hashing

Output commitments are extensible for all asset versions. It's only tx v1 that disallows use of the extra space, but tx v2 might allow it even for existing asset versions. Therefore the following code looks incorrect. When we read an AV1 output, we should remember the suffix data and enforce that it is the empty string only if tx version == 1; otherwise we need to keep it around in the data structure. Same for AV2, etc.: we need to read the full data and keep it around properly.

Immutable snippet: https://github.com/chain/chain/blob/359827001fc222136648cd34158f80cbbd19c03c/protocol/bc/txoutput.go#L119-L123

Snippet on main: https://github.com/chain/chain/blob/main/protocol/bc/txoutput.go#L119-L123
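
A minimal sketch of the intended reading behavior; the commitment layout and field names below are illustrative only, not the spec's actual serialization:

import (
    "bytes"
    "errors"
    "io"
)

type OutputCommitment struct {
    AssetID        [32]byte
    Amount         uint64
    VMVersion      uint64
    ControlProgram []byte
    Suffix         []byte // extension data from future versions; must round-trip
}

func readOutputCommitment(raw []byte, txVersion uint64) (*OutputCommitment, error) {
    r := bytes.NewReader(raw)
    var oc OutputCommitment
    // ... read the known fields above from r (elided) ...
    suffix, err := io.ReadAll(r) // whatever bytes remain after the known fields
    if err != nil {
        return nil, err
    }
    oc.Suffix = suffix
    if txVersion == 1 && len(oc.Suffix) > 0 {
        return nil, errors.New("unrecognized data in output commitment for tx v1")
    }
    return &oc, nil // Suffix is kept and re-serialized as-is for other versions
}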

protocol: blockchain.Writer interface to encapsulate serialization flags

@bobg, I'm looking at this rebase conflict where I added serialization flags and you extracted the WriteExtensibleString helper. The problem is that WriteExtensibleString expects a func that takes only a writer. Adding extra params such as serialization flags feels imprudent.

This makes me think we should have a blockchain.Writer interface that implements io.Writer and has a serializationFlags() method so we can access the flags when writing txs and blocks. Not sure we need TxWriter and BlockWriter, but we can add those as "subtypes" of blockchain.Writer with tx- or block-specific functionality if needed later.

WDYT? /cc @kr

// assumes w has sticky errors
func (t *TxInput) writeTo(w io.Writer, serflags uint8) {
   blockchain.WriteVarint63(w, t.AssetVersion) // TODO(bobg): check and return error
<<<<<<< f0c469ed8fa7e0af92dec7441d42dca5e4199826
   blockchain.WriteExtensibleString(w, t.WriteInputCommitment)
=======
   buf := bufpool.Get()
   defer bufpool.Put(buf)
   t.WriteInputCommitment(buf, serflags)
   blockchain.WriteVarstr31(w, buf.Bytes())
>>>>>>> fixing up outputids and unspentids
   blockchain.WriteVarstr31(w, t.ReferenceData)
   if serflags&SerWitness != 0 {
       blockchain.WriteExtensibleString(w, t.writeInputWitness)
   }
}

<<<<<<< f0c469ed8fa7e0af92dec7441d42dca5e4199826
func (t *TxInput) WriteInputCommitment(w io.Writer) error {
=======
func (t *TxInput) WriteInputCommitment(w io.Writer, serflags uint8) {
>>>>>>> fixing up outputids and unspentids
   if t.AssetVersion == 1 {
       switch inp := t.TypedInput.(type) {
       case *IssuanceInput:
           _, err := w.Write([]byte{0}) // issuance type
           if err != nil {
               return err
           }
           _, err = blockchain.WriteVarstr31(w, inp.Nonce)
           if err != nil {
               return err
           }
           assetID := t.AssetID()
           _, err = w.Write(assetID[:])
           if err != nil {
               return err
           }
           _, err = blockchain.WriteVarint63(w, inp.Amount)
           return err

       case *SpendInput:
<<<<<<< f0c469ed8fa7e0af92dec7441d42dca5e4199826
           _, err := w.Write([]byte{1}) // spend type
           if err != nil {
               return err
           }
           _, err = inp.OutputID.WriteTo(w)
           if err != nil {
               return err
           }
           err = inp.OutputCommitment.writeTo(w, t.AssetVersion)
           return err
=======
           w.Write([]byte{1}) // spend type
           inp.OutputID.WriteTo(w)
           if serflags&SerPrevout != 0 {
               inp.OutputCommitment.writeTo(w, t.AssetVersion)
           } else {
               prevouthash := inp.OutputCommitment.Hash(t.AssetVersion)
               w.Write(prevouthash[:])
           }
>>>>>>> fixing up outputids and unspentids
       }
   }
   return nil
}

Spec deviation: input writes prevout unconditionally for all serialization flags

TxInput always writes the entire prevoutput without checking the serialization flags: https://github.com/chain/chain/blob/9731c85c3473d9ededb00e63c530db8b8c16a64f/protocol/bc/txinput.go#L325

  • Per current rules, it should omit the prevout entirely in the context of txid serialization.
  • Per PR #270's rules it should switch between writing the prevout as-is and writing its hash if we are computing the txid.

This issue will most likely be resolved by PR #270.

See also discussion here: #270 (comment)

Add decimal support to the amount field

Although there are workarounds, it would be very helpful to have decimal support for the amount field in transactions. For example, if my asset is a currency (USD, EUR, etc.), it would be easier to work with floating-point numbers than to use strictly integer amounts.
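
The usual workaround, independent of the Onyx API, is to pick the smallest unit you care about (e.g. cents) and convert decimal amounts to integers at the application boundary. A minimal sketch:

import (
    "fmt"
    "strconv"
    "strings"
)

// amountToBaseUnits converts a decimal string such as "12.34" into integer
// base units, given how many decimal places the asset uses (2 for USD cents).
func amountToBaseUnits(s string, decimals int) (int64, error) {
    parts := strings.SplitN(s, ".", 2)
    frac := ""
    if len(parts) == 2 {
        frac = parts[1]
    }
    if len(frac) > decimals {
        return 0, fmt.Errorf("%q has more than %d decimal places", s, decimals)
    }
    frac += strings.Repeat("0", decimals-len(frac))
    return strconv.ParseInt(parts[0]+frac, 10, 64)
}

// amountToBaseUnits("12.34", 2) == 1234. Recording the number of decimal
// places in the asset definition lets every client scale amounts the same way.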

Make Java SDK 1.7 compliant

Environment

  1. Chain Core Version: latest
  2. Chain Core SDK Language and Version: Java latest
  3. Host: Docker on linux

Issue

To achieve broader usage of the Java SDK it would be nice to make it Java 7 compliant.
I've downloaded the source code, and there are a few steps to convert the SDK to Java 7.

  1. com.chain.http.Client -> remove lambdas (just create 3 inner classes or anonymous classes)
  2. com.chain.api.PagedItems -> implement the remove method (the Java 8 default is throwing an UnsupportedOperationException("remove"))
  3. com.chain.signing.HsmSigner.addKey(String, Client) -> add the type String to new ArrayList<>

Docker - Chain Core Initialization Error

Environment

Chain Core Version - chaincore/developer:latest (docker image)
Host - Docker

What steps will reproduce the problem?

docker run -it -p 1999:1999 chaincore/developer

What result did you expect?

Initialization successful.

What result did you observe?

Initializing Chain Core...

error: init schema creating migration table: dial tcp MY_PUBLIC_IP:5432: getsockopt: connection refused
Listening on: http://localhost:1999
Client access token:
Chain Core is online!

Looks like the migration is not able to proceed because it cannot connect to the database. Is there a way to configure the connection in the Docker setup?

Submitting a transaction with nil value crashes core

Environment

  1. Chain Core Version: latest build
  2. Chain Core SDK Language and Version: Ruby
  3. Host: local

What steps will reproduce the problem?

Attempt submitting a transaction with a nil value. If using the Ruby SDK, attempt submitting a tx such as client.transactions.submit(nil) to replicate

What result did you expect?

Expected a helpful error to return to point me in the right direction of what was causing it, and expected Chain Core to continue running

What result did you observe?

This crashed the locally running Chain Core, throwing the following error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x14376a]
goroutine 343 [running]:
panic(0x64f240, 0xc42000c0a0)
	/usr/local/Cellar/go/1.7.1/libexec/src/runtime/panic.go:500 +0x1a1
chain/core.(*Handler).finalizeTxWait(0xc4201c80b0, 0x153e160, 0xc4202f97a0, 0xc4201ec0c0, 0x0, 0xc420ccb250, 0xc4202d6810)
	/Users/chrisgarvin/src/chain/core/transact.go:164 +0x3a
chain/core.(*Handler).submitSingle(0xc4201c80b0, 0x153e160, 0xc4202f97a0, 0xc4201ec0c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/Users/chrisgarvin/src/chain/core/transact.go:102 +0x116
chain/core.(*Handler).submit.func1(0xc4201c80b0, 0x153e1a0, 0xc4202d6810, 0xc4202dab00, 0x1, 0x4, 0x0, 0xc420ccb1f0, 0x1, 0x1, ...)
	/Users/chrisgarvin/src/chain/core/transact.go:259 +0xec
created by chain/core.(*Handler).submit
	/Users/chrisgarvin/src/chain/core/transact.go:267 +0x16a

core: 500 forwarding to leader

Ran into this error while testing #486.

72010274Z status=500 chaincode=CH000 path=/build-transaction error="json: cannot unmarshal array into Go value of type map[string]interface {}"
/Users/jackson/src/chain/core/rpc/rpc.go:67 - chain/core/rpc.(*Client).Call
/Users/jackson/src/chain/core/api.go:331 - chain/core.(*Handler).forwardToLeader
/Users/jackson/src/chain/core/transact.go:78 - chain/core.(*Handler).build
/Users/jackson/src/chain/core/api.go:127 - chain/core.(*Handler).(chain/core.build)-fm
/usr/local/go/src/runtime/asm_amd64.s:516 - runtime.call128
app=cored buildtag=dev processID=chain-mba.local-25974-d430288902dc23074f0e reqid=36285cb64c07e3db6e75 at=http.go:52 t=2017-02-08T00:57:40.975637505Z status=500 chaincode=CH000 path=/build-transaction error="json: cannot unmarshal array into Go value of type map[string]interface {}"

Creation of transaction fails for Assets created from dashboard with "Generate new MockHSM key" option

Hi,

Creation of issuance transaction fails for Assets created from dashboard with "Generate new MockHSM key" option.

Works well for the assets created using dashboard If you provide an existing xpub key.

Environment

  1. Chain Core Version - 1.0.2
  2. Chain Core SDK Language and Version - Ruby
  3. Host (e.g. Mac App, Windows App, Docker) - Windows 8.1

What steps will reproduce the problem?

  1. Create an asset from dashboard with Generate new MockHSM key option
  2. Create an account
  3. Submit issuance transaction using API

What result did you expect?

Transaction successfully created.

What result did you observe?

Chain::APIError: Code: CH735 Message: Transaction rejected Detail: validation failed in script execution, input 0 (program [0x7b7d DROP DUP TOALTSTACK SHA3 0x587f76b87e2a4151a0331037ecd15cc7a4a76d249ce15669543ab8d5417cc293 0x01 0x01 CHECKMULTISIG VERIFY FROMALTSTACK FALSE CHECKPREDICATE] args [ 207d71e944839c0e76cb0c37839b64c8a252c6252e95647a370a5f51701ae152c0ae87]): VERIFY failed Request-ID: 5af74f4ca6f53b5c71b1

Breaking changes to the Protocol 1

This is a draft of a list of breaking (hard forking) changes for P1 before we fully commit to it long-term.

Each feature has its own PR open which is meant to be merged when fully implemented and covered with tests.

☑️ 1. Asset ID without version

Asset IDs should not include a version tag, to allow their reuse in new versions of outputs. Initially the tag was thought of as a necessary security feature for non-upgraded nodes, but the balancing rules are enough to force assets to balance in that case. And removing the version tag allows nice reuse of already-deployed asset IDs in new versions of outputs.

PR: #261

☑️ 2. Drop RIPEMD160 and SHA1

They are unnecessary, outdated, and (in the case of SHA1, anyway) weak.

PR: #268

☑️ 3. Fixed-length outpoint format

SHA3(txid || index || SHA3(output)) - constant size, commits to the output contents directly for HSM-friendliness.

PR: #417
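
A minimal sketch of the proposed format; the index encoding (little-endian uint32) is an assumption for illustration, not necessarily what the spec or PR #417 specifies:

import (
    "bytes"
    "encoding/binary"

    "golang.org/x/crypto/sha3"
)

// outpointID computes SHA3(txid || index || SHA3(output)): constant size, and
// it commits to the output contents directly, which is HSM-friendly.
func outpointID(txid [32]byte, index uint32, serializedOutput []byte) [32]byte {
    outputHash := sha3.Sum256(serializedOutput)
    var buf bytes.Buffer
    buf.Write(txid[:])
    binary.Write(&buf, binary.LittleEndian, index) // index encoding is illustrative
    buf.Write(outputHash[:])
    return sha3.Sum256(buf.Bytes())
}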

☑️ 4. Get rid of all Optional Hashes

Optional Hash is a premature optimization that mixes two concerns: minimizing the amount of data to transmit when the hashed string is actually empty, and minimizing the amount of data to be hashed. The first belongs to the transport protocol, which can compress common patterns (we assume such a protocol will already exist for other pieces). The second concern (hashing a bit more data) does not seem to be a serious overhead vs. the added implementation complexity.

This is going to be a side effect of the new txgraph redesign. See below.

5. Transaction entries

Replace two lists of inputs/outputs with a single flat list of entries. Entries can be issuances, inputs, outputs or retirements. We can remove OP_FAIL opcode. We can have multiple "reference data" entries, so each party in the transaction can attach that data. Arbitrary accounting rules and behaviours can be added with future entry types.

PR: #264

6. Review the implementation to check if NOPs are disallowed in tx v1 and allowed in unknown versions.

We need to carefully review implementation of extensibility rules.

7. Refactor tx validation spec (compatible change)

Change validate+validate-well-formedness+apply to validate+apply procedures for txs. Same for block.

8. THRESHOLD instruction

It is extra-hard to combine programs in a multisig manner, which is necessary for consensus programs. Since the block VM version is hard to change, we should have a workable THRESHOLD instruction to make it possible to combine individual programs. This will not eliminate CHECKMULTISIG, but augment it.

For instance, it could be a TAKE instruction, which takes m specified unique indexes from a set of items: index[m-1] ... index[0] items[n-1] ... items[0] m n TAKE -> items[index[m-1]] ... items[index[0]]. This is the start of a more flexible approach to CHECKMULTISIG and other threshold-type operations.

PR: TBD.

Dropped proposals:

Remove runlimit argument from CHECKPREDICATE

Program can be tested to be well-formed and in-bounds offline, so the network does not need to impose extra internal runtime limits in addition to the already imposed per-input runlimit.

PR: #262

UPDATE: Ivy will target VM2 anyway, so we can avoid cosmetic changes to VM1.

Explicit VERIFY

Simplify VM validation rules by having outer program and OP_CHECKPREDICATE programs not fail, instead of requiring true value on stack.

PR: #263

UPDATE: Ivy will target VM2 anyway, so we can avoid cosmetic changes to VM1.

Fixed-length integers on VM stack

In the transaction format, all integers are varints, but on the stack they should be fixed-length 8-byte signed ints for simplicity of implementation and security of smart contracts: fixed-length ints can be concatenated with other data without extra precautions like length-prefixing or extra hashing.

PR: #267

UPDATE: Ivy will target VM2 anyway, so we can avoid cosmetic changes to VM1.

Reorder msg in CHECKMULTISIG and CHECKSIG

So that signed message is the first argument, rather than in the middle, making them easier to compile to.

PR: Not to be done in P1.

UPDATE: Ivy will target VM2 anyway, so we can avoid cosmetic changes to VM1.

CHECKMULTISIG -> TAKE

It is extra-hard to make multisig rule out of signature programs (when each signature is actually a signed program). We need to generalize CHECKMULTISIG to support a selection of a subset (m-of-n) of programs to be executed.

For instance, it could be a TAKE instruction, which takes m specified unique indexes from a set of items: index[m-1] ... index[0] items[n-1] ... items[0] m n TAKE -> items[index[m-1]] ... items[index[0]]. This is the start of a more flexible approach to CHECKMULTISIG and other threshold-type operations.

PR: Not to be done in P1.

Cannot specify filter using spent_output property

Bug does not appear in 68c85f9 (tag sdk.ruby-1.0.1), but appears in 7579cb1

Queries such as the following used to be valid:

inputs(spent_output.transaction_id='ad929976dbde976d2b74b798632c762fb8146ad0c0d1e0d883487fcee1ebb2c4')

However, I now get the following error message:

Code: CH602 Message: Malformed query filter Detail: invalid attribute: spent_output Request-ID: b2e381c701e04ec73e65

spent_output is deprecated in 1.1, but should still work until the property is fully removed.

Transaction witness hash computed incorrectly

The spec includes the hash of the transaction common witness fields when computing the transaction witness hash, but the implementation does not.

(The fact that the common witness fields are presently empty doesn't mean that we can omit the empty-string's hash from the computation.)
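
A minimal sketch of the point being made, with a hypothetical layout (not the spec's actual serialization): the hash of the common witness fields is folded in unconditionally, even when those fields are empty:

import "golang.org/x/crypto/sha3"

func txWitnessHash(commonWitness []byte, inputWitnessHashes [][32]byte) [32]byte {
    h := sha3.New256()
    cw := sha3.Sum256(commonWitness) // SHA3 of the empty string when no common fields are set
    h.Write(cw[:])                   // must not be skipped just because commonWitness is empty
    for _, iw := range inputWitnessHashes {
        h.Write(iw[:])
    }
    var out [32]byte
    h.Sum(out[:0])
    return out
}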

Dashboard produces "blank check" error when "unbalanced transaction" would be more helpful

Create a transaction in the dashboard with some inputs and no outputs, and try to submit. The error you get is CH705: "Unsafe transaction: leaves assets to be taken without requiring payment." But that error is intended for users of the API, where unbalanced transactions might legitimately need to be created for e.g. atomic swaps. The dashboard deals only in balanced transactions, so "unbalanced transaction" would be a more informative and actionable error message.

Restore spent_output annotation to tx inputs

#421 removed the spent_output annotation for transaction inputs. We merged this change for the sake of moving forward with a big PR (nice work @oleganza!), but there are good reasons to retain that annotation--it provides easy access to transaction history (a feature of past use cases), and in any case, we should avoid the breaking change. I spoke with Oleg and he proposed filing this issue after landing the initial change.

Failing to add key to SDK signer produces misleading error message

Reproducible on 3579bf1

If you attempt to sign for a spending transaction without first adding the relevant key to the SDK signer object, the signer will not add a signature to the transaction. During the submit step, you are not told that signatures are missing; instead, you will receive a CH736 "Transaction not final" error. While this error is technically correct, it is not the right one to deliver to the user. The error should be that the signature is missing, and possibly hint that an xpub needs to be added to the client-side signing object.

The following Ruby script can recreate the problem:

$:.unshift ENV['CHAIN'] + '/sdk/ruby/lib'

require 'chain'

c = Chain::Client.new
s = Chain::HSMSigner.new

asset_key = c.mock_hsm.keys.create
s.add_key(asset_key, c.mock_hsm.signer_conn)

account_key = c.mock_hsm.keys.create
# Don't add the key to the signer
#s.add_key(account_key, c.mock_hsm.signer_conn)

asset = c.assets.create(root_xpubs: [asset_key.xpub], quorum: 1)
acc = c.accounts.create(root_xpubs: [account_key.xpub], quorum: 1)

c.transactions.submit(s.sign(c.transactions.build { |b|
  b.issue asset_id: asset.id, amount: 100
  b.control_with_account account_id: acc.id, asset_id: asset.id, amount: 100
}))

c.transactions.submit(s.sign(c.transactions.build { |b|
  b.spend_from_account account_id: acc.id, asset_id: asset.id, amount: 1
  b.retire asset_id: asset.id, amount: 1
}))

The immediate fix for this should be to provide a better error message.

Some suggestions for longer-term fixes:

  1. The SDK signing interface's sign method could return the number of signatures added. Based on the semantics of the multi-HSM signing interface, it is not an error to add zero signatures, but the user could check.
  2. Move away from the multi-HSM signing interface, which is solving a future problem (having multiple HSMs in the same application) by making the 99% common case harder and more bug-prone. Adding xpubs to the signer provides for HSM routing that does not actually occur in practice. Up front, we can provide sign/sign-batch methods directly from the Mock HSM, and remove the artificial constraint that the client must add xpubs to the signer API. This would remove the "Load key" step in our setup workflow, which is a significant efficiency.

NodeJS snippet documentation missing in Chain Core documentation

Environment

  1. Chain Core Version: N/A
  2. Chain Core SDK Language and Version: N/A
  3. Host (e.g. Mac App, Windows App, Docker): N/A

What steps will reproduce the problem?

A recent change in the site seems to have broken the NodeJS snippets. Visit: https://chain.com/docs/core/build-applications/transaction-basics

What result did you expect?

NodeJS snips

What result did you observe?

"Snippet "multiasset-within-core" is not in "core/examples/node/transactionBasics.js"

Remove xpub whitelist requirement from signerd requests

We'd like to roll back the changes introduced by 7394ca1, which required signer clients to provide a whitelist of xpubs that the signer was allowed to use. This whitelist was in addition to the list of xpubs already present in the transaction template(s) included in the same signing request.

  • Remove the logical requirement, enforced by this block of code
  • Update the SDKs so that they no longer transmit this whitelist.
  • Clean up any remaining code in the core libs related to serializing/deserializing the whitelist.

This will break compatibility between new versions of the SDK and old versions of the core, so it probably constitutes a minor version bump.

cc: @bobg @kr @tarcieri

Add back checks for unknown extensible-string suffixes

#426 removed some (misplaced) checks on various commitment and witness structures to ensure that they contain no unrecognized additional data (for known data versions - unrecognized data is fine in unknown versions). Those checks need to be added back in a few places in validation.

core/query: race condition annotating remote issuances

There's a race condition when a block lands that includes a transaction issuing a new, remote asset. Since block indexing callbacks run in parallel now, it's possible for the core/query callback to annotate the transaction before the core/asset package has recorded the new asset in the database. The resulting annotated transaction will be missing the asset definition.

Transactions using the asset in any subsequent blocks will be correctly annotated with the asset definition.

Possible solutions:

  1. Don't run block callbacks in parallel. This would impact performance.
  2. Include the block height when calling out to the asset package to annotate transactions. In the asset package's annotation function, explicitly wait until the asset package's callback has finished indexing the block at the provided height (see the sketch below).
  3. When annotating a transaction, if the asset isn't indexed in the database, manually pull the asset definition out of the input's issuance program. We'll also need to keep it around through the duration of annotating the block, because other inputs/outputs/transactions within the block may also refer to the new asset.
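
A minimal sketch of option 2, with hypothetical names: the asset package publishes the height it has finished indexing, and the query annotator waits for that height before annotating:

import "sync"

type heightWaiter struct {
    mu     sync.Mutex
    cond   *sync.Cond
    height uint64 // last block height fully indexed by the asset package
}

func newHeightWaiter() *heightWaiter {
    w := &heightWaiter{}
    w.cond = sync.NewCond(&w.mu)
    return w
}

// Finished is called by the asset package's block callback once it has stored
// every asset issued in the block at height h.
func (w *heightWaiter) Finished(h uint64) {
    w.mu.Lock()
    if h > w.height {
        w.height = h
    }
    w.mu.Unlock()
    w.cond.Broadcast()
}

// WaitFor is called by the core/query annotator before annotating the block
// at height h, guaranteeing the asset definitions are already in the database.
func (w *heightWaiter) WaitFor(h uint64) {
    w.mu.Lock()
    for w.height < h {
        w.cond.Wait()
    }
    w.mu.Unlock()
}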

Access token list API does not return pagination information

In a core with > 100 tokens, the next key in the response from /list-access-tokens for the first page of results is:

next: {page_size: 0, timeout: 0, after: "", type: "client"}

after: "" cannot be used to build a query for the next page.

Tx is not "final" in certain cases

When a transaction is constructed in multiple steps, all steps but the last one must specify "allow additional actions" to instruct the signing phase to produce signature programs that commit only to the aspects of the tx already present (so that additional actions don't invalidate the ones that have been signed). But for idempotence, and to harden against certain kinds of replay attacks, when publishing the tx we require the presence of at least one input that commits to the tx as a whole via its hash.

But this doesn't work when the last build step involves only issuance inputs. (Exception: when all inputs are issuances.) That's because the check for tx finality is looking for the specific signature program

DATA_32 <hash> TXSIGHASH EQUAL

but we don't use signature programs for issuances.

This is producing spurious "transaction is not final" errors.

core: nil pointer dereference on submit

While running the java integration tests locally, I noticed the log lines:

Running com.chain.integration.FailureTest
app=cored buildtag=dev processID=chain-mba.local-40566-65f0aa67d90cb6706c94 reqid=728c36bc5c57d812ae2f at=http.go:52 t=2017-01-19T00:09:23.310804967Z subreqid=1f2aef6136b76285693e status=400 chaincode=CH202 path=/create-account error="at least one xpub is required"
app=cored buildtag=dev processID=chain-mba.local-40566-65f0aa67d90cb6706c94 reqid=6f677949f17bafae0937 at=http.go:52 t=2017-01-19T00:09:23.314696257Z subreqid=915b5c75c5130998fda5 status=400 chaincode=CH202 path=/create-asset error="at least one xpub is required"
app=cored buildtag=dev processID=chain-mba.local-40566-65f0aa67d90cb6706c94 reqid=051b9969d2a994a8bc0b at=http.go:52 t=2017-01-19T00:09:23.319110197Z subreqid=0889080bd5743baac20b status=400 chaincode=CH003 path=/create-control-program error="unknown control program type \"\": httpjson: bad request"
app=cored buildtag=dev processID=chain-mba.local-40566-65f0aa67d90cb6706c94 reqid=a31e023083b4da3276f7 at=http.go:52 t=2017-01-19T00:09:23.346207285Z subreqid=cd23151fcadb8c0e96bf status=400 chaincode=CH706 path=/build-transaction error="errors occurred in one or more actions"
app=cored buildtag=dev processID=chain-mba.local-40566-65f0aa67d90cb6706c94 reqid=54c533e2ab27e673589f at=http.go:52 t=2017-01-19T00:09:23.372051558Z subreqid=6f15cea3e09d970c7000 status=500 chaincode=CH000 path=/submit-transaction error="runtime error: invalid memory address or nil pointer dereference"

There's a nil pointer dereference at the end of snippet. This test is designed to fail, so the integration test still passes. However, it should probably fail some other way than a nil pointer.
