
diamond-contracts-core's People

Contributors

andogro, avasilevichaet, d10r, demimarie, dforsten, i-stam, igorbarinov, kotsin, phahulin, surfingnerd, varasev, vbaranov, vkomenda


diamond-contracts-core's Issues

regression test for contract updates

#107
Contract updates are a crucial task.
In an update situation where contract set A works fine and contract set B works fine, it is not guaranteed that the update itself will work fine.
The biggest danger are contract data structure updates, which are not supported because of the proxy pattern.

We need to define a system that targets this issue.

early epoch end

Triggers if the number of failing validators approaches the critical point where consensus can no longer be reached.
What exactly the critical point is remains to be defined.
The Node implementation is discussed here: DMDcoin/diamond-node#87
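
For orientation: BFT consensus with n validators tolerates f = floor((n - 1) / 3) faults, so consensus stalls once more than that many validators fail. A minimal sketch of such a trigger, assuming hypothetical availability-tracking state (not the actual contract storage):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    /// Sketch only: `validatorCount` and `failingValidatorCount` are
    /// hypothetical state variables, not the actual contract storage.
    contract EarlyEpochEndSketch {
        uint256 public validatorCount;
        uint256 public failingValidatorCount;

        /// @dev BFT consensus with n validators tolerates f = (n - 1) / 3
        /// (integer division) faults. Once the failing validators reach
        /// that tolerance, one more failure would stall consensus, so the
        /// epoch should end early and a fresh set should be elected.
        function shouldEndEpochEarly() public view returns (bool) {
            if (validatorCount == 0) return false;
            return failingValidatorCount >= (validatorCount - 1) / 3;
        }
    }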

security concept for contract updates

Contract upgrade security is handled by an owner address.
This should not be a single-point-of-failure admin address,
but probably also not the DAO itself,
since a problem within the contracts could render the DAO,
which operates on the basis of those contracts, inoperational.
We are considering a multisig solution combined with DAO voting (maybe only signal voting?).

Contract updates need to be atomic most of the time.
The Diamond framework with the DAO and the HBBFT POSDAO is a set of contracts that needs to be updated as a whole.
Updating only parts of a tested set is unacceptable.

concept

Ethereum does not allow bundling more than one function call into a single transaction,
therefore we need to split the process up (a minimal sketch of the multisend idea follows the list):

  • deploy the new contracts (they are inactive after deployment)
  • retrieve the addresses of the new contracts
  • dynamically create a multisend transaction that includes all relevant calls for the update (upgradeTo and upgradeToAndCall)
  • create an upgrade proposal in the DAO (execute the multisend)
  • wait to collect enough votes
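
A minimal sketch of the multisend idea, not the project's actual implementation (all names are hypothetical). The DAO would execute something like this as the payload of an accepted upgrade proposal, so all upgradeTo/upgradeToAndCall calls either succeed together or revert together:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    /// Sketch of an atomic batch-call helper for contract upgrades.
    contract MultiSendSketch {
        struct Call {
            address target; // proxy to upgrade
            bytes data;     // encoded upgradeTo(...) or upgradeToAndCall(...)
        }

        /// @dev Executes all calls in one transaction; reverts as a whole
        /// if any single call fails, keeping the contract set consistent.
        function multiSend(Call[] calldata calls) external {
            for (uint256 i = 0; i < calls.length; i++) {
                (bool success, ) = calls[i].target.call(calls[i].data);
                require(success, "MultiSend: upgrade call failed");
            }
        }
    }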

PoC implementation:

concept branch:

Resources:

The contract update tool needs to be extended:
#107

All Nodes dropped out because of one slow node.

Miner: 0x0289834bc149Bc7D4C6725eD32c82c2eeA12963f
Last sign of life: block 100424 => wrote its PART together with 7 other nodes.
http://explorer.uniq.diamonds/blocks/100424/transactions

One node wrote its PART one block later: 0xfD211FC852Fb0E4a1709427eDBF2B1070eD61F83 -> that one is fine.

One node missed writing its PART: 0xeF3C53ec3BaF974C36eF776Df4b992a735e31C55 (miner: 0xa9880f72792E25db96Dbf7767F6501f20b8Ea2C3).

It wrote its PART much later, at block

#100453: 0xa9880f72792E25db96Dbf7767F6501f20b8Ea2C3 has written a PART.
http://explorer.uniq.diamonds/tx/0x727c99a26a3e2092744888164943035cd7de824b6fb9020ca7c91db7f2093682/internal-transactions

423 03:17:14 (16 txs - )
.
.
.
452 03:25:23
453 03:25:43 - transaction got included.
454 03:26:07
455 03:26:29

Now that all PARTs were written, the nodes should write their ACKs:
http://explorer.uniq.diamonds/blocks/100455

Two blocks later, they seem to get kicked out (only verifiable in the UI):
http://explorer.uniq.diamonds/blocks/100455

One block later, some more:
http://explorer.uniq.diamonds/blocks/100456

One of these nodes was hbbft20:
0xd79cb3484906d9f235e15ff4c94715985d7fb568.
It was one of the earliest to write its PART, at block 100423.

So the node got kicked out by unlucky timing, because one node was slow:
0xa9880f72792E25db96Dbf7767F6501f20b8Ea2C3

The restart of those nodes happened at roughly block 100640:
http://explorer.uniq.diamonds/blocks/100640/transactions

During the whole key generation phase (blocks 100418 - 100463), hbbft26 was connected to hbbft20 and did send consensus messages - but the PART transaction was never received.

Epoch switch implementation

Implementation of Phase 2, where the epoch switch is processed: finalizeChange()

According to: https://github.com/DMDcoin/wiki/wiki/Integrating-Honey-Badger-BFT-with-Ethereum-POSDAO-Contracts

  • support for defining the transition time window
  • tests for invalid transition time window values
  • transition to Phase 2
  • integrating finalizeChange() into reward() to perform the epoch transition
  • investigate: can randomNumber be the blockhash for the newValidatorSet function?

50k staking limit

For coin-economic reasons a staking limit of 50k is planned.
That means the following operations are not allowed:

  • creating a validator with more than 50k stake
  • delegating on a node if that would result in the node holding more than 50k stake
  • moving delegated stake to a new validator if that would result in more than 50k stake

When developing something like automatic epoch reward restaking, this limit should be ignored (see the sketch after the checklist below).

  • new variable
  • initializer contract
  • initializer test
  • network creation scripts
  • contract upgrade handling
  • unit test failure inclusion
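
A minimal sketch of the cap check, assuming hypothetical names (`MAX_POOL_STAKE`, `poolTotalStake`, `ignoreCap`); the real StakingHbbft contracts would integrate this into their stake/delegate/moveStake paths:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract StakeCapSketch {
        uint256 public constant MAX_POOL_STAKE = 50_000 ether;

        mapping(address => uint256) public poolTotalStake;

        /// @dev Guard for stake, delegate and moveStake operations.
        /// Automatic epoch reward restaking would pass `ignoreCap = true`.
        function _checkStakeCap(address pool, uint256 amount, bool ignoreCap)
            internal
            view
        {
            if (ignoreCap) return;
            require(
                poolTotalStake[pool] + amount <= MAX_POOL_STAKE,
                "pool stake would exceed the 50k limit"
            );
        }
    }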

Contract Simplifications

Some parts of the contracts contain logic whose complexity is no longer required.
For example: RandomHbbft.sol

The random contract gets called every block, and other contracts have dependencies on the Random contract.

Investigate if those functions can be deleted:

  • RandomHbbft: BlockReward
  • RandomHbbft: OnlyInitialized and validatorSetContract

Block close issue

Setup

1 MOC + 6 nodes + 1 RPC node.
Started with 7 validators, then switched the contract setting to 4 validators.
1 node got flagged as unavailable because it could not write its PART transaction.
This leaves 5 potential validators.
4 nodes got selected as pending validators, but they are unable to write their ACKs (reason unknown, not part of this issue).

expected behavior:

All 4 nodes get flagged as unavailable, leaving 1 available node behind that becomes the one and only pending validator.
The one node that is able to take over is also a current node.

actual behavior:

The contract call crashes.

automatic reward restaking

The reward payout functions seem to be very complicated, and there are a lot of voices asking to simplify them.

    /// @dev The reward amount to be distributed in native coins among participants (the validator and their
    /// delegators) of the specified pool (mining address) for the specified staking epoch.
    mapping(uint256 => mapping(address => uint256)) public epochPoolNativeReward;

It would simplify a lot of the logic connected to claiming rewards, like:

epochsToClaimRewardFrom
_epochsPoolGotRewardFor
claimReward
...

The restaking solution is so obvious - why was it not the solution in the first place?
Rumors tell that this was changed by POA Network after a first audit round.
Or was "send rewards to a target address" the first solution?

Note that automatic reward restaking should be able to bypass the hard limit of 50k staked on one node: #80
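
A minimal sketch of the restaking idea, with hypothetical names: instead of recording a claimable per-epoch reward, the reward is added to the pool's stake directly, so no per-epoch claiming bookkeeping is needed:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract RestakeSketch {
        mapping(address => uint256) public poolTotalStake;

        /// @dev Called once per pool when the epoch reward is distributed.
        /// The stake simply grows; epochPoolNativeReward, claimReward and
        /// the epoch bookkeeping around them become unnecessary.
        function _restakeReward(address pool, uint256 reward) internal {
            poolTotalStake[pool] += reward;
        }
    }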

HBBFT service transaction permissioning

In the current state,
a pending validator can send multiple free time his ACKS and PARTS for the shared key,
since a validator is allowed to send, if he is a isPendingValidator.

To make this consistent with other service transaction logics,
it should be caged to allow only to write of an ACK or PART if it has not been already happen for a pending validator.
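
A minimal sketch of such a write-once guard (the mappings are hypothetical; KeyGenHistory stores this state differently):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract KeyGenWriteOnceSketch {
        mapping(uint256 => mapping(address => bool)) public partWritten;
        mapping(uint256 => mapping(address => bool)) public acksWritten;

        function writePart(uint256 _upcommingEpoch, bytes memory _part) public {
            // only one PART per pending validator and epoch
            require(!partWritten[_upcommingEpoch][msg.sender], "PART already written");
            partWritten[_upcommingEpoch][msg.sender] = true;
            // ... store `_part` in the key gen history ...
        }

        function writeAcks(uint256 _upcommingEpoch, bytes[] memory _acks) public {
            // only one ACK set per pending validator and epoch
            require(!acksWritten[_upcommingEpoch][msg.sender], "ACKs already written");
            acksWritten[_upcommingEpoch][msg.sender] = true;
            // ... store `_acks` in the key gen history ...
        }
    }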

Develop staking & withdraw timelock

During development of the tests (#12), this requirement has been commented out:
contracts/base/StakingHbbftBase.sol:980

including the test "should fail if staking time is inside disallowed range"

test/StakingHbbft.js:184

It was not well implemented, and evm_increaseTime doesn't help here at all.
#13

see checkin: 0688700
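
For reference, a minimal sketch of what such a timelock check could look like, assuming hypothetical duration names; the commented-out requirement in StakingHbbftBase.sol expressed a similar rule:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract StakingWindowSketch {
        uint256 public stakingEpochStartTime;
        uint256 public stakingFixedEpochDuration;     // planned epoch length
        uint256 public stakingWithdrawDisallowPeriod; // locked tail of the epoch

        /// @dev Staking and withdrawing are only allowed before the
        /// disallow period at the end of the epoch begins.
        function areStakeAndWithdrawAllowed() public view returns (bool) {
            uint256 allowedUntil = stakingEpochStartTime
                + stakingFixedEpochDuration
                - stakingWithdrawDisallowPeriod;
            return block.timestamp < allowedUntil;
        }
    }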

Tests fail on real network.

After executing the tests against a real network,
like https://github.com/DMDcoin/honey-badger-testing/tree/35891e64bcef01cdd80cbf4e732b1120332c2220/testNode
a lot of tests that succeed in ganache fail on the real network.
Those that fail are all "expected promise to be rejected with an error including" tests.

Truffle sees the transaction as a success in this case, but the status is false.
There are no logs or logsBloom entries that would give more hints.

Failed transaction and transaction receipt:

{ blockHash:
   '0x782deba62ae6a27d22ad59ea3673e9fca06547d509277e56069a65462c838c11',
  blockNumber: 108,
  chainId: null,
  condition: null,
  creates: null,
  from: '0x810098c4632A8dB0160b7559ef9D64532898e0A9',
  gas: 8000000,
  gasPrice: '20000000000',
  hash:
   '0xd79bedb5d1ea769b7a86f887e218b82beaa866fac6603a75c25e5c020f51b337',
  input:
   '0x56e4d6c2000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
  nonce: 1,
  publicKey:
   '0x2aed4712e13175bbed2eb1ffc6111181b00bc162ce8d282549d26460628710352995fee0863ee66ddab35c29652ec4ae6eb698fffb02cdaa01e23c2929c34060',
  r:
   '0x56210735c6035f1657b7ed7951c25412ca1ffc219e89c7f3e77c9b37e55afb11',
  raw:
   '0xf90132018504a817c800837a1200948a8db1d6da9df1bdffe118063347813ddebf929d880de0b6b3a7640000b8c456e4d6c20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001ba056210735c6035f1657b7ed7951c25412ca1ffc219e89c7f3e77c9b37e55afb11a028c986be271b1163d5125ab8a4c7f62ca559e1c789d877e6fac506648f4fa38a',
  s:
   '0x28c986be271b1163d5125ab8a4c7f62ca559e1c789d877e6fac506648f4fa38a',
  standardV: '0x0',
  to: '0x8A8Db1d6DA9DF1bdFfE118063347813DDebf929D',
  transactionIndex: 0,
  v: '0x1b',
  value: '1000000000000000000' }
{ blockHash:
   '0x782deba62ae6a27d22ad59ea3673e9fca06547d509277e56069a65462c838c11',
  blockNumber: 108,
  contractAddress: null,
  cumulativeGasUsed: 27726,
  from: '0x810098c4632a8db0160b7559ef9d64532898e0a9',
  gasUsed: 27726,
  logs: [],
  logsBloom:
   '0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
  status: false,
  to: '0x8a8db1d6da9df1bdffe118063347813ddebf929d',
  transactionHash:
   '0xd79bedb5d1ea769b7a86f887e218b82beaa866fac6603a75c25e5c020f51b337',
  transactionIndex: 0 }

Include automated source code upload to blockscout

Example:

curl -d '{"addressHash":"0xc63BB6555C90846afACaC08A0F0Aa5caFCB382a1","compilerVersion":"v0.5.4+commit.9549d8ff", "contractSourceCode":"pragma solidity ^0.5.4; contract Test { }","name":"Test","optimization":false}' -H "Content-Type: application/json" -X POST "https://blockscout.com/poa/sokol/api?module=contract&action=verify"

More info and a "Try it out" section can be found on the /api-docs page of the Blockscout instance.

enable announceAvailability as Service Transaction

"announceAvailability" is currently not treated as a free-of-charge service transaction.
Since validator nodes should call this free of charge, this change must be made.

The automated tests should also verify the free execution of those calls,
like it is already implemented for writeAcks and writeParts.

contracts update tool

contracts:

  • compare the current code with the deployed code
  • detect which contracts differ and need an update
  • make create calls for all contracts
  • execute a transaction that performs the switch to the new contract addresses

partial epoch payouts

Sometimes an epoch does not run to its full planned length,
for example because a fast validator set upscaling triggers; see #90.

Instead of paying out the full amount for the full epoch length,
only a fraction < 100% should be paid out (a minimal sketch follows).
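
A minimal sketch of the pro-rating, assuming hypothetical names:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract PartialEpochPayoutSketch {
        /// @dev Scales the planned epoch reward down to the fraction of
        /// the planned epoch duration that actually elapsed.
        function partialEpochReward(
            uint256 fullReward,
            uint256 actualDuration,
            uint256 plannedDuration
        ) public pure returns (uint256) {
            require(plannedDuration > 0, "planned duration is zero");
            if (actualDuration >= plannedDuration) return fullReward;
            return (fullReward * actualDuration) / plannedDuration;
        }
    }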

Reactivate Tx Permission

Starting the chain with the txPermission contract enabled resulted in:

Error calling tx permissions contract: "Invalid name `please ensure the contract and method you\'re calling exist! failed to decode empty bytes. if you\'re using jsonrpc this is likely due to jsonrpc returning `0x` in case contract or method don\'t exist`"

This hardening got removed and can be reapplied later.

Testing Problems with upgradeable contracts

The _admin() check does not work as expected in unit tests.

    /**
     * @return The admin slot.
     */
    function _admin()
    internal
    view
    returns (address adm) {
        bytes32 slot = ADMIN_SLOT;
        assembly {
            adm := sload(slot)
        }
    }

As expected, it returns the address of the creator of the contract.
But the check that verifies this:

require(msg.sender == _admin() || block.number == 0, "Initialization only on genesis block or by admin");

reverts during test execution without a known reason.

Sender and admin are equal:

Test result:

Result {
  '0': '0x5b305B732553Db714726E48Db99916fEf312125A',
  '1': '0x5b305B732553Db714726E48Db99916fEf312125A',
  sender: '0x5b305B732553Db714726E48Db99916fEf312125A',
  admin: '0x5b305B732553Db714726E48Db99916fEf312125A' }

function definition:

    function getInfo()
    public
    view 
    returns (address sender, address admin) {
        return (msg.sender, _admin());
    }

faster validator set upscaling

Sometimes we have small validator sets,
and during the epoch additional validators become available again.

In this case it is desirable that the small validator set scales up to a bigger one.
This must respect the sweet spots in validator counts as well; see #84.

The rule should only apply to validator sets smaller than 16 (~2/3 of the maximum validator count).

network stuck at block 52792

Block 52793 could not be produced.

Background:
A network upgrade caused this failed state,
most likely connected with the implementation of the key generation round counter:
DMDcoin/openethereum-3.x#46
#108

The deployment was done during the time window of Phase 1 within an epoch,
consisting of those steps.

Unfortunately I still could not manage to deploy the source code to blockscout:
DMDcoin/blockscout#12

Testing: Problems with evm_mine and evm_increaseTime

The function calls in truffle/ganache are designed for simulating mined blocks and forwarding time.
Within the tests, those function calls created problems instead of solutions.
The current solution uses sleep, which is not good for automated tests,
since a huge test suite might just take forever.

  // resolves after the given number of milliseconds of real wall-clock time
  function sleep(milliseconds) {
    return new Promise(resolve => setTimeout(resolve, milliseconds));
  }

see this checkin: 79ff648

ERC900: Simple Staking Interface Support

ERC900 (Simple Staking Interface) support could be very beneficial because it would make it simpler for other contracts and DApps to interact with HBBFT-POSDAO, even if only reading the information is allowed.
For example, Aragon supports ERC-900.

ethereum/EIPs#900

A proof-of-concept implementation would help to find out whether this is possible (read and/or write).

Real OpenEthereum unit tests

We proposed a unit test scenario using the real OpenEthereum client,
configured to use milliseconds instead of seconds as its time unit.
That way we can fast-forward and run it 1000 times faster,
simulate a real network with real reward() calls
and the real set of contracts (instead of mocks),
and also analyse the real storage consumption.

Phase 1&2 without a transition window?

If too much time passes by, it is possible that Phase 2 of the transition is skipped.
In this case a candidate gets selected, does not get the chance to write its PARTs/ACKs, and instantly gets marked as unavailable.
This seems to happen once at the first epoch, and when there is a problem creating a seal, like:
DMDcoin/openethereum-3.x#27

Automatic validator count sweet spot targeting

Situation:
If there are fewer validators than the maximum,
the contracts currently select "all" validators.

In unlucky situations, "all" validators can be a sour spot.
6 validators, for example, is a sour spot: it tolerates only as many failing validators as the previous sweet spot (4),
but has 2 additional validators that must not fail.

The sweet spots are:
4 - 7 - 10 - 13 - 16 - 19 - 22 - 25

The sour spots are in between. Example:
4: good, a sweet spot: 25% can fail.
5: bad: only 20% can fail.
6: worst: only 16.66% can fail.
7: good, the next sweet spot: 28.57% can fail.

Proposed solution:
If the "all" validators count is not a sweet spot, hold the election with a number of validators equal to the biggest possible sweet spot (see the sketch below).
This rule should only apply for n > 4.
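
Sweet spots are exactly the counts n = 3f + 1 (4, 7, 10, ...), where f validators may fail. A minimal sketch of picking the biggest sweet spot that fits, assuming hypothetical names:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract SweetSpotSketch {
        /// @dev Returns the largest sweet spot (3f + 1) not exceeding the
        /// number of available validators; the rule is only applied for
        /// counts above 4, as proposed. Examples: 5 -> 4, 6 -> 4, 7 -> 7.
        function targetValidatorCount(uint256 available)
            public
            pure
            returns (uint256)
        {
            if (available <= 4) return available;
            return ((available - 1) / 3) * 3 + 1;
        }
    }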

code coverage inclusion

We have automated tests, but we do not yet know which parts of the contract sources are covered by those tests and which are not.

Handle failing validator set switch

It is possible that a pending validator gets elected but fails to send its ACKs and PARTs to the KeyGenHistory.
In this case the failing validators get banned for some time, and new backup validators need to be picked.

Minimum gas price managed by contracts

Minimum gas prices need to be managed by the contracts as well, since there is no single "node" acting as block producer.
The OpenEthereum HBBFT implementation requires sending the transaction fees to the BlockReward contract (see this issue: DMDcoin/openethereum#11).
TODO:
Examine whether it is more beneficial to have separate BlockReward and EpochReward contracts, or whether it is better to have only one contract that handles all types of rewards (a minimal sketch of contract-managed gas pricing follows).
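
A minimal sketch of contract-managed minimum gas pricing, with hypothetical names; the real interface would live in the TxPermission/BlockReward contracts:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract MinGasPriceSketch {
        address public owner; // a multisig or the DAO
        uint256 public minimumGasPrice = 1 gwei;

        constructor() {
            owner = msg.sender;
        }

        /// @dev Nodes query `minimumGasPrice` instead of applying a local
        /// block producer setting; governance can adjust it on chain.
        function setMinimumGasPrice(uint256 newPrice) external {
            require(msg.sender == owner, "only owner");
            minimumGasPrice = newPrice;
        }
    }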

Investigate epoch by epoch claiming

The current implementation stores the epoch rewards in the BlockReward smart contract.

That means it stores the pool data (stakers and staking amounts),
and, for every epoch and every pool, a reward.

see base/BlockRewardHbbftBase.sol:

  
    mapping(address => uint256[]) internal _epochsPoolGotRewardFor;

    /// @dev The maximum per-block reward distributed among the validators.
    uint256 public maxEpochReward;

    /// @dev The reward amount to be distributed in native coins among participants (the validator and their
    /// delegators) of the specified pool (mining address) for the specified staking epoch.
    mapping(uint256 => mapping(address => uint256)) public epochPoolNativeReward;

    /// @dev The total reward amount in native coins which is not yet distributed among pools.
    uint256 public nativeRewardUndistributed;

    /// @dev The total amount staked into the specified pool (mining address)
    /// before the specified staking epoch. Filled by the `_snapshotPoolStakeAmounts` function.
    mapping(uint256 => mapping(address => uint256)) public snapshotPoolTotalStakeAmount;

    /// @dev The validator's amount staked into the specified pool (mining address)
    /// before the specified staking epoch. Filled by the `_snapshotPoolStakeAmounts` function.
    mapping(uint256 => mapping(address => uint256)) public snapshotPoolValidatorStakeAmount;

    /// @dev The validator's min reward percent which was actual at the specified staking epoch.
    /// This percent is taken from the VALIDATOR_MIN_REWARD_PERCENT constant and saved for every staking epoch
    /// by the `reward` function. Used by the `delegatorShare` and `validatorShare` public getters.
    /// This is needed to have an ability to change validator's min reward percent in the VALIDATOR_MIN_REWARD_PERCENT
    /// constant by upgrading the contract.
    mapping(uint256 => uint256) public validatorMinRewardPercent;

The data is stored for every epoch, and every epoch gets claimed separately.
This involves a lot of loops and complexity in the claiming logic.
The benefit of doing so is not clear and needs further investigation.
(What is the pro side of this approach?)

The pros are:

  • it is tested
  • it is audited
  • it is used in the real world on xDai, and we don't know of any problems with it

On the contra side:

The storage and computational overhead of all this comes at a heavy cost,
and even limits possibilities, like having shorter epochs.

reactivate upgrade proxies in deployments

Upgradable proxies have a lot of downsides when it comes to analysing what happens on chain.
Tools like truffle and blockscout have problems with them.
Because of this, they are temporarily deactivated, which means updates to these contracts are not possible.
But for the current state of the test networks, this is not really a problem.
This change was merged with pull request #71.

For the next stage of an internal test network, this feature should be reactivated.

switch from a block based system to a time based system

AURA works fine as a block-based system:
every x seconds (typically 0 < x <= 60) a block is mined,
so epoch lengths and ban durations were defined in block numbers.

In HBBFT we need to think in time instead, since there might be a block every x seconds, or there might be no block at all for a long time.

We need to switch from a block-based to a time-based system:
essentially swapping out getCurrentBlockNumber() for getCurrentTimestamp(), probably removing getCurrentBlockNumber() completely, and making all tests time-based (see the sketch below).
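
A minimal sketch of the overridable time getter, assuming hypothetical names; mock contracts in the tests could override it to fast-forward time:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract TimeBasedSketch {
        uint256 public stakingEpochStartTime;
        uint256 public stakingFixedEpochDuration;

        /// @dev Replaces getCurrentBlockNumber(); mocks override this
        /// in tests instead of relying on evm_increaseTime.
        function getCurrentTimestamp() public view virtual returns (uint256) {
            return block.timestamp;
        }

        function isEpochOver() public view returns (bool) {
            return getCurrentTimestamp() >=
                stakingEpochStartTime + stakingFixedEpochDuration;
        }
    }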

Fixing test errors (time)

Fixing test errors introduced by switching from AURA-style block-based epochs to time-based epochs.

NVM Node version streamlining

Parts of the project use very outdated Node versions, like v10.
This makes a lot of things very difficult.
We should upgrade to Node v14 (the latest LTS version).

Finishing Contracts and Tests

After the source code takeover, the state of the contracts is not clear.
A first investigation showed that the change from a block-number-based system to a time-based system was started, but the tests were never updated. It therefore looks like an unfinished set of changes that might not be coherent.

Currently all contracts compile,
but a lot of the issues seem to be connected to STAKING_EPOCH_START_BLOCK no longer being an argument of several calls.

Could not decode event!

Developing tests and debugging is quite a mess right now.
The number one debug feature in Solidity doesn't work in the current setup.

It just bombards us with:

Events emitted during test:
    ---------------------------

    Warning: Could not decode event!

    Warning: Could not decode event!

    Warning: Could not decode event!

    Warning: Could not decode event!

    Warning: Could not decode event!

proportional epoch rewards for smaller validator sets

Currently the payout for every epoch is the same size, regardless of the number of validators.

This means that with 4 validators, every validator receives 1/4 of the epoch reward,
which is a lot more than the 1/25 each validator receives in a full set.
This could be seen as a financial incentive to create only small sets, which increases the chances of failures.

In the future, only a corresponding fraction of the reward should be paid out,
so a validator set of 10 will get 10/25 of the rewards (see the sketch below).

Also, validators that become unavailable during the epoch (flag: "validator available since ...") will not receive a reward.
This can happen if they get reported as malicious during an epoch (feature not implemented yet).
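
A minimal sketch of the proportional scaling, assuming a maximum validator count of 25 as in the example above (names are hypothetical):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract ProportionalRewardSketch {
        uint256 public constant MAX_VALIDATORS = 25;

        /// @dev With 10 of 25 possible validators, only 10/25 of the full
        /// epoch reward is paid out instead of the full amount being
        /// split among the 10.
        function epochRewardFor(uint256 fullReward, uint256 validatorCount)
            public
            pure
            returns (uint256)
        {
            if (validatorCount >= MAX_VALIDATORS) return fullReward;
            return (fullReward * validatorCount) / MAX_VALIDATORS;
        }
    }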

key generation round counter

Currently the interface for writing the keys only takes the epoch number:

function writePart(uint256 _upcommingEpoch, bytes memory _part)

In the case of an additional key generation round, it is not clear whether the written parts were intended for this key generation round or for another one.
A transaction that a validator could not get included in a block during key generation round 1 could get included in the following key generation round (2) - but in that case its values are garbage.

Solution

Extend the interface and track the number of key generation rounds (see the sketch below):

function writePart(uint256 _upcommingEpoch, uint256 _keyGenRound, bytes memory _part)
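
A minimal sketch of the round tracking (storage names are hypothetical):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract KeyGenRoundSketch {
        uint256 public upcomingEpoch;
        uint256 public currentKeyGenRound;

        function writePart(
            uint256 _upcommingEpoch,
            uint256 _keyGenRound,
            bytes memory _part
        ) public {
            require(_upcommingEpoch == upcomingEpoch, "wrong epoch");
            // A PART created for an earlier round carries stale key
            // material; accepting it in a later round would store garbage.
            require(_keyGenRound == currentKeyGenRound, "wrong key gen round");
            // ... store `_part` in the key gen history ...
        }
    }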

24KB size limit

The contracts now exceed the 24KB size limit.
The issue in the tests has been solved: 2595633
DMD will support contract sizes > 24KB anyway, but we need to configure the parity clients to support this.

switch to github actions for CI

Our travis test pipeline does not work anymore because of changes on travis' side,
and the upgrade also asks for a high permission level like
"act on your behalf" - which is a high security risk: https://games.greggman.com/game/github-permission-problem/
There is also a lot of discussion going on about the pricing for open source projects.
The huge delay Travis had between a push and a test run has been an obstacle as well.
We can reduce some of these troubles by switching to github actions.

Reactivate msg.sender == _admin() check for initialization.

During testing, one "security" check kept failing in tests: #28

The following check got removed in several contracts:

require(msg.sender == _admin() || block.number == 0, "Initialization only on genesis block or by admin");

The corresponding test has also been commented out:
should fail if initialization is not done on the genesis block and sender is not admin
