
besu-pro-testnet's Introduction

LACCHAIN

--- This network is no longer active. If you want to deploy your application on an Ethereum-based LACChain network, check our Besu network ---

NOTES

This work was done by everis and is donated in its entirety to the LACChain Consortium.

References

  • This Quorum network uses IBFT consensus, with validator and regular nodes located across Latin America and the Caribbean.

  • This installation uses Ubuntu 18.04 as the operating system, and all commands are given for that operating system. Links for the prerequisites are also provided in case you need to install on another operating system.

  • An important consideration: we use Ansible, so the installation is run from a local machine against a remote server. This means the local machine and the remote server communicate via SSH.

  • The lacchain-node GitHub repository can be followed as a reference: https://github.com/lacchain/lacchain-node

System Requirements

Characteristics of the machine for the nodes of the testnet:

  • CPU: 2 cores

  • RAM Memory: 4 GB

  • Hard Disk: 30 GB SSD

  • Operating System: Ubuntu 16.04, Ubuntu 18.04, CentOS 7.4 or Red Hat Enterprise Linux 7.4, always 64-bit

It is necessary to open the following network ports on the machine where the node will be deployed:

  • 8443: TCP - Monitoring port.

  • 9000: TCP - Communication port for Constellation.

  • 21000: TCP/UDP - Port used to establish communication between geth processes.

  • 22000: TCP - RPC port. (This port is used by applications that communicate with LACChain and may be exposed to the Internet.)

Prerequisites

Install Ansible

For this installation we will use Ansible. Ansible must be installed on the local machine that will perform the installation of the node on the remote machine.

Follow these instructions to install Ansible on your local machine:

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Clone Repository

To configure and install Quorum and Constellation, clone this git repository on your local machine:

$ git clone https://github.com/lacchain/Lacchain lacchain
$ cd lacchain/

Install Python

  • For ansible to work, Python must be installed on the remote machine where the node will be installed; specifically, install python 2.7 and python-pip.

  • If you need to install python-pip on Red Hat, see https://access.redhat.com/solutions/1519803

$ sudo apt-get update
$ sudo apt-get install python2.7
$ sudo apt-get install python-pip

Quorum + Constellation Installation

Creation of a new Node

  • There are two types of nodes (validator / regular) that can be created in the Quorum network.

  • After cloning the repository, enter its directory:

    $ cd lacchain/
    
  • First, change the IP inside the inventory file to the public IP of the remote server where you are creating the new node.

     $ vi inventory
     [test]
     192.168.10.72
    
  • To deploy a validator node, execute the following command on your local machine, remembering to set the private key with the --private-key option and the SSH user with the -u option:

     $ ansible-playbook -i inventory -e first_node=false --private-key=~/.ssh/id_rsa -u vagrant site-lacchain-validator.yml
    
  • To deploy a regular node, execute the following command on your local machine, remembering to set the private key with the --private-key option and the SSH user with the -u option:

     $ ansible-playbook -i inventory --private-key=~/.ssh/id_rsa -u vagrant site-lacchain-regular.yml
    
  • When the installation starts, it will prompt for some data, such as the public IP, the geth account password and the node name. The node name is the one that will appear in the network monitoring tool.

  • At the end of the installation, if everything is correct, a validator node will have a GETH service managed by systemctl, created in stopped state.

  • In the case of a regular node, if everything is correct, a CONSTELLATION service and a GETH service managed by systemctl will be created, both in stopped state.

  • Now it is necessary to configure some files before starting up the node. Please follow the next steps:

Docker

  • If you did not clone the LACCHAIN repository in the previous steps, you need to clone it now:
$ git clone https://github.com/lacchain/Lacchain lacchain
$ cd lacchain/

Docker Node Validator

  • First, create a folder that will contain the files genesis.json, permissioned-nodes.json and static-nodes.json.
  • This folder will be mounted as the container's volume on the host machine.
$ mkdir /lacchain/data
  • Copy the following files from lacchain:
$ cp /lacchain/roles/lacchain-validator-node/files/genesis.json /lacchain/data/genesis.json 
$ cp /lacchain/roles/lacchain-validator-node/files/permissioned-nodes_validator.json /lacchain/data/permissioned-nodes.json
$ cp /lacchain/roles/lacchain-validator-node/files/static-nodes.json /lacchain/data/static-nodes.json
  • Now pull the docker image and run the container, setting your node identity and the folder that will be mounted as the volume:
$ docker pull lacchain/validator:1.0.0 
$ docker run -dit -e IDENTITY={YOUR_NODE_IDENTITY} -v {QUORUM_DIR}/lacchain/data:/lacchain/data -p 21000:21000 -p 30303:30303 lacchain/validator:1.0.0
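Before running the container, it can help to confirm that the three files copied above are actually in place; a minimal pre-flight sketch (the /lacchain/data path follows this guide; adjust it if your volume lives elsewhere):

```shell
# Pre-flight check: verify the config files the container expects are present.
DATA_DIR=/lacchain/data
for f in genesis.json permissioned-nodes.json static-nodes.json; do
  if [ -f "$DATA_DIR/$f" ]; then
    echo "ok: $f"
  else
    echo "missing: $f"
  fi
done
```

If any file is reported missing, repeat the corresponding cp step before starting the container.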

Docker Node Regular

  • First, create a folder that will contain the configuration files for geth and constellation.
  • This folder will be mounted as the container's volume on the host machine.
$ mkdir /lacchain/data
  • Copy the following files from lacchain:
$ cp /lacchain/roles/lacchain-regular-node/files/genesis.json /lacchain/data/genesis.json 
$ cp /lacchain/roles/lacchain-regular-node/files/permissioned-nodes_general.json /lacchain/data/permissioned-nodes.json
$ cp /lacchain/docker/regular/configuration.conf /lacchain/configuration.conf
$ cp /lacchain/docker/regular/start-node.sh /lacchain/start-node.sh
  • Create a password file to generate the constellation keys:
$ echo "Passw0rd" > /lacchain/.account_pass
  • Now pull the docker image and run the container, setting your node identity and the folder that will be mounted as the volume:
$ docker pull lacchain/regular:1.0.0 
$ docker run -dit -e IDENTITY={YOUR_NODE_IDENTITY} -v {QUORUM_DIR}/lacchain:/lacchain -p 9000:9000 -p 21000:21000 -p 22000:22000 -p 30303:30303 lacchain/regular:1.0.0

Node Configuration

Configuring the Quorum node file

If the node was created from scratch, the Creating a new node step modified several files in the local repository.

Note that the names of the files refer to the nodes that use them, not the nodes that have modified them during their creation.

In addition to these changes, which occur automatically when executing the node creation, there are two other files that must be modified manually, depending on the type of node created, to indicate the contact data of the node administrator (name, email, etc.): [DIRECTORY_VALIDATOR.md](DIRECTORY_VALIDATOR.md) or [DIRECTORY_REGULAR.md](DIRECTORY_REGULAR.md)

Start up Regular Node

Once these files have been modified, you can start up the node with the following commands on the remote machine:

<remote_machine>$ systemctl start constellation
<remote_machine>$ systemctl start geth

Start up Validator Node

On the other hand, if the node is a validator, the rest of the nodes in the network must execute the following to update their permissioned-nodes.json file:

$ ansible-playbook -i inventory -e validator=yes -e regular=no --private-key=~/.ssh/id_rsa -u adrian site-lacchain-update.yml

After the validator nodes have added the new validator node to their permissioned-nodes.json file of allowed nodes, execute the following command on the remote machine:

<remote_machine>$ systemctl start geth

Then, the file ~/lacchain/logs/quorum-XXX.log of the new validator node will show the following error message:

ERROR[12-19|12:25:05] Failed to decode message from payload    address=0x59d9F63451811C2c3C287BE40a2206d201DC3BfF err="unauthorized address"

This is because the rest of the validators in the network have not yet accepted the node as a validator. To request such acceptance, we must take note of the node's address (0x59d9F63451811C2c3C287BE40a2206d201DC3BfF).
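The address can be pulled straight out of the log line with standard tools; a small sketch (the log line is the example above, and the grep pattern is an assumption about the log format):

```shell
# Extract the node address from the "unauthorized address" log line.
line='ERROR[12-19|12:25:05] Failed to decode message from payload    address=0x59d9F63451811C2c3C287BE40a2206d201DC3BfF err="unauthorized address"'
addr=$(echo "$line" | grep -o 'address=0x[0-9a-fA-F]*' | cut -d= -f2)
echo "$addr"
```

On a real node you would feed the quorum-XXX.log file to grep instead of a literal string.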

Proposing a new validator node

  • Once the files have been modified, you must open a pull request against this repository.

  • If it is a validator node and it has an unauthorized address, you must indicate this address in the pull request.

  • To include this node as a validator node, the administrators of the other validator nodes must use the RPC API to vote if they agree with your node becoming a validator node.

> istanbul.propose("0x59d9F63451811C2c3C287BE40a2206d201DC3BfF", true);

or

$ cd /lacchain/data
$ geth --exec 'istanbul.propose("0x59d9F63451811C2c3C287BE40a2206d201DC3BfF",true)' attach geth.ipc

Thus, the new node will be brought up and synchronized with the network if and only if more than 50% of the validator nodes vote in your node's favor.

NEVER MAKE A PROPOSAL WITHOUT FIRST UPDATING THE FILES MENTIONED IN "Configuring the Quorum node file". After updating the files, you need to run the command:

ansible-playbook -i inventory -e validator=yes -e regular=no --private-key=~/.ssh/id_rsa -u adrian site-lacchain-update.yml

A VALIDATOR NODE MUST NEVER BE ELIMINATED WITHOUT PROPOSING THE REMOVAL THROUGH A PULL REQUEST, SO THAT THE REST OF THE VALIDATING MEMBERS REMOVE IT FROM THEIR FILES (PERMISSIONED-NODES.JSON, STATIC-NODES.JSON) FIRST AND THEN PROCEED TO A VOTING ROUND:

> istanbul.propose("0x59d9F63451811C2c3C287BE40a2206d201DC3BfF", false);

or

$ cd /lacchain/data
$ geth --exec 'istanbul.propose("0x59d9F63451811C2c3C287BE40a2206d201DC3BfF",false)' attach geth.ipc

Node Operation

  • If the node runs into errors, we can restart it by executing the following commands:
<remote_machine>$ systemctl restart constellation
<remote_machine>$ systemctl restart geth
  • The next command backs up the node's state, including the keys and the enode of your node. All backups are stored in the home directory under ~/lacchain-keysBackup.
$ ansible-playbook -i inventory -e validator=true --private-key=~/.ssh/id_rsa -u vagrant site-lacchain-backup.yml 

NOTE: If we want to generate the node using the enode and keys of an existing node, we must first back up the keys of the old node:

$ ansible-playbook -i inventory -e validator=true --private-key=~/.ssh/id_rsa -u vagrant site-lacchain-backup.yml 

This will generate the folder ~/lacchain-keysBackup, whose contents should be moved to ~/lacchain/data/keys. The keys in this directory (which must keep the folder structure of the generated backup) will be the ones used in the image of the node we are going to generate.
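The restore step described above can be sketched as follows, assuming the backup already exists in the home directory (the guard avoids failing when it does not):

```shell
# Restore backed-up keys into the data directory, preserving the
# backup's folder structure as required.
if [ -d ~/lacchain-keysBackup ]; then
  mkdir -p ~/lacchain/data/keys
  cp -r ~/lacchain-keysBackup/. ~/lacchain/data/keys/
else
  echo "no backup found"
fi
```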

Disclaimer test-net

The public-permissioned LACChain network offered by LACChain is currently at a test-net stage. Access to this network is described in this GitHub repository. The LACChain team is currently defining a road-map with both technology and legal requirements to release networks in production. From now on, we will refer to the LACChain test-net as "The Network".

Any natural or legal person that uses or operates The Network becomes a User. The User agrees to these terms and acknowledges that The Network is at an early stage of development. The User acknowledges and accepts that the use of The Network is entirely at the User’s sole risk and discretion.

To use The Network, The User must be authenticated, guaranteeing that every regular/access node operating The Network is associated to a physical or legal person. Every regular node must indicate the contact information of the natural person that is responsible and accountable for the operation of the regular node, including name and e-mail.

The User acknowledges and agrees that they have an adequate understanding of blockchain technology and the programming languages involved. The User also understands the risks associated with the use of The Network, which could present interruptions or malfunctions as it is at a test-net stage.

The User understands that all the information and materials published, distributed or otherwise made available on The Network are provided with no guarantee by the Allies, Members and Users of the LACChain program. Although The Allies will strive to offer robust technology, at this test-net stage they are not responsible nor accountable for the reliability of The Network.

All the content related to The Network is provided on an ‘as is’ and ‘as available’ basis, without any representations or warranties of any kind. All implied terms are excluded to the fullest extent permitted by law. No party involved in, or having contributed to the development of The Network, including but not limited to IDB, IDB lab, everis, ConsenSys, NTT Data, io.builders, LegalBlock and any of their affiliates, directors, employees, contractors, service providers or agents (The Parties Involved) accept any responsibility or liability to Users or any third parties in relation to any materials or information accessed or downloaded via The Network.

The User acknowledges and agrees that all the Parties Involved are not responsible for any damage to the User’s computer systems, loss of data, or any other loss or damage resulting from the use of The Network.

To the fullest extent permitted by law, in no event shall The Parties Involved have any liability whatsoever to any person for any direct or indirect loss, liability, cost, claim, expense or damage of any kind, whether in contract or in tort, including negligence, or otherwise, arising out of or related to the use of all or part of The Network.

The User understands and accepts that none of the physical and/or legal persons operating a core (validator or boot) node in The Network and, therefore, being part of the consensus protocol, is legally committed to maintaining those nodes. When a percentage of these nodes are suddenly turned off, The Network may suffer interruptions of service. The User operating a regular/access node and any third party using the network through The User's regular/access nodes understand that LACChain will be neither responsible nor accountable for any malfunction or damage caused by the disconnection of the core nodes.

The User will not run any application or solution in The Network when The Network is a necessary component for either the application or the solution.

The User will not run any application or solution, nor register any information or data in The Network, that involves illegal activities or practices that can be considered against the law.

The User must preserve the data privacy of all the physical and legal persons whose information is directly or indirectly registered in The Network through The User's regular node.

The User is not allowed to promote the use of The Network for commercial or institutional purposes without including the following disclaimer:

All the operation and use of the LACChain test-net blockchain network by ________ (The User) is carried out under the entire responsibility and discretion of ________ (The User). Under no circumstance, any responsibility or accountability is extended to any other physical or legal person operating the network, or the community behind LACChain. The LACChain blockchain network is currently at a test-net stage and, therefore, it must not be used for applications or solutions in production, or for monetization purposes. The expected use of the network is for testing and learning purposes, as there could be occasional malfunctioning or interruption of service. Over the coming months, LACChain will develop and incorporate the legal policies, the governance frameworks and the technology upgrades for the release of the pre main-net and the main-net that will enable the operation in production.

   

LICENSE

This work is licensed under a Creative Commons license.

besu-pro-testnet's People

Contributors

allendemarcos, antonio-leal-batista, ccamaleon5, davux, diega, edumar111, elranu, eum602, mdelrociomm, turupawn, victorns69


besu-pro-testnet's Issues

Excessive filesystem usage

The deployed node's filesystem fills up unusually fast. This causes a "No space left on device" error when trying to perform a store action on the VM where the node is mounted.
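When this happens, a useful first step is to see what is actually consuming the space; a generic sketch (the /root/lacchain path is the node directory used elsewhere in this repository and is an assumption here):

```shell
# Show overall free space, then the heaviest directories under the node's home.
df -h /
du -sh /root/lacchain/* 2>/dev/null | sort -rh | head -n 10
```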

TLS - Inter Orion communication failure when new IP is assigned to any of them

When a new IP is assigned to an Orion node (say, Node A) that has already established mutually authenticated connections with other Orion nodes (Node B, Node C), communication does not succeed until the orion_ca.pem file has been fully updated on the other Orion nodes (Nodes "B" and "C").
For instance, suppose only node "B" updates its orion.conf file with the new Node "A" IP address; in that case, communication between "A" and "B" won't succeed because node "C" still advertises the old IP address of node "A".

The only workaround I found is that all the other nodes ("B" and "C") update their orion.conf file.

This issue has a huge impact, especially in scenarios where many Orion nodes have complex interactions.

Max Limit Peers

Problem

When the node reaches the maximum connection limit (by default 25), it begins to disconnect. This affects performance while the node is disconnecting from the other peers.

Solution

Our recommendation for these cases is to limit connections to bootnodes only, which can be achieved as follows:
In the start-pantheon.sh file located in the /lacchain folder, add the flag
--permissions-nodes-config-file-enabled

With which the file would look like this:

#!/bin/bash
LOG4J_CONFIGURATION_FILE=/root/lacchain/log.xml pantheon --data-path /root/lacchain/data --genesis-file=/root/lacchain/data/genesis.json --network-id 648529 --permissions-nodes-config-file-enabled --permissions-nodes-contract-enabled --permissions-nodes-contract-address=0x0000000000000000000000000000000000009999 --config-file=/root/lacchain/config.toml

Additionally, in the permissions_config.toml file located in the /lacchain/data folder, add the bootnodes listed in config.toml:
nodes-whitelist = [ bootnodes list from config.toml ]

Finally, restart the besu service:
service pantheon restart
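Putting the two changes together, permissions_config.toml would contain a list along these lines (the enode URLs are placeholders; copy the real bootnode entries from your config.toml):

```
nodes-whitelist=["enode://<bootnode-1-public-key>@<bootnode-1-ip>:30303","enode://<bootnode-2-public-key>@<bootnode-2-ip>:30303"]
```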

Separate data and configuration files

Current context:
Currently, configuration files, the database and logs are located in the same directory.

According to standard practice, variable data should be kept separately (link).

For example, the blockchain database can be put on a separate disk mounted at an appropriate location such as /var/cache/besu.

Desynchronization due to Orion

Problem

It has been observed that when the gas limit is set to execute a private transaction, the desynchronization failure also occurs (tested in version 1.4.0 and besu 1.3.6).

The problem was detected in nodes that had orion and besu running.

  • It was identified that bootnodes running orion and besu in the same environment caused other nodes on the network that also ran orion and besu to fall out of sync.
  • Nodes that only ran besu were not affected.

Workaround:

In case Orion is not necessary

Disable the orion service in the bootnodes' configuration and restart the "pantheon" service.
a) Go to the file /root/lacchain/config.toml and comment out the following lines:

privacy-enabled = true
privacy-url = "http://127.0.0.1:4444"
privacy-public-key-file = "/root/lacchain/orion/keystore/nodeKey.pub" 

b) Then restart the pantheon service:
systemctl restart pantheon

In case Orion is necessary

a) Go to the file /root/lacchain/config.toml and comment out the following lines:

#privacy-enabled = true
#privacy-url = "http://127.0.0.1:4444"
#privacy-public-key-file = "/root/lacchain/orion/keystore/nodeKey.pub"

b) Then restart the pantheon service:
systemctl restart pantheon
c) Wait for the besu node to finish synchronizing.
d) Once the node is synchronized, return to the /root/lacchain/config.toml file and uncomment the lines that were commented in step "a".
e) Next, restart the pantheon service again:
systemctl restart pantheon
Verify that the node is now properly synchronized.

Centos users cannot interact with besu through nginx in some smart contract deployments

Context: Using nginx (https) to securely access a node through RPC
OS: CentOS 7
Problem: Users usually have no problems performing simple transactions, but sometimes they cannot deploy certain smart contracts.
Possible root cause: The problem seems to be related to the user assigned in nginx.conf. The currently assigned user is "nobody", which does not have enough permissions to access temporary files in /var/lib/nginx.

Possible solution: Change the assigned user in /etc/nginx/nginx.conf so that user "nginx" appears there.

Private Transactions "not found" after incoming external private marker transaction caused errors in Besu Node

environment

  • I have 2 pairs of nodes (orion-besu) configured to send private transactions
    • Node A: Besu Node A, Orion Node A
    • Node B: Besu Node B, Orion Node B

VMs: On Google Cloud

  • Node A:
    • Besu Node A: 2vcpu, 8GB RAM
    • Orion Node A: 2vcpu, 8GB RAM
  • Node B:
    • Besu Node B: 2vcpu, 8GB RAM
    • Orion Node B: 2vcpu, 8GB RAM

Scenario:

  • I shut down Node A, i.e., Besu Node A and Orion Node A
  • I shut down Orion Node B
  • I did not disable the orion flags on Besu Node B
  • After some days, an incoming private marker transaction (TX "n") arrived via p2p at Besu Node B
  • Naturally, an error appeared in the ERROR-level logs, stating that Orion Node B is not reachable:
    2021-09-23T15:59:05.658+0000 ERROR Can not communicate with enclave is it up? org.hyperledger.besu.enclave.EnclaveIOException: Enclave Communication Failed at org.hyperledger.besu.enclave.VertxRequestTransmitter.sendRequest(VertxRequestTransmitter.java:88) ~[enclave-20.10.2.jar:?] at org.hyperledger.besu.enclave.VertxRequestTransmitter.post(VertxRequestTransmitter.java:42) ~[enclave-20.10.2.jar:?] at org.hyperledger.besu.enclave.Enclave.post(Enclave.java:150) ~[enclave-20.10.2.jar:?] at org.hyperledger.besu.enclave.Enclave.receive(Enclave.java:81) ~[enclave-20.10.2.jar:?] at org.hyperledger.besu.ethereum.mainnet.precompiles.privacy.PrivacyPrecompiledContract.getReceiveResponse(PrivacyPrecompiledContract.java:235) ~[besu-core-20.10 .2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.precompiles.privacy.PrivacyPrecompiledContract.compute(PrivacyPrecompiledContract.java:115) ~[besu-core-20.10.2.jar:20.1 0.2] at org.hyperledger.besu.ethereum.mainnet.MainnetMessageCallProcessor.executePrecompile(MainnetMessageCallProcessor.java:133) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.MainnetMessageCallProcessor.start(MainnetMessageCallProcessor.java:61) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.AbstractMessageProcessor.process(AbstractMessageProcessor.java:163) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.MainnetTransactionProcessor.process(MainnetTransactionProcessor.java:440) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.MainnetTransactionProcessor.processTransaction(MainnetTransactionProcessor.java:354) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.AbstractBlockProcessor.processBlock(AbstractBlockProcessor.java:147) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.AbstractBlockProcessor.processBlock(AbstractBlockProcessor.java:40) ~[besu-core-20.10.2.jar:20.10.2] at 
org.hyperledger.besu.ethereum.mainnet.PrivacyBlockProcessor.processBlock(PrivacyBlockProcessor.java:96) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.BlockProcessor.processBlock(BlockProcessor.java:61) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.MainnetBlockValidator.validateAndProcessBlock(MainnetBlockValidator.java:96) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.mainnet.MainnetBlockImporter.importBlock(MainnetBlockImporter.java:45) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.core.BlockImporter.importBlock(BlockImporter.java:44) ~[besu-core-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.eth.sync.fullsync.FullImportBlockStep.accept(FullImportBlockStep.java:59) ~[besu-eth-20.10.2.jar:20.10.2] at org.hyperledger.besu.ethereum.eth.sync.fullsync.FullImportBlockStep.accept(FullImportBlockStep.java:31) ~[besu-eth-20.10.2.jar:20.10.2] at org.hyperledger.besu.services.pipeline.CompleterStage.run(CompleterStage.java:37) ~[besu-pipeline-20.10.2.jar:20.10.2] at org.hyperledger.besu.services.pipeline.Pipeline.lambda$runWithErrorHandling$3(Pipeline.java:130) ~[besu-pipeline-20.10.2.jar:20.10.2] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.util.concurrent.ExecutionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.128.0.12:4444
  • In the "INFO" level log of that node a critical error appeared:

    2021-09-23T15:59:08.304+0000 INFO Saving announced block 30347286 (0x1b5ebbbce65885bb013d87e0ec4d064cf3f2db9c13a934efc6d7654a38d9bfde) for future import
    2021-09-23T15:59:10.138+0000 INFO Saving announced block 30347287 (0x1f4aa10b620bec7a6e47049acfad18637382b1e4797f33683ab232aac2549d83) for future import
    2021-09-23T15:59:12.186+0000 INFO Saving announced block 30347288 (0xf4b460d182945b46c632ee50e9b9f7f091560bdd30c01677b24dba61b173b216) for future import
    2021-09-23T15:59:14.036+0000 INFO Block processing error: transaction invalid 'INTERNAL_ERROR'. Block 0xbf2141be6b8441cd01e11dcd89835c9aa0829d8a2de4555052bcf9faab93d1a5 Transaction 0x86f5cd379b4927da5ac3b6de4107d03270cf9dba4fbcf6c13f8208d18bc5e0f9
    2021-09-23T15:59:14.145+0000 INFO Saving announced block 30347289 (0x5ba574adbc7304a4a83ae6ea698f3e7fa0f176b8bb7cb372409245a10ce04bf5) for future import
    2021-09-23T16:28:06.344+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T16:28:11.345+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T16:28:11.545+0000 INFO No sync target, waiting for peers: 1
    2021-09-23T16:47:16.687+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T16:47:21.688+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T16:47:26.689+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T16:47:28.742+0000 INFO No sync target, waiting for peers: 1
    2021-09-23T16:54:58.785+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T16:54:59.642+0000 INFO No sync target, waiting for peers: 1
    2021-09-23T17:21:09.797+0000 INFO No sync target, waiting for peers: 0
    2021-09-23T17:21:14.799+0000 INFO No sync target, waiting for peers: 0

  • After that, Besu Node B stopped synchronizing even though it had other peer connections
  • The way I found to make it start synchronizing again was disabling the orion flags and restarting Besu Node B
  • After Besu Node B finished the synchronization process, I re-enabled the orion flags on Besu Node B and turned Orion Node B back on
  • I started Besu Node A (the orion flags were enabled but Orion Node A wasn't running)
  • During synchronization, Besu Node A reached the imported block where TX "n" was included. Because the node had the orion flags enabled but was unable to communicate with Orion Node A, since it was not running, Besu Node A showed the following critical message:

    2021-09-24T04:44:12.601+0000 INFO Import reached block 30346400 (0x7003..3297), 0.868 Mg/s, Peers: 6
    2021-09-24T04:44:40.652+0000 INFO Import reached block 30346600 (0xaf67..a314), 0.904 Mg/s, Peers: 6
    2021-09-24T04:45:12.061+0000 INFO Import reached block 30346800 (0xbf90..88c7), 0.953 Mg/s, Peers: 6
    2021-09-24T04:45:43.858+0000 INFO Import reached block 30347000 (0x7690..704b), 0.950 Mg/s, Peers: 6
    2021-09-24T04:46:16.077+0000 INFO Import reached block 30347200 (0xe63f..ffae), 0.991 Mg/s, Peers: 6
    2021-09-24T04:46:34.631+0000 INFO Block processing error: transaction invalid 'INTERNAL_ERROR'. Block 0xbf2141be6b8441cd01e11dcd89835c9aa0829d8a2de4555052bcf9faab93d1a5
    Transaction 0x86f5cd379b4927da5ac3b6de4107d03270cf9dba4fbcf6c13f8208d18bc5e0f9
    2021-09-24T04:46:37.039+0000 INFO Found common ancestor with peer 0xfe50d1c3d1ebbc37cd... at block 30347272
    2021-09-24T04:46:38.432+0000 INFO Block processing error: transaction invalid 'INTERNAL_ERROR'. Block 0xbf2141be6b8441cd01e11dcd89835c9aa0829d8a2de4555052bcf9faab93d1a5 Transaction 0x86f5cd379b4927da5ac3b6de4107d03270cf9dba4fbcf6c13f8208d18bc5e0f9
    2021-09-24T04:46:40.603+0000 INFO Found common ancestor with peer 0xb97f1b94e3a5e78de9... at block 30347272
    2021-09-24T04:46:41.963+0000 INFO Block processing error: transaction invalid 'INTERNAL_ERROR'. Block 0xbf2141be6b8441cd01e11dcd89835c9aa0829d8a2de4555052bcf9faab93d1a5 Transaction 0x86f5cd379b4927da5ac3b6de4107d03270cf9dba4fbcf6c13f8208d18bc5e0f9
    2021-09-24T04:46:44.121+0000 INFO Found common ancestor with peer 0x140626be59e4f2c57a... at block 30347272
    2021-09-24T04:46:45.366+0000 INFO Block processing error: transaction invalid 'INTERNAL_ERROR'. Block 0xbf2141be6b8441cd01e11dcd89835c9aa0829d8a2de4555052bcf9faab93d1a5 Transaction 0x86f5cd379b4927da5ac3b6de4107d03270cf9dba4fbcf6c13f8208d18bc5e0f9
    2021-09-24T04:47:22.383+0000 INFO No sync target, waiting for peers: 1
    2021-09-24T04:47:24.585+0000 INFO No sync target, waiting for peers: 2
    2021-09-24T05:00:44.650+0000 INFO No sync target, waiting for peers: 1
    2021-09-24T05:00:49.651+0000 INFO No sync target, waiting for peers: 1
    2021-09-24T05:00:54.652+0000 INFO No sync target, waiting for peers: 1
    2021-09-24T05:00:55.859+0000 INFO No sync target, waiting for peers: 2
    2021-09-24T05:10:15.905+0000 INFO No sync target, waiting for peers: 1
    2021-09-24T05:10:20.905+0000 INFO No sync target, waiting for peers: 1

  • The way I found to make it start synchronizing again was disabling the orion flags and restarting Besu Node A
  • After Besu Node A finished the synchronization process, I re-enabled the orion flags on Besu Node A and turned Orion Node A back on
  • All seemed good, but after trying to send a private transaction between Nodes A and B, the node returned the following message, as if it were unable to find a private state that had been created before this whole issue happened.
    Caused by: org.web3j.tx.exceptions.ContractCallException: Empty value (0x) returned from contract at org.web3j.tx.Contract.executeCallSingleValueReturn(Contract.java:313) at org.web3j.tx.Contract.lambda$executeRemoteCallSingleValueReturn$1(Contract.java:397) at org.web3j.protocol.core.RemoteCall.send(RemoteCall.java:42) at com.everis.blockchain.cadena.Cadena.getVisibility(Cadena.java:135) at com.cadena.provider.config.blockchain.ApplicationBlockChain.setVissibleRMA(ApplicationBlockChain.java:156) at com.cadena.provider.config.blockchain.ApplicationBlockChain.instanceICadena(ApplicationBlockChain.java:144) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ... 28 common frames omitted


    note: I noticed that I was still able to create new privacy groups and even deploy new private smart contracts on a newly created privacy group, but I was totally unable to recover past states.
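
The disable/re-enable cycle described above can be sketched as a pair of shell helpers. This is only a sketch: the config path (`/root/lacchain/config.toml`) and the assumption that the Orion-related options appear as `privacy-*` keys in a Besu TOML config are mine; check your own unit file or config before using it.

```shell
# Hypothetical config path; adjust to your installation
CONF=${CONF:-/root/lacchain/config.toml}

# Comment out every privacy-* key (e.g. privacy-enabled, privacy-url)
disable_privacy() { sed -i 's/^\(privacy-[a-z-]*=\)/#\1/' "$1"; }
# Uncomment them again
enable_privacy()  { sed -i 's/^#\(privacy-[a-z-]*=\)/\1/' "$1"; }

# Workaround sequence (run as root):
# disable_privacy "$CONF" && systemctl restart besu   # let the node resync without Orion
# enable_privacy  "$CONF" && systemctl start orion && systemctl restart besu
```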

Issues when installing Besu nodes behind proxy environment

When nodes are behind an HTTP proxy, some additional steps are needed before the installation.
Configure the environment variables:

export HTTPS_PROXY="http://YOUR_PROXY_HOST:YOUR_PROXY_PORT"
export HTTP_PROXY="http://YOUR_PROXY_HOST:YOUR_PROXY_PORT"
export http_proxy="http://YOUR_PROXY_HOST:YOUR_PROXY_PORT"
export https_proxy="http://YOUR_PROXY_HOST:YOUR_PROXY_PORT"

export _JAVA_OPTIONS='-Dhttp.proxyHost=YOUR_PROXY_HOST -Dhttp.proxyPort=YOUR_PROXY_PORT -Dhttps.proxyHost=YOUR_PROXY_HOST -Dhttps.proxyPort=YOUR_PROXY_PORT'

If Ansible fails with Docker-related errors, try the following:

mkdir -p /etc/systemd/system/docker.service.d

# Create an http-proxy.conf file with the following content:
vi /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://192.168.56.85:3128"

# Do the same for HTTPS:
vi /etc/systemd/system/docker.service.d/https-proxy.conf

[Service]
Environment="HTTPS_PROXY=http://192.168.56.85:3128"

# Finally, reload the daemon and restart the Docker service:
systemctl daemon-reload
systemctl restart docker

To verify that the Docker environment variables were configured correctly, run:
systemctl show --property=Environment docker

TASK [lacchain-orion-node : Download Orion Binaries] *******************************************************************fatal: [23.23.107.181]: FAILED! => {"changed": false, "dest": "/tmp/orion.tar.gz", "elapsed": 0, "msg": "Request failed", "response": "HTTP Error 403: Forbidden", "status_code": 403, "url": "https://bintray.com/consensys/binaries/download_file?file_path=orion-1.6.0.tar.gz"}

Hi guys.

I'm trying to launch the playbook site-lacchain-orion.yml, but the TASK [lacchain-orion-node : Download Orion Binaries] shows this error: "fatal: [23.23.107.181]: FAILED! => {"changed": false, "dest": "/tmp/orion.tar.gz", "elapsed": 0, "msg": "Request failed", "response": "HTTP Error 403: Forbidden", "status_code": 403, "url": "https://bintray.com/consensys/binaries/download_file?file_path=orion-1.6.0.tar.gz"}"

When I check the URL, I get nothing back.

Also, when I try to download the source and compile it from https://docs.orion.consensys.net/en/stable/HowTo/Build-From-Source/ on Amazon Linux 2 (CentOS) with Corretto, the build fails with "Could not initialize class org.codehaus.groovy.runtime.InvokerHelper".

The README file also mentions this.

So I don't have any more ideas; could you please help me with this?

Getting an error /tmp/....so: failed to map segment from shared object

During the installation process, Ansible throws an error stating that it is not possible to create the public/private keys for Orion.

This happens because Java needs to load some executable files from /tmp, and some systems mount /tmp without execute permission.

To solve this, allow execution on the /tmp folder. One way to do that:

  • vi /etc/fstab
  • Find all references to /tmp and make sure they do NOT include "noexec", so your /tmp entry ends up looking something like:
/tmp    xfs     defaults,nodev,nosuid,exec

/tmp /var/tmp none rw,exec,nosuid,nodev,bind 0 0
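
To check quickly whether /tmp is the culprit before editing /etc/fstab, a small helper like the one below can inspect the mount options. The function name and structure are mine, not part of the original scripts; after editing fstab you can apply the change without rebooting with `mount -o remount,exec /tmp` (assuming /tmp is its own mount).

```shell
# Return success (0) if the given mount point carries the noexec flag.
# $1 = mount point, $2 = mounts table (defaults to /proc/mounts)
has_noexec() {
  awk -v mp="$1" '$2 == mp { print $4 }' "${2:-/proc/mounts}" | grep -qw noexec
}

if has_noexec /tmp; then
  echo "/tmp is mounted noexec: fix /etc/fstab, then: mount -o remount,exec /tmp"
else
  echo "/tmp allows executables"
fi
```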

PS: This problem also applies to Besu if /tmp does not have the proper permissions.

Users don't know what their enode is

It would be helpful to have a file where the enode is written.

Users can find their enode manually with the following steps:

  • Enter your remote machine via ssh.
  • Become root: sudo -i
  • Execute:
key=$(pantheon --data-path=/root/lacchain/data public-key export-address --to=/root/lacchain/data/nodeAddress | grep -oE "0x[A-Fa-f0-9]*" | sed 's/0x//')
ip=$(dig +short myip.opendns.com @resolver1.opendns.com 2>/dev/null || curl -s --retry 2 icanhazip.com)
port=60606
echo "enode://${key}@${ip}:${port}" > /root/lacchain/data/enode
  • Now print your enode: cat /root/lacchain/data/enode

Users do not see their node on http://ethstats.lacchain.io/

After installing a node using the Ansible scripts, users cannot see their running node on http://ethstats.lacchain.io/. It would be great if an Ansible task checked that the ethstats service is running.

Manually users can solve this issue by running the following commands:

  • Enter the remote VM via ssh.
  • Become root: sudo -i
  • Check whether the ethstats container is running: docker ps
  • If you do not see any container related to ethstats, recreate it by running:
  • node_name=node_name_you_used_in_inventory;node_email=email_you_entered_in_inventory; mkdir -p /opt/ethstats-cli && docker run -d --restart always --net host -v /opt/ethstats-cli/:/root/.config/configstore/ alethio/ethstats-cli --register --account-email ${node_email} --node-name ${node_name} --server-url http://ethstats.lacchain.io:3000 --client-url ws://127.0.0.1:4546

Make sure to replace node_name and node_email with your own values.

  • Finally, make sure the ethstats container is running: docker ps

Add Path to ansible scripts

It is necessary to add PATH as a variable before deploying a new node, because some organizations deploy nodes on different disks.

Network is halted when updating validator nodes

The problem comes up in the following scenario: I have an IBFT2 network with onchain permissioning.
At the beginning everything works well, but if for some reason the network stops validating blocks (e.g. more than 1/3 of the validators go offline), then bringing the offline validators back up does not make the network resume validating blocks.
The way I found to start the network again was to restart the validators without:

--permissions-accounts-contract-enabled
--permissions-accounts-contract-address=0x0000000000000000000000000000000000008888
--permissions-nodes-contract-enabled
--permissions-nodes-contract-address=0x0000000000000000000000000000000000009999

The network then started without permissioning.

Finally, I restarted the nodes gradually, in order not to lose connectivity between validators, this time starting each node with the permissioning flags.

If the network fails, the validators should be able to start again with onchain permissioning without any extra steps.
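
The toggle-and-restart procedure above can be sketched with two helpers that comment the permissioning options in and out. This assumes the flags live as `permissions-*` keys in a Besu TOML config at an assumed path; if your validators take the flags on the command line or in a systemd unit, adapt accordingly.

```shell
# Hypothetical config path; adjust to your installation
CONF=${CONF:-/root/lacchain/config.toml}

# Comment out all permissions-* keys so stalled validators can start
disable_permissioning() { sed -i 's/^\(permissions-[a-z-]*=\)/#\1/' "$1"; }
# Uncomment them once the chain is producing blocks again
enable_permissioning()  { sed -i 's/^#\(permissions-[a-z-]*=\)/\1/' "$1"; }

# On each validator, gradually:
# disable_permissioning "$CONF" && systemctl restart besu
# ...wait for block production to resume, then:
# enable_permissioning  "$CONF" && systemctl restart besu
```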

Follow this issue on Hyperledger Besu Rocket Chat

Orion Bootnodes do not work properly

Orion bootnodes are not working properly.

Scenario 1:
Orion Node "A" points to Orion Node "a" as bootnode.
Orion Node "B" points to Orion Node "a" as bootnode.

If Orion Node "A" tries to send a message to Orion Node "B", the message is NOT received by Node "B".
If Orion Node "A" sends a message to Orion bootnode "a", it is received.

========================
Scenario 2:
Orion Node "A" points to Orion Node "a" and Orion Node "b" as bootnodes.

If Orion Node "A" sends a message to Orion bootnode "a" then the message is received.
If Orion Node "A" sends a message to Orion bootnode "b" then the message is NOT received.

======================
If the scenarios above are tested with bootnodes other than the four bootnodes in the LACChain architecture, everything works as expected.

Install Orion in a separate instance

To stay aligned with the latest updates in the official Besu/Orion documentation, it is recommended to install Orion in a separate instance (independent of the Besu node). It would be useful to provide a separate playbook (for Orion) for partners who want to try it.

Migrate Besu/Orion node to a new VM

It would be useful to have scripts that allow users to migrate a Besu node to a new VM.
Basically, this requires backing up the Besu and Orion private keys as well as the Orion private database.
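
A minimal backup helper for such a migration could look like the sketch below. The relative paths (`data/key` for the Besu node key, `orion/keys` and `orion/routerdb` for the Orion key pair and private database) are assumptions based on this repo's `/root/lacchain` layout; verify them against your own installation before relying on this.

```shell
# Archive the key material and private database needed to migrate a node.
# $1 = node home directory (e.g. /root/lacchain), $2 = output archive
backup_node() {
  tar -czf "$2" -C "$1" data/key orion/keys orion/routerdb
}

# On the old VM:
# backup_node /root/lacchain /root/lacchain-backup.tar.gz
# scp /root/lacchain-backup.tar.gz user@NEW_VM:/root/
# On the new VM, extract into the node home before starting the services:
# tar -xzf /root/lacchain-backup.tar.gz -C /root/lacchain
```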

Install openjdk instead of Oracle JDK

To increase automation and align with the ConsenSys Quorum recommendations, switching to OpenJDK is suggested.
This change avoids the manual download of the Oracle JDK before running the Ansible scripts.

Configure Orion and Besu to write and rotate logs properly

Currently:

  • Besu service appends logs indefinitely to lacchain/logs/ folder
  • Orion service writes logs to stdout, which goes to /var/log/syslog by default

For both services:

  • logs should be written to a log file
  • logs should be rotated periodically
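
One possible way to get both requirements is a logrotate policy for the Besu log directory; the snippet below is only a sketch (the path follows this repo's `/root/lacchain/logs` layout, and the retention values are arbitrary). Orion's stdout would additionally need to be redirected to a file, e.g. via `StandardOutput=append:...` in its systemd unit, before a similar policy can apply.

```
# /etc/logrotate.d/besu (sketch; adjust path and retention to your setup)
/root/lacchain/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```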

Use LACChain node taxonomy

Documentation currently uses a node taxonomy based on validator nodes, regular nodes, and bootnodes. LACChain, however, uses a different taxonomy, which should be used instead in the documentation and scripts.

Ethstats does not show my node

Problem

Node does not appear in ethstats.

Solution

List all containers
docker ps -a

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                     PORTS               NAMES
6be3c7a06fb0        alethio/ethstats-cli   "./bin/ethstats-cli.…"   7 hours ago         Exited (1) 2 minutes ago                       ecstatic_sanderson

Remove current Ethstats client
docker rm -f <CONTAINER_ID>

Remove current private key. A new one is automatically generated
rm -rf /opt/ethstats-cli/ethstats-cli.json

Run ethstats-cli docker container

docker run -dit --restart always --net host -v /opt/ethstats-cli/:/root/.config/configstore/ alethio/ethstats-cli --register --account-email "<PUT_YOUR_EMAIL>" --node-name "<PUT_YOUR_NODE_NAME>" --server-url http://35.236.236.77:3000 --client-url ws://127.0.0.1:4546

Generate new certificates or renew old ones

It would be useful if users had a way to generate new certificates or renew old ones when they expire.

As a workaround, the following steps can be followed:

  1. Log in as root:
sudo -i
  2. Move the Orion certificates folder to another location, for example:
mv /root/lacchain/orion/certificates/  ~
  3. Run the Ansible installer to update the writer node. Replace the private key and remote user with your own credentials, and take into account the Orion and Besu versions used in your private Orion network as well as the Besu version configured in the inventory file (located in your local repository):
ansible-playbook -i inventory --private-key=~/.ssh/id_ecdsa -u remote_user site-lacchain-update-writer.yml

After the installation finishes, a new certificates folder is created in /root/lacchain/orion.

  4. Now execute the following:
cp -r ~/certificates/CAs/* /root/lacchain/orion/certificates/CAs/
  5. Restart your Orion node:
systemctl restart orion
  6. Verify everything is running correctly:
systemctl status orion

Java 11 can't be installed automatically anymore

Oracle no longer allows automatic downloads of JDK 11. You now need to download the .tar.gz file from the website and deploy it on the machine manually.

The documentation and the Ansible code should be updated to reflect that.

Node gets stuck in block 9,700,000

Problem

The node is unable to sync and gets stuck at block 9,700,000. The problem occurs in Besu versions lower than 1.5.1.

Solution

Update Besu to version 1.5.1 or higher. The problem and its solution are detailed in besu-1149.

Tests after each deployment/update on a node

It would be good practice to run some tests after each deployment/update on an existing node.
Tests could include checking that the node can communicate with other Besu/Orion nodes, sending private transactions, and so on.
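
A very small smoke test along these lines could query the node's JSON-RPC endpoint after a deployment. This is a sketch: the endpoint `http://127.0.0.1:4545` is an assumption (the scripts expose the websocket on 4546, so 4545 for HTTP is a guess; adjust to your configuration), and a real test suite would also cover peering and private transactions.

```shell
# Convert a 0x-prefixed hex string to decimal
hex_to_dec() {
  printf '%d\n' "$1"
}

# Ask the node for its current block number over JSON-RPC.
# $1 = RPC endpoint, e.g. http://127.0.0.1:4545 (assumed port)
check_node() {
  block=$(curl -s -X POST -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    "$1" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')
  [ -n "$block" ] || { echo "node not responding"; return 1; }
  echo "current block: $(hex_to_dec "$block")"
}

# Usage after a deployment/update:
# check_node http://127.0.0.1:4545
```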
