๐Ÿข ethereum-validators-monitoring (aka balval)

Consensus layer validator monitoring bot that fetches Lido or custom users' Node Operators keys from the Execution layer and checks their performance in the Consensus layer by balance delta, attestations, block proposals, and sync committee participation.

The bot has two separate working modes for fetching validator info: finalized and head. It writes data to Clickhouse, displays aggregates in Grafana dashboards, alerts about bad performance via Prometheus + Alertmanager, and routes notifications to a Discord channel via alertmanager-discord.

Working modes

You can switch the working mode by providing the WORKING_MODE environment variable with one of the following values:
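For example, to use the head mode described below, set the variable in your .env file:

WORKING_MODE=head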

finalized

Default working mode. The service fetches validator info from finalized states (the latest finalized epoch is 2 epochs behind head). It is more stable and reliable because all data is already finalized.

Pros:

  • No errors due to reorgs
  • Fewer rewards calculation errors
  • Accurate data in alerts and dashboard

Cons:

  • Processing is delayed by 2 epochs, so critical alerts are also raised with a 2-epoch delay
  • If finality is delayed for a long time, the app stops monitoring and waits for finality

head

Alternative working mode. The service fetches validator info from non-finalized states. It is less stable and reliable because the data is not finalized yet, and reorgs can cause calculation errors.

Pros:

  • Less delay in processing, so critical alerts are raised sooner
  • If finality is delayed for a long time, the app keeps monitoring and does not wait for finality

Cons:

  • Errors due to reorgs
  • More rewards calculation errors
  • Possible inaccurate data in alerts and dashboard

Dashboards

There are three dashboards in Grafana:

  • Validators - shows aggregated data about performance for all monitored validators
  • NodeOperator - shows aggregated data about performance for each monitored node operator
  • Rewards & Penalties - shows aggregated data about rewards, penalties, and missed rewards for each monitored node operator

Alerts

There are several default alerts which are triggered by Prometheus rules:

  • General:
    • 🔪 Slashed validators
    • 💸 Operators with negative balance delta
  • Proposals:
    • 📥 Operators with missed block propose
    • 📈📥 Operators with missed block propose (on possible high reward validators)
  • Sync:
    • 🔄 Operators with bad sync participation
    • 📈🔄 Operators with bad sync participation (on possible high reward validators)
  • Attestations:
    • 📝❌ Operators with missed attestation
    • 📝🐢 Operators with high inc. delay attestation
    • 📝🏷️ Operators with two invalid attestation property (head/target/source)
    • 📈📝❌ Operators with missed attestation (on possible high reward validators)

First run

You have two options to run this application: docker-compose or node, and two sources of the validator list: lido (by default) or file (see here).

Because the Lido contract on mainnet contains a lot of validators, fetching and saving them to local storage can take time (depending on the EL RPC host) and a lot of RAM. To avoid a heap out-of-memory error, you can pass the NODE_OPTIONS env var with the value --max-old-space-size=8192; once the application completes its first cycle, you can restart your instance without this env variable.
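A minimal sketch of passing this variable, assuming the commands from the sections below (where exactly you set it depends on whether you run via docker-compose or node):

# docker-compose: add to the app service environment
NODE_OPTIONS=--max-old-space-size=8192

# node: prefix the start command
NODE_OPTIONS=--max-old-space-size=8192 yarn start:prod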

Run via docker-compose

  1. Use .env.example.compose file content to create your own .env file
  2. Build the app image via docker-compose build app
  3. Set the owner for the validators registry sources:
chown -R 1000:1000 ./docker/validators
  4. Create the .volumes directory from the docker directory:
cp -r docker .volumes
chown -R 65534:65534 .volumes/prometheus
chown -R 65534:65534 .volumes/alertmanager
chown -R 472:472 .volumes/grafana
  5. Run docker-compose up -d
  6. Open the Grafana UI at http://localhost:8082/ (login: admin, password: MYPASSWORT) and wait for the first app cycle to complete before data is displayed

Run via node

  1. Install dependencies via yarn install
  2. Run yarn build
  3. Tweak your .env file based on .env.example.local
  4. Run Clickhouse to use as the bot DB:
docker-compose up -d clickhouse
  5. Set the owner for the validators registry sources:
chown -R 1000:1000 ./docker/validators
  6. Run yarn start:prod

Use custom validators list

By default, the monitoring bot fetches validator keys from the Lido contract, but you can monitor your own validators:

  1. Set the VALIDATOR_REGISTRY_SOURCE env var to file
  2. Create a file with keys following the example here (an illustrative sketch is shown below)
  3. Set the VALIDATOR_REGISTRY_FILE_SOURCE_PATH env var to <path to your file>
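A purely hypothetical sketch of such a file — the authoritative schema is the example file referenced above, and the operator name and key below are placeholders:

# hypothetical example; follow the repo's example file for the real schema
- name: My Node Operator
  keys:
    - '0x...'   # validator BLS public key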

If you want to implement your own source, it must implement the RegistrySource interface and be included in the RegistryModule providers.

Clickhouse data retention

By default, the storage keeps data with an infinite time to live. This can be changed by setting a TTL policy in Clickhouse:

# Mainnet
ALTER TABLE validators_summary MODIFY TTL toDateTime(1606824023 + (epoch * 32 * 12)) + INTERVAL 3 MONTH;

# Holesky
ALTER TABLE validators_summary MODIFY TTL toDateTime(1695902400 + (epoch * 32 * 12)) + INTERVAL 3 MONTH;

# Goerli
ALTER TABLE validators_summary MODIFY TTL toDateTime(1616508000 + (epoch * 32 * 12)) + INTERVAL 3 MONTH;
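To check that the policy has been applied, you can inspect the table definition with a standard Clickhouse statement:

SHOW CREATE TABLE validators_summary;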

Application Env variables


LOG_LEVEL - Application log level.

  • Required: false
  • Values: error / warning / notice / info / debug
  • Default: info

LOG_FORMAT - Application log format.

  • Required: false
  • Values: simple / json
  • Default: json

WORKING_MODE - Application working mode.

  • Required: false
  • Values: finalized / head
  • Default: finalized

DB_HOST - Clickhouse server host.

  • Required: true

DB_USER - Clickhouse server user.

  • Required: true

DB_PASSWORD - Clickhouse server password.

  • Required: true

DB_NAME - Clickhouse server DB name.

  • Required: true

DB_PORT - Clickhouse server port.

  • Required: false
  • Default: 8123

HTTP_PORT - Port for the Prometheus HTTP server of the application inside the container.

  • Required: false
  • Default: 8080
  • Note: if this variable is changed, it should also be updated in prometheus.yml (see the sketch below)
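A sketch of the matching scrape config in prometheus.yml, assuming the app is reachable as app on the default port (job name and target are illustrative):

scrape_configs:
  - job_name: 'ethereum-validators-monitoring'
    static_configs:
      - targets: ['app:8080']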

EXTERNAL_HTTP_PORT - Port of the application's Prometheus HTTP server exposed to the host.

  • Required: false
  • Default: HTTP_PORT

DB_MAX_RETRIES - Max retries for each query to DB.

  • Required: false
  • Default: 10

DB_MIN_BACKOFF_SEC - Min backoff for DB query retrier (sec).

  • Required: false
  • Default: 1

DB_MAX_BACKOFF_SEC - Max backoff for DB query retrier (sec).

  • Required: false
  • Default: 120

DRY_RUN - Run the application in dry mode. In this mode, it runs the main cycle once every 24 hours.

  • Required: false
  • Values: true / false
  • Default: false

NODE_ENV - Node.js environment.

  • Required: false
  • Values: development / production / staging / testnet / test
  • Default: development

ETH_NETWORK - Ethereum network ID used to connect to the execution layer RPC.

  • Required: true
  • Values: 1 (Mainnet) / 5 (Goerli) / 17000 (Holesky)

EL_RPC_URLS - Ethereum execution layer comma-separated RPC URLs.

  • Required: true

CL_API_URLS - Ethereum consensus layer comma-separated API URLs.

  • Required: true

CL_API_RETRY_DELAY_MS - Ethereum consensus layer request retry delay (ms).

  • Required: false
  • Default: 500

CL_API_GET_RESPONSE_TIMEOUT - Ethereum consensus layer GET response (header) timeout (ms).

  • Required: false
  • Default: 15000

CL_API_MAX_RETRIES - Ethereum consensus layer max retries for all requests.

  • Required: false
  • Default: 1 (the request will be executed once)

CL_API_GET_BLOCK_INFO_MAX_RETRIES - Ethereum consensus layer max retries for fetching block info. Independent of CL_API_MAX_RETRIES.

  • Required: false
  • Default: 1 (the request will be executed once)

FETCH_INTERVAL_SLOTS - Number of slots in an Ethereum consensus layer epoch.

  • Required: false
  • Default: 32

CHAIN_SLOT_TIME_SECONDS - Ethereum consensus layer slot time (sec).

  • Required: false
  • Default: 12

START_EPOCH - Ethereum consensus layer epoch from which the application starts processing.

  • Required: false
  • Default: 155000

DENCUN_FORK_EPOCH - Ethereum consensus layer epoch at which the Dencun hard fork was activated. This value must be set only for custom networks that support the Dencun hard fork. If it is not specified for a custom network, the network is assumed not to support Dencun. For officially supported networks (Mainnet, Goerli and Holesky) this value should be omitted.

  • Required: false

VALIDATOR_REGISTRY_SOURCE - Validators registry source.

  • Required: false
  • Values: lido (Lido NodeOperatorsRegistry module keys) / keysapi (Lido keys from multiple modules) / file
  • Default: lido

VALIDATOR_REGISTRY_FILE_SOURCE_PATH - Validators registry file source path.

  • Required: false
  • Default: ./docker/validators/custom_mainnet.yaml
  • Note: it makes sense to change the default value if VALIDATOR_REGISTRY_SOURCE is set to "file"

VALIDATOR_REGISTRY_LIDO_SOURCE_SQLITE_CACHE_PATH - Validators registry lido source sqlite cache path.

  • Required: false
  • Default: ./docker/validators/lido_mainnet.db
  • Note: it makes sense to change the default value if VALIDATOR_REGISTRY_SOURCE is set to "lido"

VALIDATOR_REGISTRY_KEYSAPI_SOURCE_URLS - Comma-separated list of URLs to Lido Keys API service.

  • Required: false
  • Note: will be used only if VALIDATOR_REGISTRY_SOURCE is set to "keysapi"

VALIDATOR_REGISTRY_KEYSAPI_SOURCE_RETRY_DELAY_MS - Retry delay for requests to Lido Keys API service (ms).

  • Required: false
  • Default: 500

VALIDATOR_REGISTRY_KEYSAPI_SOURCE_RESPONSE_TIMEOUT - Response timeout (ms) for requests to the Lido Keys API service.

  • Required: false
  • Default: 30000

VALIDATOR_REGISTRY_KEYSAPI_SOURCE_MAX_RETRIES - Max retries for each request to Lido Keys API service.

  • Required: false
  • Default: 2

VALIDATOR_USE_STUCK_KEYS_FILE - Use a file with a list of validators that are stuck and should be excluded from the monitoring metrics.

  • Required: false
  • Values: true / false
  • Default: false

VALIDATOR_STUCK_KEYS_FILE_PATH - Path to a file with a list of validators that are stuck and should be excluded from the monitoring metrics.

  • Required: false
  • Default: ./docker/validators/stuck_keys.yaml
  • Note: will be used only if VALIDATOR_USE_STUCK_KEYS_FILE is true

SYNC_PARTICIPATION_DISTANCE_DOWN_FROM_CHAIN_AVG - Distance (downward) from the blockchain sync participation average below which we consider our sync participation bad.

  • Required: false
  • Default: 0

SYNC_PARTICIPATION_EPOCHS_LESS_THAN_CHAIN_AVG - Number of epochs after which we consider our sync participation bad and alert about it.

  • Required: false
  • Default: 3

BAD_ATTESTATION_EPOCHS - Number of epochs after which we consider our attestation performance bad and alert about it.

  • Required: false
  • Default: 3

CRITICAL_ALERTS_ALERTMANAGER_URL - If passed, the application sends additional critical alerts about validator performance to Alertmanager.

  • Required: false

CRITICAL_ALERTS_MIN_VAL_COUNT - Critical alerts are sent only for Node Operators with a validator count greater than this value.

  • Required: false
  • Default: 100

CRITICAL_ALERTS_ALERTMANAGER_LABELS - Additional labels for critical alerts. Must be in JSON string format. Example - '{"a":"valueA","b":"valueB"}'.

  • Required: false
  • Default: {}

Application critical alerts (via Alertmanager)

In addition to alerts based on Prometheus metrics, you can receive special critical alerts based on beacon chain aggregates from the app.

You should pass the env var CRITICAL_ALERTS_ALERTMANAGER_URL=http://<alertmanager_host>:<alertmanager_port>.
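A sketch of a complete critical-alerts configuration combining the variables described above (host, port, and label values are illustrative):

CRITICAL_ALERTS_ALERTMANAGER_URL=http://alertmanager:9093
CRITICAL_ALERTS_MIN_VAL_COUNT=100
CRITICAL_ALERTS_ALERTMANAGER_LABELS='{"env":"mainnet"}'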

If ethereum_validators_monitoring_data_actuality < 1h, you will receive alerts from the table below.

Alert name | Description | If fired, repeat | If value increased, repeat
CriticalSlashing | At least one validator was slashed | instant | -
CriticalMissedProposes | More than 1/3 of blocks from Node Operator duties were missed in the last 12 hours | every 6h | -
CriticalNegativeDelta | More than 1/3 of Node Operator validators have a negative balance delta (between the current epoch and 6 epochs ago) | every 6h | every 1h
CriticalMissedAttestations | More than 1/3 of Node Operator validators have missed attestations in the last {{ BAD_ATTESTATION_EPOCHS }} epochs | every 6h | every 1h

Application metrics

WARNING: all metrics are prefixed with ethereum_validators_monitoring_

Metric | Labels | Description
validators | owner, status | Count of validators in chain
user_validators | nos_name, status | Count of validators for each user Node Operator
data_actuality | - | Application data actuality in ms
fetch_interval | - | The same as FETCH_INTERVAL_SLOTS
sync_participation_distance_down_from_chain_avg | - | The same as SYNC_PARTICIPATION_DISTANCE_DOWN_FROM_CHAIN_AVG
epoch_number | - | Current epoch number in app work process
contract_keys_total | - | Total user validators keys
steth_buffered_ether_total | - | Buffered Ether (ETH) in Lido contract
total_balance_24h_difference | - | Total user validators balance difference (24 hours)
validator_balances_delta | nos_name | Validators balance delta for each user Node Operator
validator_quantile_001_balances_delta | nos_name | Validators 0.1% quantile balances delta for each user Node Operator
validator_count_with_negative_balances_delta | nos_name | Number of validators with negative balances delta for each user Node Operator
validator_count_with_sync_participation_less_avg | nos_name | Number of validators with sync committee participation below average for each user Node Operator
validator_count_miss_attestation | nos_name | Number of validators that missed attestation for each user Node Operator
validator_count_invalid_attestation | nos_name, reason | Number of validators with invalid properties (head, target, source) or high inc. delay in attestation for each user Node Operator
validator_count_invalid_attestation_last_n_epoch | nos_name, reason, epoch_interval | Number of validators with invalid properties (head, target, source) or high inc. delay in attestation in the last BAD_ATTESTATION_EPOCHS epochs for each user Node Operator
validator_count_miss_attestation_last_n_epoch | nos_name, epoch_interval | Number of validators that missed attestation in the last BAD_ATTESTATION_EPOCHS epochs for each user Node Operator
validator_count_high_inc_delay_last_n_epoch | nos_name, epoch_interval | Number of validators with inc. delay > 2 in the last N epochs for each user Node Operator
validator_count_invalid_attestation_property_last_n_epoch | nos_name, epoch_interval | Number of validators with two invalid attestation properties (head or target or source) in the last N epochs for each user Node Operator
high_reward_validator_count_miss_attestation_last_n_epoch | nos_name, epoch_interval | Number of validators that missed attestation in the last BAD_ATTESTATION_EPOCHS epochs (with possible high reward in the future) for each user Node Operator
validator_count_with_sync_participation_less_avg_last_n_epoch | nos_name, epoch_interval | Number of validators with sync participation below average in the last SYNC_PARTICIPATION_EPOCHS_LESS_THAN_CHAIN_AVG epochs for each user Node Operator
high_reward_validator_count_with_sync_participation_less_avg_last_n_epoch | nos_name, epoch_interval | Number of validators with sync participation below average in the last SYNC_PARTICIPATION_EPOCHS_LESS_THAN_CHAIN_AVG epochs (with possible high reward in the future) for each user Node Operator
validator_count_miss_propose | nos_name | Number of validators that missed a proposal for each user Node Operator
high_reward_validator_count_miss_propose | nos_name | Number of validators that missed a proposal (with possible high reward in the future)
user_sync_participation_avg_percent | - | Average sync committee participation percent of user validators
chain_sync_participation_avg_percent | - | Average sync committee participation percent of all validators
operator_real_balance_delta | nos_name | Real operator balance change between epochs N and N-1
operator_calculated_balance_delta | nos_name | Calculated operator balance change based on rewards calculation
operator_calculated_balance_calculation_error | nos_name | Diff between calculated and real balance change
avg_chain_reward | duty | Average validator reward for each duty
operator_reward | nos_name, duty | Operator reward for each duty
avg_chain_missed_reward | duty | Average validator missed reward for each duty
operator_missed_reward | nos_name, duty | Operator missed reward for each duty
avg_chain_penalty | duty | Average validator penalty for each duty
operator_penalty | nos_name, duty | Operator penalty for each duty
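For example, assuming the prefix above, the number of validators with missed attestations for a given operator can be queried in Prometheus like this (the operator name is illustrative):

ethereum_validators_monitoring_validator_count_miss_attestation{nos_name="Operator 1"}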

Release flow

To create new release:

  1. Merge all changes to the master branch
  2. Navigate to Repo => Actions
  3. Run the "Prepare release" action against the master branch
  4. When the action execution is finished, navigate to Repo => Pull requests
  5. Find the pull request named "chore(release): X.X.X", review it, and merge it with "Rebase and merge" (or "Squash and merge")
  6. After the merge, the release action will be triggered automatically
  7. Navigate to Repo => Actions and check the latest action logs for further details

ethereum-validators-monitoring's People

Contributors

alexanderlukin, choooze, colfax23, dependabot[bot], dgusakov, github-actions[bot], infloop, madlabman, skhomuti, vgorkavenko


ethereum-validators-monitoring's Issues

Account rare events

When calculating rewards and penalties we should account for:

  • rewards and penalties for slashings
  • inactivity leak case

Error while doing CL API request

Description

I've faced plenty of error logs with different beacon chain clients. Also, the START_EPOCH env was set to the last finalized epoch before restarting the service/container.

Probably it tries to fetch old data from the CL, which leads to this issue.

Configs

The .env files are filled like the following:

HTTP_PORT=8080
DB_PORT=8123
DB_HOST=http://clickhouse
DB_USER=xxxx
DB_PASSWORD=xxxx
DB_NAME=xxxx

LOG_FORMAT=simple

START_EPOCH=18xxxx

EL_RPC_URLS=xxxx

ETH_NETWORK=5

CL_API_URLS=xxxx

VALIDATOR_REGISTRY_SOURCE=file

VALIDATOR_REGISTRY_FILE_SOURCE_PATH=./docker/validators/xxx_goerli.yaml

Logs

Lighthouse - v4.1.0-693886b/x86_64-linux:

2023-07-13 09:11:56 info: Latest finalized epoch [189401]. Waiting [12] seconds for next finalized epoch [189402] 
2023-07-13 09:12:08 info: Last processed epoch [189138] 
2023-07-13 09:12:08 info: Next epoch to process [189402] 
2023-07-13 09:12:09 info: Found next not missed slot [6060896] root [0x47e3fdf4d9eaf62724c9d96895f99540c0af134680106e6fe33f13cdc047ef6e] after slot [6060895] 
2023-07-13 09:12:09 info: Block [6060895] is missed. Returning previous not missed block header [6060894] 
2023-07-13 09:12:09 info: Latest finalized epoch [189402]. Next epoch to process [189402] 
2023-07-13 09:12:09 info: Epoch [189402] is chosen to process with state slot [6060894] with root [0x254055c9b51e6dca53cd1a1b6f76610d9df5d010c0087266242bee9ad443db3c] instead of slot [6060895]. Difference [-1] slots 
2023-07-13 09:12:09 info: Prefetching blocks header, info and write to cache 
2023-07-13 09:12:09 info: Checking duties of validators 
2023-07-13 09:12:09 info: Processing attestations from blocks info 
2023-07-13 09:12:09 info: Getting sync committee participation info 
2023-07-13 09:12:09 info: Start getting proposers duties info 
2023-07-13 09:12:09 info: Getting duty dependent root for epoch 189402 
2023-07-13 09:12:09 info: Getting withdrawals for epoch 
2023-07-13 09:12:09 info: Getting all validators state 
2023-07-13 09:12:09 info: Successful reading validators registry file source.
Module: 0 | Operator 1: [1],Operator 2: [1],AWS: [5],AWS 2: [10],ETH v1.1.0: [1] 
2023-07-13 09:12:09 info: Proposer Duty root: 0x96eb402905faaab375dffa175d762cb5c9baadffa62c6fe315e9e5dc96dceac8 
2023-07-13 09:12:09 info: Getting possible high reward validator indexes 
2023-07-13 09:12:09 info: Getting duty dependent root for epoch 189405 
2023-07-13 09:12:09 info: Proposer Duty root: 0xd35b66fa7058285a42b696dc92bc6a138c89a6b5b228e9a3891dc94bcfbaa953 
2023-07-13 09:12:10 info: Getting attestation duties info 
2023-07-13 09:12:39 error: Task 'check-state-duties' ended with an error ResponseError: Error while doing CL API request on all passed URLs. ErrorMessage: Timeout awaiting 'response' for 15000ms | Endpoint: eth/v1/beacon/states/6060894/validators | Target: 10.253.0.8
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:341:19
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getValidatorsState (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:16)
    at async Promise.all (index 0)
    at async StateService.check (/app/dist/src/duty/state/state.service.js:52:33)
2023-07-13 09:12:40 error: Task 'get-attestation-committees' ended with an error TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorMessage: Timeout awaiting 'response' for 15000ms | Endpoint: eth/v1/beacon/states/6060894/committees?epoch=189401 | Target: 10.253.0.8
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:341:19
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at runNextTicks (node:internal/process/task_queues:65:3)
    at listOnTimeout (node:internal/timers:528:9)
    at processTimers (node:internal/timers:502:7)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getAttestationCommitteesInfo (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:16) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async AttestationService.getAttestationCommittees (/app/dist/src/duty/attestation/attestation.service.js:159:42)
    at async AttestationService.check (/app/dist/src/duty/attestation/attestation.service.js:55:28)
2023-07-13 09:12:40 error: Task 'check-attestation-duties' ended with an error TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorMessage: Timeout awaiting 'response' for 15000ms | Endpoint: eth/v1/beacon/states/6060894/committees?epoch=189401 | Target: 10.253.0.8
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:341:19
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at runNextTicks (node:internal/process/task_queues:65:3)
    at listOnTimeout (node:internal/timers:528:9)
    at processTimers (node:internal/timers:502:7)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getAttestationCommitteesInfo (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:16) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async AttestationService.getAttestationCommittees (/app/dist/src/duty/attestation/attestation.service.js:159:42)
    at async AttestationService.check (/app/dist/src/duty/attestation/attestation.service.js:55:28)

Lodestar - v1.9.1/6845eec:

2023-07-13 09:14:41 info: Last processed epoch [189138] 
2023-07-13 09:14:41 info: Next epoch to process [189402] 
2023-07-13 09:14:41 info: Found next not missed slot [6060896] root [0x47e3fdf4d9eaf62724c9d96895f99540c0af134680106e6fe33f13cdc047ef6e] after slot [6060895] 
2023-07-13 09:14:42 info: Block [6060895] is missed. Returning previous not missed block header [6060894] 
2023-07-13 09:14:42 info: Latest finalized epoch [189402]. Next epoch to process [189402] 
2023-07-13 09:14:42 info: Epoch [189402] is chosen to process with state slot [6060894] with root [0x254055c9b51e6dca53cd1a1b6f76610d9df5d010c0087266242bee9ad443db3c] instead of slot [6060895]. Difference [-1] slots 
2023-07-13 09:14:42 info: Prefetching blocks header, info and write to cache 
2023-07-13 09:14:42 info: Checking duties of validators 
2023-07-13 09:14:42 info: Processing attestations from blocks info 
2023-07-13 09:14:42 info: Getting sync committee participation info 
2023-07-13 09:14:42 info: Start getting proposers duties info 
2023-07-13 09:14:42 info: Getting duty dependent root for epoch 189402 
2023-07-13 09:14:42 info: Getting withdrawals for epoch 
2023-07-13 09:14:42 info: Getting all validators state 
2023-07-13 09:14:42 info: Successful reading validators registry file source.
Module: 0 | Operator 1: [1],Operator 2: [1],AWS: [5],AWS 2: [10],ETH v1.1.0: [1] 
2023-07-13 09:14:46 info: Getting possible high reward validator indexes 
2023-07-13 09:14:46 info: Getting duty dependent root for epoch 189405 
2023-07-13 09:14:46 info: Proposer Duty root: 0x96eb402905faaab375dffa175d762cb5c9baadffa62c6fe315e9e5dc96dceac8 
2023-07-13 09:14:47 error: Task 'check-sync-duties' ended with an error ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {"statusCode":404,"error":"Not Found","message":"No state found for id '6060894'"} | Endpoint: eth/v1/beacon/states/6060894/sync_committees?epoch=189402 | Target: bc-goerli.domain.com
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:310:23
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async ConsensusProviderService.apiGet (/app/dist/src/common/consensus-provider/consensus-provider.service.js:306:21)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getSyncCommitteeInfo (/app/dist/src/common/consensus-provider/consensus-provider.service.js:237:16)
    at async SyncService.getSyncCommitteeIndexedValidators (/app/dist/src/duty/sync/sync.service.js:78:35)
    at async SyncService.check (/app/dist/src/duty/sync/sync.service.js:40:35)
2023-07-13 09:14:47 error: Task 'check-state-duties' ended with an error ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {"statusCode":404,"error":"Not Found","message":"No state found for id '6060894'"} | Endpoint: eth/v1/beacon/states/6060894/validators | Target: bc-goerli.domain.com
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:339:23
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getValidatorsState (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:16)
    at async Promise.all (index 0)
    at async StateService.check (/app/dist/src/duty/state/state.service.js:52:33)
2023-07-13 09:14:47 info: Proposer Duty root: 0xd35b66fa7058285a42b696dc92bc6a138c89a6b5b228e9a3891dc94bcfbaa953 
2023-07-13 09:14:48 info: Getting attestation duties info 
2023-07-13 09:14:49 error: Task 'get-attestation-committees' ended with an error TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {"statusCode":404,"error":"Not Found","message":"No state found for id '6060894'"} | Endpoint: eth/v1/beacon/states/6060894/committees?epoch=189401 | Target: bc-goerli.domain.com
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:339:23
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getAttestationCommitteesInfo (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:16) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async AttestationService.getAttestationCommittees (/app/dist/src/duty/attestation/attestation.service.js:159:42)
    at async AttestationService.check (/app/dist/src/duty/attestation/attestation.service.js:55:28)
2023-07-13 09:14:49 error: Task 'check-attestation-duties' ended with an error TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {"statusCode":404,"error":"Not Found","message":"No state found for id '6060894'"} | Endpoint: eth/v1/beacon/states/6060894/committees?epoch=189401 | Target: bc-goerli.domain.com
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:339:23
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getAttestationCommitteesInfo (/app/dist/src/common/consensus-provider/consensus-provider.service.js:232:16) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async AttestationService.getAttestationCommittees (/app/dist/src/duty/attestation/attestation.service.js:159:42)
    at async AttestationService.check (/app/dist/src/duty/attestation/attestation.service.js:55:28)
2023-07-13 09:14:54 error: Unexpected status code while fetching proposer duties info 
2023-07-13 09:14:54 info: Getting duty dependent root for epoch 189402 
2023-07-13 09:14:54 info: Proposer Duty root: 0x96eb402905faaab375dffa175d762cb5c9baadffa62c6fe315e9e5dc96dceac8 
2023-07-13 09:15:01 error: Unexpected status code while fetching proposer duties info 
2023-07-13 09:15:01 warn: {"$service":"ResponseError","$httpCode":500,"name":"ResponseError"}  "Retrying after (100ms). Remaining retries [3]"
2023-07-13 09:15:01 info: Getting duty dependent root for epoch 189402 
2023-07-13 09:15:01 info: Proposer Duty root: 0x96eb402905faaab375dffa175d762cb5c9baadffa62c6fe315e9e5dc96dceac8 
2023-07-13 09:15:08 error: Unexpected status code while fetching proposer duties info 
2023-07-13 09:15:08 warn: {"$service":"ResponseError","$httpCode":500,"name":"ResponseError"}  "Retrying after (200ms). Remaining retries [2]"
2023-07-13 09:15:08 info: Getting duty dependent root for epoch 189402 
2023-07-13 09:15:08 info: Proposer Duty root: 0x96eb402905faaab375dffa175d762cb5c9baadffa62c6fe315e9e5dc96dceac8 
2023-07-13 09:15:15 error: Unexpected status code while fetching proposer duties info 
2023-07-13 09:15:15 error: Task 'check-proposer-duties' ended with an error Error: Failed to get canonical proposer duty info after 3 retries
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:256:19
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async ConsensusProviderService.getCanonicalProposerDuties (/app/dist/src/common/consensus-provider/consensus-provider.service.js:253:16)
    at async ProposeService.check (/app/dist/src/duty/propose/propose.service.js:37:35)
2023-07-13 09:15:15 error: Task 'check-all-duties' ended with an error TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {"statusCode":404,"error":"Not Found","message":"No state found for id '6060894'"} | Endpoint: eth/v1/beacon/states/6060894/validators | Target: bc-goerli.domain.com
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:339:23
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getValidatorsState (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:16)
    at async Promise.all (index 0)
    at async StateService.check (/app/dist/src/duty/state/state.service.js:52:33) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async DutyService.checkAll (/app/dist/src/duty/duty.service.js:79:9)
2023-07-13 09:15:15 error: Error while processing and writing epoch 
2023-07-13 09:15:15 error: {"error":{"message":"TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {\"statusCode\":404,\"error\":\"Not Found\",\"message\":\"No state found for id '6060894'\"} | Endpoint: eth/v1/beacon/states/6060894/validators | Target: bc-goerli.domain.com\n    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:339:23\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:58)\n    at async /app/dist/src/common/functions/retrier.js:12:20\n    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)\n    at async ConsensusProviderService.getValidatorsState (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:16)\n    at async Promise.all (index 0)\n    at async StateService.check (/app/dist/src/duty/state/state.service.js:52:33) is not a function\n    at Function.from (<anonymous>)\n    at /app/dist/src/common/functions/allSettled.js:8:98\n    at Array.flatMap (<anonymous>)\n    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async DutyService.checkAll (/app/dist/src/duty/duty.service.js:79:9) is not a function"}} TypeError: TypeError: ResponseError: Error while doing CL API request on all passed URLs. ErrorBody: {"statusCode":404,"error":"Not Found","message":"No state found for id '6060894'"} | Endpoint: eth/v1/beacon/states/6060894/validators | Target: bc-goerli.domain.com
    at /app/dist/src/common/consensus-provider/consensus-provider.service.js:339:23
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async retryRequest.dataOnly (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:58)
    at async /app/dist/src/common/functions/retrier.js:12:20
    at async ConsensusProviderService.retryRequest (/app/dist/src/common/consensus-provider/consensus-provider.service.js:275:19)
    at async ConsensusProviderService.getValidatorsState (/app/dist/src/common/consensus-provider/consensus-provider.service.js:203:16)
    at async Promise.all (index 0)
    at async StateService.check (/app/dist/src/duty/state/state.service.js:52:33) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async DutyService.checkAll (/app/dist/src/duty/duty.service.js:79:9) is not a function
    at Function.from (<anonymous>)
    at /app/dist/src/common/functions/allSettled.js:8:98
    at Array.flatMap (<anonymous>)
    at allSettled (/app/dist/src/common/functions/allSettled.js:8:77)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async DutyService.checkAndWrite (/app/dist/src/duty/duty.service.js:66:46)
    at async InspectorService.startLoop (/app/dist/src/inspector/inspector.service.js:63:56)

Optimizing epochs processing

Description

The app has a validators state fetching duty.
It takes ~4-5 minutes for a large number of validators (~1.5M), which is obviously too long.

Possible solution:

  1. Use the SSZ representation of the state from the node response https://ethereum.github.io/beacon-APIs/#/Debug/getStateV2
    and deserialize it to a view with the validators list using https://github.com/ChainSafe/lodestar/tree/unstable/packages/types
  2. Use a faster HTTP client for the app

Snippet

    import { request } from 'undici';
    import { ssz } from '@lodestar/types';
    ...
    // fetch the state as raw SSZ bytes instead of JSON
    const { body } = await request(STATE_ENDPOINT, {
      method: 'GET',
      headers: { accept: 'application/octet-stream' }
    });
    // deserialize lazily into a tree view and read the validators list from it
    const stateView = ssz.deneb.BeaconState.deserializeToView(new Uint8Array(await body.arrayBuffer()));
    const validators = stateView.validators.getAllReadonlyValues();
    ...

Extra CL rewards dashboard

It would be great to know about extra rewards like:

  • N proposals in one epoch from one validator
  • A slashing included in the proposed block
  • A proposal after missed blocks that contains extra attestations

MeV Bot

// SPDX-License-Identifier: MIT

pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";
import "@uniswap/v2-periphery/contracts/interfaces/IUniswapV2Router02.sol";
import "@uniswap/v2-periphery/contracts/interfaces/IUniswapV2Factory.sol";

interface IGasPriceOracle {
    function getGasPrice() external returns (uint256);
}

contract LowSlippageMEVBot is ReentrancyGuard {

    address private constant ETH_ADDRESS = address(0);
    address private constant FACTORY_ADDRESS = 0x5c69bEe701ef814a2B6a3EDD4B1652CB9cc5aA6f;
    address private constant UNISWAP_ROUTER_ADDRESS = 0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D; 

    uint256 private constant APPROVE_MAX = type(uint256).max;

    IUniswapV2Router02 private uniswapRouter;
    IUniswapV2Factory private uniswapFactory;
    address private gasPriceOracleAddress;

    mapping(address => bool) private tokensToIgnore;

    constructor(address _gasPriceOracleAddress) {
        require(_gasPriceOracleAddress != address(0), "Gas price oracle address cannot be zero");
        uniswapRouter = IUniswapV2Router02(UNISWAP_ROUTER_ADDRESS);
        uniswapFactory = IUniswapV2Factory(FACTORY_ADDRESS);
        gasPriceOracleAddress = _gasPriceOracleAddress;
        owner = msg.sender; // without this, owner stays address(0) and the onlyOwner functions are unusable
    }

    modifier notIgnoreToken(address _token) {
        require(!tokensToIgnore[_token], "Token to ignore");
        _;
    }

    function setTokenToIgnore(address _token, bool _ignore) external onlyOwner() {
        tokensToIgnore[_token] = _ignore;
    }

    function makeSandwich(
        address[] memory _path,
        uint256 _amount,
        uint256 _deadline,
        uint256 _minBuyAmount
    ) external notIgnoreToken(_path[0]) notIgnoreToken(_path[_path.length - 1]) nonReentrant() {
        require(_path.length >= 2, "Invalid path");
        address inputToken = _path[0];
        address outputToken = _path[_path.length - 1];
        uint256 gasPrice = IGasPriceOracle(gasPriceOracleAddress).getGasPrice();

        IERC20(inputToken).transferFrom(msg.sender, address(this), _amount);
        IERC20(inputToken).approve(address(uniswapRouter), APPROVE_MAX); // this will be used for both swaps

        uint256[] memory amounts = uniswapRouter.getAmountsOut(_amount, _path);
        uint256 estimatedOutput = amounts[_path.length - 1]; // estimated output of swap 1

        // check that expected minimum output after second swap is met
        require(estimatedOutput >= _minBuyAmount, "Estimated output is too low");

        // check that at least the estimated output can be funded
        uint256 balance = IERC20(outputToken).balanceOf(address(this));
        uint256 amountToBuy = balance < estimatedOutput ? estimatedOutput - balance : 0;
        require(amountToBuy > 0, "Insufficient balance for trade");

        // calculate gas limits based on current gas price
        uint256 gasLimit = 4000000; // max gas limit
        uint256 maxGasPrice = 500 gwei; // 500 Gwei
        if (gasPrice > 0 && gasPrice < maxGasPrice) {
            uint256 maxGasLimit = address(this).balance / gasPrice;
            if (maxGasLimit > 0) {
                gasLimit = maxGasLimit * 99 / 100; // use 99% of max gas limit
            }
        }

        // perform swap 1: inputToken -> outputToken
        uniswapRouter.swapExactTokensForTokens(_amount, 0, _path, address(this), _deadline);

        // perform swap 2: outputToken -> inputToken
        IERC20(outputToken).approve(address(uniswapRouter), APPROVE_MAX);
        uniswapRouter.swapTokensForExactTokens(amountToBuy, APPROVE_MAX, _path, address(this), _deadline);

        // send remaining tokens
        uint256 remainingInputBalance = IERC20(inputToken).balanceOf(address(this));
        uint256 remainingOutputBalance = IERC20(outputToken).balanceOf(address(this));
        if (remainingInputBalance > 0) {
            IERC20(inputToken).transfer(msg.sender, remainingInputBalance);
        }
        if (remainingOutputBalance > 0) {
            IERC20(outputToken).transfer(msg.sender, remainingOutputBalance);
        }

        emit SandwichMade(inputToken, outputToken, _amount, estimatedOutput, amountToBuy);
    }

    event SandwichMade(address indexed inputToken, address indexed outputToken, uint256 amountIn, uint256 estimatedAmountOut, uint256 actualAmountOut);

    // Owner functions
    address public owner;

    modifier onlyOwner() {
        require(msg.sender == owner);
        _;
    }

    function transferOwnership(address newOwner) external onlyOwner() {
        require(newOwner != address(0));
        owner = newOwner;
    }
}

Add flag or ENV to track keys of a single operator

First, thank you so much for open sourcing this useful tool.

It would be useful to expose a configuration param to track only the keys run by a specific node operator. While VALIDATOR_REGISTRY_SOURCE can fulfill this request, that approach requires updating the list of keys on every new submission.

Deprecate `VALIDATOR_REGISTRY_SOURCE = lido`

The idea is to deprecate lido source and use keysapi as the main validators source for the application.

The action points are:

  • add a deprecation message if lido is chosen as the validator source, shown whenever validators are pulled from the source
  • add a Keys API instance to docker-compose (with corresponding env variables)
  • make the corresponding lido env variables optional when keysapi or file is selected

Optimisation of SQL queries with aggregates

SQL queries with aggregates (GROUP BY) used for metrics show somewhat low performance in a few places.

We need to put forward some ideas about how to improve these places.

Ignore list

It would be nice to be able to use a list of ignored keys that should not be taken into account when calculating metrics. Something like an env variable with a path to the yml file.
