
blutgang's People

Contributors

dependabot[bot], engn33r, github-actions[bot], jhvst, makemake-kbo


blutgang's Issues

bug: `forge test` compatibility

Describe the bug
Forked foundry tests do not run and Blutgang throws errors.

To Reproduce
Steps to reproduce the behavior:

  1. Start Blutgang with the default config
  2. Run a forked forge test

Expected behavior
The RPCs work without Blutgang, so I would expect Blutgang to work as well.

Specs:

  • OS: Windows
  • Kernel: 5.10.16.3-microsoft-standard-WSL2
  • Blutgang version: current

Additional context

RUST_LOG=ethers=trace forge test

2023-12-14T07:46:51.884965Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::provider: tx
2023-12-14T07:46:51.885107Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::provider: tx
2023-12-14T07:46:51.885172Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::provider: tx
2023-12-14T07:46:51.885980Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::provider: tx
2023-12-14T07:46:51.886062Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::provider: tx
2023-12-14T07:46:51.886127Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::provider: tx
2023-12-14T07:46:51.886623Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.886811Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.887134Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.887273Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.887422Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.887538Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.887742Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.887914Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.888080Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.888266Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.888429Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.888601Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.888755Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.888912Z TRACE rpc{method="eth_chainId" params="null"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.889103Z TRACE rpc{method="eth_getBlockByNumber" params="[\"0x10abafe\",false]"}: ethers_providers::rpc::transports::retry: retrying due to spurious network err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })
2023-12-14T07:46:51.889268Z TRACE rpc{method="eth_gasPrice" params="null"}: ethers_providers::rpc::transports::retry: should not retry err=HTTPError(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(3000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidMessage(InvalidContentType) } }) })

Running 1 test for test/invariant/EigenZapInvariants.t.sol:EigenZapInvariants

blutgang --config <DEFAULT_CONFIG>

Info: Using config file at blutgang.toml
Sorting RPCs by latency...
https://eth.llamarpc.com: 39629091.3ns
https://rpc.ankr.com/eth: 66638958.1ns
Info: Bound to: 127.0.0.1:3000
Wrn: Reorg detected!
Removing stale entries from the cache.
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: New finalized block!
Removing stale entries from the cache.
Info: Checking RPC health... OK!
Info: Connection from: 127.0.0.1:51438
Info: Connection from: 127.0.0.1:51440
Info: Connection from: 127.0.0.1:51442
Err: Error serving connection: hyper::Error(Parse(Method))
Err: Error serving connection: hyper::Error(Parse(Method))
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51444
Info: Connection from: 127.0.0.1:51446
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51448
Info: Connection from: 127.0.0.1:51450
Err: Error serving connection: hyper::Error(Parse(Method))
Err: Error serving connection: hyper::Error(Parse(Method))
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51452
Info: Connection from: 127.0.0.1:51454
Info: Connection from: 127.0.0.1:51456
Err: Error serving connection: hyper::Error(Parse(Method))
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51458
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51460
Info: Connection from: 127.0.0.1:51462
Err: Error serving connection: hyper::Error(Parse(Method))
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51464
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51466
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51468
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51470
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Connection from: 127.0.0.1:51472
Err: Error serving connection: hyper::Error(Parse(Method))
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: Checking RPC health... OK!
Info: New finalized block!
Removing stale entries from the cache.

Malformed cache response

Describe the bug
Blutgang will sometimes respond with {"id":1,"jsonrpc":"2.0","result":null} when returning results from the cache.

To Reproduce
Steps to reproduce the behavior:

  1. Make a query that Blutgang is able to cache and respond to
  2. Occasionally, the response will be invalid

Expected behavior
Blutgang should never cache invalid responses like {"id":1,"jsonrpc":"2.0","result":null}
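
A minimal sketch of the guard this implies, with hypothetical helper names and assuming the sled cache and serde_json values Blutgang already uses: only store a response when it carries a non-null result and no error field.

use serde_json::Value;

// Hypothetical helper: decide whether a JSON-RPC response is worth caching.
fn is_cacheable(response: &Value) -> bool {
    response.get("error").is_none()
        && response.get("result").map_or(false, |r| !r.is_null())
}

// Hypothetical helper: write to the sled cache only when the response is valid.
fn maybe_cache(cache: &sled::Db, key: &[u8], response: &Value) -> sled::Result<()> {
    if is_cacheable(response) {
        cache.insert(key, serde_json::to_vec(response).unwrap_or_default())?;
    }
    Ok(())
}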

How to clear the cache?

Is your feature request related to a problem? Please describe.

If I'm testing the speed of blutgang and want to clear the cache, is there a CLI arg to do so?

Describe the solution you'd like

A CLI arg to clear the cache. Another option is documenting a manual process in the README.
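
For reference, the manual process could look roughly like the sketch below, assuming the sled cache lives at the db_path from the config (e.g. ./blutgang-cache) and Blutgang is stopped; deleting that directory should have the same effect.

// Rough sketch of a manual clear, not a documented Blutgang procedure.
fn clear_cache(db_path: &str) -> sled::Result<()> {
    let db = sled::open(db_path)?; // open the same tree Blutgang uses
    db.clear()?;                   // drop every cached entry
    db.flush()?;                   // make sure the deletion hits disk
    Ok(())
}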

Describe alternatives you've considered

Documenting a manual process.

Additional context

N/A

Connecting to forta: Timeout exceeded while awaiting headers

Describe the bug
The balancer is really fast.
Using it in MetaMask/Frame, everything seems to work perfectly fine.

But I have some trouble with a project called Forta Network.
Forta Network aims to scan EVM chains and analyze transactions, and it requires very high RPC performance.

I tried running Blutgang in Docker, and a second time compiled it from the latest commits. It starts successfully, but every time I get errors like the following from my nodes.

context deadline exceeded (Client.Timeout exceeded while awaiting headers)

bot context is done - exiting request processing loop

context deadline exceeded\n\t* eth_chainId failed

Once again:

  1. Running the node connected to any other RPC, including my own Polygon RPC: everything is OK
  2. Only when running through Blutgang, with any single RPC: errors happen

I guess it is not possible for a user to quickly and fully reproduce the error without running in production and staking some tokens to get final scores.
Still, it is possible to see some unexpected behavior when running locally.

To Reproduce
Steps to reproduce the behavior:

  1. Docker must be installed and running
  2. Go to Forta's GitHub to install from source, or get the precompiled app
  3. First run to init the scanner and generate its wallet:
forta init --passphrase Your-test-password-123
  4. Edit the Forta config located at ~/.forta/config.yml for Polygon scanning:
chainId: 137

scan:
  jsonRpc:
    # real working rpc 
    url: https://polygon-rpc.com/

   # blutgang with that rpc under it to uncomment and run next time
   # url: http://ip:port
   
trace:
  enabled: false
  5. Run the scan node:
forta run --passphrase Your-test-password-123 --no-check

It should start 8 containers; a container named "forta-scanner" must be among them.
All stats can be displayed by running forta status all

Expected behavior
After running forta status all | grep json-rpc-performance I expect to see this:

⬤ unknown | forta.container.forta-scanner.service.estimator.json-rpc-performance

and within ~5 minutes I expect it to become something like this:

⬤ info | forta.container.forta-scanner.service.estimator.json-rpc-performance | 0.97

Normally it shows the RPC performance as a float between 0 and 1, but this does not happen with Blutgang; it stays stuck at the previous step.

Specs:

  • OS: Ubuntu 22.04
  • Kernel: 5.15.0-86-generic
  • Blutgang version: Blutgang 0.2.0 Myrddin (latest source code) and the Docker image

Blutgang options:

  • DB mode: HighThroughput
  • DB compression: no
  • Cache capacity: 1000000000
  • MA length: 10, also 20 and 100
  • RPCs used: llamarpc, polygon-rpc, quiknode, and my own well-functioning RPCs (together and separately, all Polygon RPCs)


`Invalid argument type` when using Blutgang with viem

Describe the bug
When using a local Blutgang endpoint as an HTTP transport in viem, I get TypeError: Invalid argument type in ToBigInt operation.

To Reproduce

Steps to reproduce the behavior:

  1. Run Blutgang locally with the default config
  2. Create a simple script with viem and use the local Blutgang endpoint as the transport (tested with getBlockNumber() and getBalance())
import { createPublicClient, http } from 'viem'
import { mainnet } from 'viem/chains'

const client = createPublicClient({
  chain: mainnet,
  transport: http('http://127.0.0.1:3000/'),
})

const data = await client.getBlockNumber()
console.log(data)
  3. See error

Expected behavior
The current block number is logged

Specs:

  • OS: macOS 13.6
  • Blutgang version: 0.2.0 Myrddin
  • Viem version: 1.18.9

Blutgang options:
Default config (llamarpc and ankr RPCs)

Additional context
The viem script runs as expected when using either the Llama or Ankr RPCs directly

Bug: Duplicate subscription IDs

Describe the bug
If using multiple nodes, subscriptions that are sent to different nodes may have the same subscription id. This causes problems because we forward subscription IDs sent by nodes directly to users. Users may end up receiving subscriptions they did not sign up for.

To Reproduce
Steps to reproduce the behavior:

  1. Subscribe on node 0 (sub id 0x1)
  2. Subscribe on node 1 (sub id 0x1)
  3. Get duplicate subscriptions due to matching sub id

Expected behavior
Duplicate subscriptions should not happen: the subscription IDs forwarded to users should be unique, even when upstream nodes hand out colliding IDs.
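
One possible shape of a fix, sketched below with hypothetical types (not Blutgang's internals): hand users Blutgang-generated subscription IDs and keep a map back to the node and the ID that node assigned.

use std::collections::HashMap;

struct SubscriptionMap {
    next_id: u64,
    // client-facing ID -> (node index, subscription ID the node returned)
    routes: HashMap<u64, (usize, String)>,
}

impl SubscriptionMap {
    fn register(&mut self, node: usize, node_sub_id: String) -> u64 {
        let client_id = self.next_id;
        self.next_id += 1;
        self.routes.insert(client_id, (node, node_sub_id));
        client_id // unique even if two nodes both returned 0x1
    }
}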

Add logging

Add support for writing proper logs. This could be useful for compliance and debugging.
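
A minimal sketch of what this could look like, assuming the tracing and tracing-subscriber crates were adopted (not an existing Blutgang feature): leveled, structured logs that can optionally be emitted as JSON.

use tracing::{info, warn};

fn init_logging(json: bool) {
    let builder = tracing_subscriber::fmt();
    if json {
        builder.json().init(); // needs tracing-subscriber's "json" feature
    } else {
        builder.init();
    }
}

fn example() {
    info!(rpc = "https://eth.llamarpc.com", latency_ms = 39, "request forwarded");
    warn!("reorg detected, clearing stale cache entries");
}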

Tokio panic when proxying a local (syncing/unsynced) reth node.

Describe the bug
Tokio is panicking with the value InvalidResponse("error: Invalid response") when proxying a local reth mainnet full node.

To Reproduce
Steps to reproduce the behavior:

  1. Run a reth full node with the configs below:
/usr/local/bin/reth node --full --metrics 10.0.0.4:9002 --datadir /data/reth-db/mainnet --rpc-gas-cap 18446744073709551615  --authrpc.jwtsecret /secrets/jwt.hex --authrpc.addr 127.0.0.1 --authrpc.port 8551 --http --ws --rpc-max-connections 429496729 --http.api eth,net,web3,txpool --http.addr 10.0.0.4 --http.port 8545 --ws.api net,web3,eth,txpool --ws.addr 10.0.0.4 --http.corsdomain 'https://app.example.com,moz-extension://123123123,chrome-extension://123123123'
  2. Hook it up inside blutgang, then start blutgang
  3. See error

Expected behavior
I expect blutgang to put the "faulty/syncing" RPC on the back burner, treating it as if its rate limit had been hit, instead of dying all at once.
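
A minimal sketch of that expectation (hypothetical helper, not the actual safe_block.rs code): match on the result and demote the RPC instead of calling unwrap on an Err.

// Returns the safe block if the RPC answered sensibly, otherwise flags it as unhealthy.
fn handle_safe_block(result: Result<u64, String>, rpc_url: &str) -> Option<u64> {
    match result {
        Ok(block) => Some(block),
        Err(e) => {
            eprintln!("Wrn: {rpc_url} returned an invalid response ({e}); removing from active pool");
            None // caller drops this RPC from the rotation instead of panicking
        }
    }
}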

Specs:

  • OS: [Ubuntu 22.04]
  • Kernel: [6.5.0-1015-azure]
  • Blutgang version: [Blutgang 0.3.2 Garreg Mach - maxperf - arm64]

Blutgang options:

  • DB mode: [HighThroughput]
  • DB compression: [no]
  • Cache capacity: [10000000000]
  • MA length: [20]
  • RPCs used: [local reth]

Additional context
The reth node is still syncing, so that could be an issue; however, even with a different RPC added, Blutgang does not fail over, it just dies.

Logs below with the local node and merkle being added:

Sorting RPCs by latency...
http://10.0.0.4:8545/: 109063.82ns
https://eth.merkle.io/: 25649351.68ns
Info: Starting Blutgang 0.3.2 Garreg Mach
Info: Bound to: 10.0.0.4:7999
Wrn: Reorg detected!
Removing stale entries from the cache.
Info: Adding user 1 to sink map
Info: Subscribe_user finding: ["newHeads"]
LA 63471.91
Info: WS request time: 17.88µs
Info: sub_id: 0x68a8ceb7a0ea6575888bd14c08f94b6c
Info: Register_subscription inserting: ["newHeads"]
Info: Subscribe_user finding: ["newHeads"]
Info: Subscribe_user finding: ["newHeads"]
Info: Checking RPC health... Wrn: http://10.0.0.4:8545/ is falling behind! Removing froma active RPC pool.
OK!
Info: Checking RPC health... OK!
Info: Checking RPC health... Info: http://10.0.0.4:8545/ is following the head again! Added to active RPC pool.
OK!
thread 'tokio-runtime-worker' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/blutgang-0.3.2/src/health/safe_block.rs:99:42:
called `Result::unwrap()` on an `Err` value: InvalidResponse("error: Invalid response")
stack backtrace:
   0:     0xaaaad592f7dc - std::backtrace_rs::backtrace::libunwind::trace::hee9690ac22774636
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/../../backtrace/src/backtrace/libunwind.rs:104:5
   1:     0xaaaad592f7dc - std::backtrace_rs::backtrace::trace_unsynchronized::ha30111b5438e6e61
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0xaaaad592f7dc - std::sys_common::backtrace::_print_fmt::hc2516686a74b2a42
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys_common/backtrace.rs:68:5
   3:     0xaaaad592f7dc - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h8984c88846573cbb
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys_common/backtrace.rs:44:22
   4:     0xaaaad57ad858 - core::fmt::rt::Argument::fmt::h071bdaa21123c9ed
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/fmt/rt.rs:142:9
   5:     0xaaaad57ad858 - core::fmt::write::h3f4921a7ddfa57a8
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/fmt/mod.rs:1120:17
   6:     0xaaaad5907678 - std::io::Write::write_fmt::h0923e211983fe028
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/io/mod.rs:1810:15
   7:     0xaaaad5930dc0 - std::sys_common::backtrace::_print::h39d471a7e51d9dbd
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys_common/backtrace.rs:47:5
   8:     0xaaaad5930dc0 - std::sys_common::backtrace::print::h6306cb106d0c42e1
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys_common/backtrace.rs:34:9
   9:     0xaaaad59306b8 - std::panicking::default_hook::{{closure}}::h2a94c4f92161a016
  10:     0xaaaad593159c - std::panicking::default_hook::hd3c29c68b55e9f50
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:292:9
  11:     0xaaaad593159c - std::panicking::rust_panic_with_hook::ha00bbb72a4f1b899
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:779:13
  12:     0xaaaad59310d8 - std::panicking::begin_panic_handler::{{closure}}::had2c64361be4b07b
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:657:13
  13:     0xaaaad5931040 - std::sys_common::backtrace::__rust_end_short_backtrace::hf92a1e94dde0ed69
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys_common/backtrace.rs:171:18
  14:     0xaaaad5931038 - rust_begin_unwind
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:645:5
  15:     0xaaaad568566c - core::panicking::panic_fmt::h815b849997a1324d
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:72:14
  16:     0xaaaad56859ac - core::result::unwrap_failed::haf618c6eb17075e2
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/result.rs:1649:5
  17:     0xaaaad573eba8 - blutgang::health::safe_block::get_safe_block::{{closure}}::{{closure}}::h7da343b20caba446
  18:     0xaaaad5702238 - tokio::runtime::task::raw::poll::h995b5616edeb2c00
  19:     0xaaaad593ea3c - tokio::runtime::scheduler::multi_thread::worker::Context::run_task::h362785fb6d35bcfe
  20:     0xaaaad5944850 - tokio::runtime::task::raw::poll::h3b10e65a8244514b
  21:     0xaaaad5937608 - std::sys_common::backtrace::__rust_begin_short_backtrace::h29cb6c7d80a2c68d
  22:     0xaaaad5937300 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h2995408fccc80e98
  23:     0xaaaad593349c - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::he98dd9388c7047c2
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/alloc/src/boxed.rs:2015:9
  24:     0xaaaad593349c - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h103af4b9c154ce59
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/alloc/src/boxed.rs:2015:9
  25:     0xaaaad593349c - std::sys::unix::thread::Thread::new::thread_start::hc59882c1f8885c71
                               at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys/unix/thread.rs:108:17
  26:     0xffffad63d5c8 - <unknown>
  27:     0xffffad6a5edc - <unknown>
  28:                0x0 - <unknown>
Wrn: Timeout in newHeads subscription, possible connection failiure or missed block.
Err: Failed to get some failed node subscription IDs! Subscriptions might be silently dropped!
Wrn: Timeout in newHeads subscription, possible connection failiure or missed block.
Err: Failed to get some failed node subscription IDs! Subscriptions might be silently dropped!
Wrn: Timeout in newHeads subscription, possible connection failiure or missed block.
Err: Failed to get some failed node subscription IDs! Subscriptions might be silently dropped!
Err: No RPC position available
Info: Connection from: 10.0.0.7:47919
Info: Connection from: 10.0.0.7:36906
Info: Connection from: 10.0.0.7:61302
Info: Connection from: 10.0.0.7:23063
Info: Forwarding to: 
Info: Request time: 45.620282ms
Info: Connection from: 10.0.0.7:19736
Info: Forwarding to: 
Info: Request time: 87.366692ms
Info: Forwarding to: 
Info: Request time: 130.822258ms
Info: Forwarding to: 
Info: Request time: 172.721387ms
Info: Forwarding to: 
Info: Request time: 178.809337ms
Wrn: Timeout in newHeads subscription, possible connection failiure or missed block.
Err: Failed to get some failed node subscription IDs! Subscriptions might be silently dropped!
Wrn: Couldn't deserialize ws_conn response: Eof at character 0

ethspam response:

(base) ➜  bin ./ethspam | ./versus --stop-after=5 --concurrency=5 https://mainnet.example.com
Endpoints:

0. "https://mainnet.example.com"

   Requests:   7.73 per second
   Timing:     0.6472s avg, 0.5578s min, 0.7360s max
               0.0627s standard deviation

   Percentiles:
     25% in 0.6495s
     50% in 0.6900s
     75% in 0.7360s
     90% in 0.7360s
     95% in 0.7360s
     99% in 0.7360s

   Errors: 100.00%
     5 × "bad status code: 500"     <--------- completely dead

** Summary for 1 endpoints:
   Completed:  5 results with 5 total requests
   Timing:     647.231558ms request avg, 1.199269542s total run time
   Errors:     5 (100.00%)
   Mismatched: 0

Unclear status messages

Describe the bug
These are possible nitpicks (except for the "reorg detected" observation), but the verbose output messages should consistently distinguish actions that are in progress and take time to process from plain status messages.

To Reproduce
Steps to reproduce the behavior:

  1. Run blutgang with the default config and observe the startup output
    [screenshot of startup output]

Expected behavior

  1. The first line Using config file at config.toml... ends with ... which makes it appear as though something is happening in the background. Most likely this is a status message. Consider removing ...
  2. The "Reorg detected" message appears every time that I run blutgang, even when I quickly kill blutgang and run it again so that the block number has not changed. Are the stale entries actually removed from the cache? This message indicates likely no
  3. Removing stale entries from the cache... does not have an OK! after the ..., which makes it unclear if this step has finished or is still in progress. Ideally a progress indicator could be shown for any step that may take non-trivial time with OK! appearing instead of the progress indicator when the step is complete.

blutgang_admin namespace

Add a blutgang_admin namespace for administrative tasks and live config changes.

These should include:
(todo)

Unexpected error from config file settings

Describe the bug

When health_check = false is set in config.toml, the health_check_ttl value is still required.

To Reproduce
Steps to reproduce the behavior:

  1. Download repo
  2. Copy example_config.toml to config.toml
  3. Set health_check = false and remove the line with health_check_ttl
  4. Run blutgang and see error

Expected behavior
No error
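
A minimal sketch of the expected parsing behaviour, assuming a serde-based config struct (field names mirror the example config; this is not Blutgang's actual code): make health_check_ttl optional so it is only needed when health checks are on.

use serde::Deserialize;

#[derive(Deserialize)]
struct BlutgangSettings {
    health_check: bool,
    // Absent from config.toml when health checking is disabled.
    health_check_ttl: Option<u64>,
}

fn effective_health_check_ttl(settings: &BlutgangSettings) -> Option<u64> {
    if settings.health_check {
        // Fall back to a default (12000 ms here is an arbitrary example) if unset.
        Some(settings.health_check_ttl.unwrap_or(12_000))
    } else {
        None // health checks off: the TTL is simply ignored
    }
}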

Specs:

  • OS: Ubuntu 22.04
  • Blutgang version: 0.1.0

Blutgang options:
N/A, just config file

sled 1.0

Migrate to sled 1.0. Big performance and storage improvements.

The biggest blocker for this is the 1.0 Db not being Send. If the sled Db remains !Send, we'll probably have to stay on 0.34.
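
For context, a minimal sketch of why Send matters here, assuming the usual multi-threaded tokio runtime: tokio::spawn requires the spawned future, and everything it captures (including the cache handle), to be Send.

use std::sync::Arc;

// Only compiles when D: Send + Sync, which is the property at stake for sled 1.0's Db.
fn spawn_cache_task<D: Send + Sync + 'static>(db: Arc<D>) {
    tokio::spawn(async move {
        let _cache = &db; // the future captures the handle, so it must be Send
    });
}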

Error with zero RPCs

Describe the bug

I ran blutgang with 1 or more RPC endpoints in the config. Then I removed all the RPCs from the config so blutgang uses only cached data. When I run blutgang with zero RPC endpoints in the config, blutgang works but I get this warning:

thread 'tokio-runtime-worker' panicked at 'attempt to divide by zero', src/health/check.rs:106:24
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


To Reproduce
Steps to reproduce the behavior:

  1. Run Blutgang with default config and request data from it with sothis
  2. Remove RPCs from config
  3. Run blutgang again
  4. Observe warning from blutgang

Expected behavior

No warning, or a warning that no RPCs are specified in the config
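
A minimal sketch of the guard this asks for (hypothetical helper, not the actual check.rs code): skip the health-check averaging and warn when no RPCs are configured.

fn average_latency(latencies: &[f64]) -> Option<f64> {
    if latencies.is_empty() {
        eprintln!("Wrn: no RPCs specified in the config, skipping health check");
        return None;
    }
    Some(latencies.iter().sum::<f64>() / latencies.len() as f64)
}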

Specs:

  • OS: Ubuntu 22.04
  • Blutgang version: 0.1.0

Blutgang options:
Default config file

Additional context
N/A

[BUG] Error with one RPC

Describe the bug

Blutgang with multiple RPCs encountered an error and cached this error, meaning I cannot run sothis unless I clear the blutgang cache.

To Reproduce
Steps to reproduce the behavior:

  1. Add 4 RPCs into the config, then run blutgang with 'blutgang'
  2. Run sothis with sothis --mode fast_track --source_rpc http://127.0.0.1:3000 --contract_address 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 --storage_slot 80084422859880547211683076133703299733277748156566366325829078699459944778999 --origin_block 16790000 --terminal_block 16810000 --query_interval 100 --filename tripool_usdc.json
  3. Observe error message in blutgang:
Info: Forwarding to: https://eth.llamarpc.com
Info: Request time: 1.776612253s
LA 1072738384.7
Info: Forwarding to: https://mainnet.infura.io/v3/XXX
Info: Forwarding to: https://mainnet.infura.io/v3/XXX
Info: Forwarding to: https://mainnet.infura.io/v3/XXX
Info: Forwarding to: https://mainnet.infura.io/v3/XXX
Info: Forwarding to: https://mainnet.infura.io/v3/XXX
Info: Checking RPC health... Info: Forwarding to: https://eth.llamarpc.com
Info: Request time: 1.621386758s
thread 'tokio-runtime-worker' panicked at 'index out of bounds: the len is 1 but the index is 1', /home/engn33r/.cargo/registry/src/index.crates.io-6f17d22bba15001f/blutgang-0.1.1/src/balancer/balancer.rs:290:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: PoisonError { .. }', /home/engn33r/.cargo/registry/src/index.crates.io-6f17d22bba15001f/blutgang-0.1.1/src/health/check.rs:185:47
  4. sothis stops running due to this error. When restarting sothis, it gives this error
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: PoisonError { .. }', /home/engn33r/.cargo/registry/src/index.crates.io-6f17d22bba15001f/blutgang-0.1.1/src/balancer/balancer.rs:228:15

It is possible the issue is with one of the RPC endpoints, since restarting blutgang without clearing the cache allows sothis to collect a few more blocks of data, and when sothis errors again (now with the error Error: RequestFailed("\"missing trie node 76f6d499f93c48793a8c638aa1a7144046758f6fb28995a51dff4e46297e2cd5 (path ) state 0x76f6d499f93c48793a8c638aa1a7144046758f6fb28995a51dff4e46297e2cd5 is not available\"")), I can keep restarting sothis and it slowly makes more and more progress collecting data on every restart.

Expected behavior
No error
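
A minimal sketch of the defensive lookup this suggests (hypothetical helper, not the actual balancer.rs code): use a checked get and fall back gracefully when the chosen index no longer exists, e.g. because an RPC was just dropped from the pool.

fn pick_rpc(rpcs: &[String], index: usize) -> Option<&String> {
    rpcs.get(index).or_else(|| rpcs.first())
}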

Make the selection algo system more modular

Blutgang should be designed to be as modular as possible. RPC selection algos are a very important feature, and they should be made modular so that different users can customize them to fit their use case.
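
A rough sketch of what a pluggable interface could look like (hypothetical trait, not an existing Blutgang API): selection algorithms implement one trait and the balancer just asks for the next index.

trait SelectionAlgo: Send + Sync {
    /// Pick the index of the next RPC to use, given the current latencies (ns).
    fn pick(&self, latencies: &[f64]) -> Option<usize>;
}

struct LowestLatency;

impl SelectionAlgo for LowestLatency {
    fn pick(&self, latencies: &[f64]) -> Option<usize> {
        latencies
            .iter()
            .enumerate()
            .min_by(|a, b| a.1.partial_cmp(b.1).unwrap_or(std::cmp::Ordering::Equal))
            .map(|(i, _)| i)
    }
}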

[BUG] Clear cache not working

Describe the bug
The --clear flag doesn't appear to clear the cache

To Reproduce
Steps to reproduce the behavior:

  1. Run blutgang with 'blutgang'
  2. Query data with sothis, for example sothis --mode fast_track --source_rpc http://127.0.0.1:3000 --contract_address 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 --storage_slot 80084422859880547211683076133703299733277748156566366325829078699459944778999 --origin_block 16790000 --terminal_block 16810000 --query_interval 100 --filename tripool_usdc.json
  3. Run blutgang --clear
  4. Run the same sothis command, observe instant responses from the cache

Expected behavior
The RPC servers in the config should be queried because the cache is cleared.

[RFC] Blutgang 0.2.0 Myrddin

This issue is for tracking and documenting any questions, comments and requests for Blutgang 0.2.0 Myrddin.

If you encounter a bug or any other type of undefined behaviour, please make a separate GitHub issue.

Request timeout

Describe the bug
On a fresh install/run with the default config, I get the error Request timeout for all my requests to blutgang.

To Reproduce

  1. Send a request to Blutgang:

> curl 'http://127.0.0.1:3000' \
  -X POST \
  -H 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","id":1,"method":"eth_chainId"}'

> {code:-32001, message:"error: Request timed out! Try again later..."}

  2. Find the warning messages in the logs:

Info: Checking RPC health... OK!
Info: Connection from: 127.0.0.1:55106
Info: Forwarding to: https://eth.merkle.io
Wrn: An RPC request has timed out, picking new RPC and retrying.
Info: Forwarding to: https://eth.merkle.io
Wrn: An RPC request has timed out, picking new RPC and retrying.
Info: Forwarding to: https://eth.merkle.io
Wrn: An RPC request has timed out, picking new RPC and retrying.
...
Info: Checking RPC health... OK!

  3. Run the same request against the upstream RPC directly:

> curl 'https://eth.merkle.io' \
  -X POST \
  -H 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","id":1,"method":"eth_chainId"}'

> {"jsonrpc":"2.0","id":1,"result":"0x1"}

Expected behavior
Blutgang shouldn't time out, since the RPC didn't return any error or time out itself.

Specs:

  • OS: archlinux
  • Kernel: linux 6.1
  • Blutgang version: 0.3.0
  • Docker version: 25.0.2

Refactor TUI code

The current TUI implementation is very crude and difficult to work with. It should be refactored so that it supports displaying responses and more stats.

rewrite named params to blocknumber

Rewrite params like latest, safe, and finalized to their respective block numbers. A prerequisite for this is replacing serde, so we have less work to do later.
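
A minimal sketch of the rewrite (hypothetical helper, assuming Blutgang already tracks the latest/safe/finalized block numbers): replace the tag in-place with a hex block number before the request is forwarded or cached.

use serde_json::{json, Value};

fn rewrite_block_tag(param: &mut Value, latest: u64, safe: u64, finalized: u64) {
    let number = match param.as_str() {
        Some("latest") => latest,
        Some("safe") => safe,
        Some("finalized") => finalized,
        _ => return, // already a concrete block number or something else entirely
    };
    *param = json!(format!("0x{:x}", number));
}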

Docker container compiled at runtime

Describe the bug
The binaries of the Docker image are compiled at runtime in the CMD line of the Dockerfile.
This works for development purposes, but it goes against Docker best practices and makes deployment to k8s or production less than ideal.

To Reproduce
Just deploy the docker image to compose / k8s

Expected behavior
The image runs instantly with prebuilt Rust binaries. Ideally, the image should be built with a multi-stage Dockerfile.

Head caching and health monitoring

Monitor the head of the chain, and have a live DB (a hashmap that lives in memory?) that can be used to temporarily cache data at the head. The cached data could later be inserted into the historical data DB.

A prerequisite for this is RPC and chain health monitoring. RPC nodes should be removed from the rotation if they are erroring/falling behind. The chain needs to be monitored for reorgs if we are going to be inserting the live db into the historic one.
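
A rough sketch of the shape this could take (hypothetical types, not Blutgang's design): an in-memory map keyed by block number that can be truncated on a reorg and drained into the historical DB once blocks finalize.

use std::collections::HashMap;

struct HeadCache {
    // block number -> cached JSON-RPC responses for that block
    blocks: HashMap<u64, Vec<String>>,
}

impl HeadCache {
    // Drop everything above the last block both forks agree on.
    fn on_reorg(&mut self, common_ancestor: u64) {
        self.blocks.retain(|&number, _| number <= common_ancestor);
    }

    // Hand back everything at or below the new finalized block so it can be
    // moved into the persistent historical DB.
    fn drain_finalized(&mut self, finalized: u64) -> Vec<(u64, Vec<String>)> {
        let ready: Vec<u64> = self.blocks.keys().copied().filter(|n| *n <= finalized).collect();
        ready
            .into_iter()
            .filter_map(|n| self.blocks.remove(&n).map(|responses| (n, responses)))
            .collect()
    }
}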

CORS error with MetaMask

Describe the bug
There seems to be a CORS error when http://localhost:3000 or http://127.0.0.1:3000 is added to MetaMask.

To Reproduce
Steps to reproduce the behavior:

  1. Run blutgang
  2. Try adding the designated endpoint to MetaMask
  3. Cry because CORS is taking away your joy from life.

Expected behavior
Expected to connect.

Specs:

  • OS: Tried through an Envoy proxy in k8s, running on Ubuntu 22.04 and via local M1 Pro Mac.
  • Kernel: latest for arm64
  • Blutgang version: 0.2.0

Blutgang options:
used the example default config with +1 RPC

  • DB mode: [HighThroughput]
  • DB compression: [no]
  • Cache capacity: [1000000000]
  • MA length: [20]
  • RPCs used: [llamanodes,merkle,geth]

Additional context
The issue is similar to when --http.corsdomain is not enabled in geth.
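
A minimal sketch of the kind of fix this points at, assuming hyper 0.14-style types (the errors reported elsewhere come from hyper) and permissive defaults; Blutgang's real handler may look different: attach CORS headers to responses and answer OPTIONS preflights.

use hyper::{Body, Method, Request, Response, StatusCode};

fn with_cors(mut response: Response<Body>) -> Response<Body> {
    let headers = response.headers_mut();
    headers.insert("Access-Control-Allow-Origin", "*".parse().unwrap());
    headers.insert("Access-Control-Allow-Methods", "POST, OPTIONS".parse().unwrap());
    headers.insert("Access-Control-Allow-Headers", "Content-Type".parse().unwrap());
    response
}

// Browsers send an OPTIONS preflight before the JSON-RPC POST; answer it directly.
fn handle_preflight(req: &Request<Body>) -> Option<Response<Body>> {
    if req.method() == Method::OPTIONS {
        let empty = Response::builder()
            .status(StatusCode::NO_CONTENT)
            .body(Body::empty())
            .unwrap();
        return Some(with_cors(empty));
    }
    None
}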

Metrics and JSON logging

Is your feature request related to a problem? Please describe.
Observability in the form of metrics would be nice to have, along with the option to enable JSON logging.

Describe the solution you'd like
I'd love to see metrics implemented here:

Key metrics could include:

  • Latency by RPC
  • Errors by RPC
  • Requests by RPC
  • etc.
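
As a rough illustration of the counters above, assuming the prometheus crate (metric names here are made up; this is not an existing Blutgang feature):

use prometheus::{Encoder, IntCounterVec, Opts, Registry, TextEncoder};

fn build_metrics() -> prometheus::Result<(Registry, IntCounterVec, IntCounterVec)> {
    let registry = Registry::new();
    let requests = IntCounterVec::new(
        Opts::new("blutgang_rpc_requests_total", "Requests forwarded, per upstream RPC"),
        &["rpc"],
    )?;
    let errors = IntCounterVec::new(
        Opts::new("blutgang_rpc_errors_total", "Errors returned, per upstream RPC"),
        &["rpc"],
    )?;
    registry.register(Box::new(requests.clone()))?;
    registry.register(Box::new(errors.clone()))?;
    Ok((registry, requests, errors))
}

// Render the registry in the Prometheus text format for a /metrics endpoint.
fn render(registry: &Registry) -> Vec<u8> {
    let mut buffer = Vec::new();
    TextEncoder::new()
        .encode(&registry.gather(), &mut buffer)
        .expect("encoding metrics");
    buffer
}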

Describe alternatives you've considered
...

Additional context
Logging would also be nice to have in JSON format, for searchability. Structured logs are nice.

[META] Blutgang 0.3.0 Garreg Mach

This is a meta issue for general questions and troubleshooting related to Blutgang 0.3.0 Garreg Mach.

If you have trouble updating or using it, run into undefined behaviour, or hit anything minor that does not deserve its own issue, you can report it here.

Batch requests and Ethereum ETL do not work

Describe the bug
I am running the latest version of blutgang (0.1.0) on the server and exposing it on port 3000.
The config I am using:

# Config for blutgang goes here
[blutgang]
do_clear = false # Clear the cache DB on startup
address = "0.0.0.0:3000" # Where to bind blutgang to
ma_length = 10 # Moving average length for the latency
sort_on_startup = true # Sort RPCs by latency on startup. Recommended to leave on.
health_check = true # Enable health checking
ttl = 300 # Acceptable time to wait for a response in ms
health_check_ttl = 12000 # Time between health checks in ms

# Sled config
# Sled is the database we use for our cache, for more info check their docs
[sled]
db_path = "./blutgang-cache" # Path to db
mode = "HighThroughput" # sled mode. Can be HighThroughput/LowSpace
cache_capacity = 1000000000 # Cache size in bytes. Doesn't matter too much as your OS should also be caching.
compression = false # Use zstd compression. Reduces size 60-70%, and increases CPU and latency by around 10% for db writes and 2% for reads
print_profile = false # Print DB profile when dropped. Doesn't do anything for now.
flush_every_ms = 24000 # Frequency of flushes in ms

# Add separate RPCs as TOML tables
# DO NOT name an rpc `blutgang` or `sled`
[private]
url = "http://eth.hckn.dev:8545" # RPC url
max_consecutive = 5 # The maximum amount of times we can use this RPC in a row.
max_per_second = 0 # Max amount of queries per second. Doesn't do anything for now.

The following curl request to blutgang succeeds:

curl -X POST -i -H 'Content-type:application/json'  http://10.1.194.183:3000 -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}'

The following batch request fails:

curl -X POST -i -H 'Content-type:application/json'  http://10.1.194.183:3000 -d '[{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1},{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1", false],"id":1}]'

The error is:

curl: (52) Empty reply from server

Then I pointed our Ethereum ETL process to the blutgang server:

"stream", "-p", "http://10.1.194.183:3000"

And got this error on the Ethereum ETL side:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 802, in urlopen
    **response_kw,
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 454, in getresponse
    httplib_response = super().getresponse()
  File "/usr/local/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/usr/local/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/usr/local/lib/python3.7/http/client.py", line 288, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 497, in send
    chunked=chunked,
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 845, in urlopen
    method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 470, in increment
    raise reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/util.py", line 38, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 802, in urlopen
    **response_kw,
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 454, in getresponse
    httplib_response = super().getresponse()
  File "/usr/local/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/usr/local/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/usr/local/lib/python3.7/http/client.py", line 288, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

And on the blutgang side I see the following error in the log:

thread 'tokio-runtime-worker' panicked at 'cannot access key "id" in JSON array', /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.103/src/value/index.rs:102:18
thread 'tokio-runtime-worker' panicked at 'cannot access key "id" in JSON array', /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.103/src/value/index.rs:102:18
Checking RPC health... OK!
thread 'tokio-runtime-worker' panicked at 'cannot access key "id" in JSON array', /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.103/src/value/index.rs:102:18

To Reproduce
Steps to reproduce the behavior:

  1. Run blutgang
  2. Make curl request
curl -X POST -i -H 'Content-type:application/json'  http://10.1.194.183:3000 -d '[{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1},{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1", false],"id":1}]'
  3. See error

Run Ethereum ETL and point it to the blutgang server. See the error.

Expected behavior
The batch curl request returns a result.
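
For context, the panic points at indexing "id" on a JSON array. A minimal sketch of the branching that batch support implies (hypothetical helpers, not Blutgang's actual code):

use serde_json::Value;

// A JSON-RPC payload is either a single request object or a batch array.
fn split_batch(payload: Value) -> Vec<Value> {
    match payload {
        Value::Array(requests) => requests, // batch: handle each request separately
        single => vec![single],             // single request: wrap it
    }
}

fn request_id(request: &Value) -> Option<&Value> {
    request.get("id") // returns None instead of panicking on non-objects
}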

Specs:

  • OS: [e.g. Ubuntu 22.04]
  • Kernel: [Linux ip-10-1-193-21 5.19.0-1025-aws #26~22.04.1-Ubuntu SMP Mon Apr 24 01:58:03 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux]
  • Blutgang version: [0.1.0]
  • Arm64 server: [Architecture: aarch64]

Blutgang options:

  • DB mode: [HighThroughput]
  • DB compression: [no]
  • Cache capacity: [1000000000]
  • MA length: [10]
  • RPCs used: we are using private RPC node which we are hosting ourself

Bug: Subscriptions do not get resubscribed after node failure

Describe the bug
Blutgang tracks which nodes get subscribed to what events, but in case of node failure, subscriptions can get dropped and not recover.

To Reproduce
Steps to reproduce the behavior:

  1. eth_subscribe
  2. have node fail
  3. technically still have an active subscription but never get a response after failure

Expected behavior
Resubscribe to a new set of nodes

subscription fallbacks

Track what nodes are subscribed to what subscriptions, and if a node ever fails, we can resubscribe the event in Blutgang so users get uninterrupted service.
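
A rough sketch of the bookkeeping described above (hypothetical types, not Blutgang's internals): remember the original subscription parameters so they can be replayed against a healthy node when the original node fails.

use std::collections::HashMap;

struct TrackedSubscription {
    user_id: u64,
    node_index: usize,
    // The original eth_subscribe params, e.g. ["newHeads"], kept for replay.
    params: String,
}

struct SubscriptionTracker {
    // node-assigned sub_id -> tracked entry
    subs: HashMap<String, TrackedSubscription>,
}

impl SubscriptionTracker {
    // Pull out every subscription pinned to the failed node so the caller can
    // re-send eth_subscribe (with the stored params) to another node.
    fn take_for_failed_node(&mut self, failed: usize) -> Vec<TrackedSubscription> {
        let ids: Vec<String> = self
            .subs
            .iter()
            .filter(|(_, sub)| sub.node_index == failed)
            .map(|(id, _)| id.clone())
            .collect();
        ids.into_iter().filter_map(|id| self.subs.remove(&id)).collect()
    }
}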
