alethio / eth2stats-client
Command line stats collector for Eth2Stats Ethereum 2 Network Monitor
Home Page: https://eth2stats.io
License: MIT License
and keeps resetting and spamming the nodes list
Is there any way to get eth2stats to connect to secure gRPC? I just got prysm set up to use a certificate, but then realized eth2stats won't work with it. :(
When built from source, token.dat is owned by root:root by default, which makes it impossible to run eth2stats as a regular user.
WARN Error processing HTTP API request method: GET, path: /beacon/head, status: 405 Method Not Allowed,
Running Nimbus client 0.6.6
Running eth2stats-client version v0.0.16+d729a1d
Description of the warning: eth2stats is looking for process_resident_memory_bytes. The closest stats that Nimbus reports are:
nim_gc_mem_bytes 4100096.0
nim_gc_mem_occupied_bytes 2733144.0
sqlite3_memory_used_bytes 2614336.0
Hey, the default port for the Nimbus RPC has changed to 9190 - it would be great if the "Add your node" example could be updated too!
time="2020-02-13T08:57:53Z" level=error msg="[prysm] rpc error: code = Unavailable desc = UNAVAILABLE:upstream request timeout"
time="2020-02-13T08:57:53Z" level=fatal msg="[telemetry] rpc error: code = Unavailable desc = UNAVAILABLE:upstream request timeout"
Docker image was alethio/eth2stats-client@sha256:a0b388460afcaf3a336fb47ef188eede0a5b92760a72fd0b1ec3e77025055f31
Hey, it looks like Nimbus is missing from the "Add your node" dropdown of client types - any chance it can be added?
Lighthouse nodes currently have their peer count overstated on eth2stats.io because eth2stats-client ignores the peer states when reading /eth/v1/node/peers. I think it should use the state field to work out which peers are connected.
E.g. eth2stats reports this number:
$ curl -s "http://localhost:5052/eth/v1/node/peers" | jq '.data | length'
244
when it should report:
$ curl -s "http://localhost:5052/eth/v1/node/peers" | jq '.data | map(select(.state == "connected")) | length'
50
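The proposed fix could be sketched in Go (the client's language) roughly as follows; the type and function names here are hypothetical, not code that exists in eth2stats-client:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// peersResponse mirrors the shape of the /eth/v1/node/peers payload,
// keeping only the field we need.
type peersResponse struct {
	Data []struct {
		State string `json:"state"`
	} `json:"data"`
}

// countConnected returns how many peers report state "connected",
// instead of the total length of the data array.
func countConnected(body []byte) (int, error) {
	var resp peersResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return 0, err
	}
	n := 0
	for _, p := range resp.Data {
		if p.State == "connected" {
			n++
		}
	}
	return n, nil
}

func main() {
	sample := []byte(`{"data":[{"state":"connected"},{"state":"disconnected"},{"state":"connected"}]}`)
	n, err := countConnected(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 2
}
```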
For Prysm, eth2stats logs every new headSlot, whereas for other clients logging quiets down and only errors are logged. I guess this has to do with the Prysm connection being gRPC, and therefore the slightly different implementation.
prysm_eth2stats_1 | time="2020-08-08T12:00:06Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
prysm_eth2stats_1 | time="2020-08-08T12:00:06Z" level=info msg="[prysm] setting up beacon client connection"
prysm_eth2stats_1 | time="2020-08-08T12:00:06Z" level=warning msg="[prysm] no tls certificate provided; will use insecure connection to beacon chain"
prysm_eth2stats_1 | time="2020-08-08T12:00:06Z" level=info msg="[core] setting up eth2stats server connection"
prysm_eth2stats_1 | time="2020-08-08T12:00:06Z" level=info msg="[core] getting beacon client version"
prysm_eth2stats_1 | time="2020-08-08T12:00:08Z" level=info msg="[core] got beacon client version" version="Prysm/v1.0.0-alpha.19/ec21316efd11bce1a84fb713b0db5bf2d025f9b6. Built at: 2020-08-06 05:39:54+00:00"
prysm_eth2stats_1 | time="2020-08-08T12:00:08Z" level=info msg="[core] getting beacon client genesis time"
prysm_eth2stats_1 | time="2020-08-08T12:00:08Z" level=info msg="[core] beacon client genesis time" genesisTime=0
prysm_eth2stats_1 | time="2020-08-08T12:00:08Z" level=info msg="[core] awaiting connection to eth2stats server"
prysm_eth2stats_1 | time="2020-08-08T12:00:09Z" level=info msg="[core] getting chain head for initial feed"
prysm_eth2stats_1 | time="2020-08-08T12:00:09Z" level=info msg="[core] got chain head" headSlot=28495
prysm_eth2stats_1 | time="2020-08-08T12:00:09Z" level=info msg="[core] successfully connected to eth2stats server"
prysm_eth2stats_1 | time="2020-08-08T12:00:09Z" level=info msg="[core] setting up chain heads subscription"
prysm_eth2stats_1 | time="2020-08-08T12:00:09Z" level=info msg="[prysm] listening on stream"
prysm_eth2stats_1 | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26625
prysm_eth2stats_1 | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26627
prysm_eth2stats_1 | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26628
prysm_eth2stats_1 | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26629
prysm_eth2stats_1 | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26630
prysm_eth2stats_1 | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26631
prysm_eth2stats_1 | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26632
prysm_eth2stats_1 | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26635
prysm_eth2stats_1 | time="2020-08-08T12:01:19Z" level=info msg="[prysm] got chain head" headSlot=26636
prysm_eth2stats_1 | time="2020-08-08T12:01:19Z" level=info msg="[prysm] got chain head" headSlot=26637
# ... keeps logging "[prysm] got chain head" for each new head slot
lighthouse_eth2stats_1 | time="2020-08-08T10:01:44Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
lighthouse_eth2stats_1 | time="2020-08-08T10:02:23Z" level=info msg="[core] setting up eth2stats server connection"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:23Z" level=info msg="[core] getting beacon client version"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:23Z" level=info msg="[core] got beacon client version" version=Lighthouse/v0.1.2-unstable/x86_64-linux
lighthouse_eth2stats_1 | time="2020-08-08T10:02:23Z" level=info msg="[core] getting beacon client genesis time"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:23Z" level=info msg="[core] beacon client genesis time" genesisTime=1596546008
lighthouse_eth2stats_1 | time="2020-08-08T10:02:23Z" level=info msg="[core] awaiting connection to eth2stats server"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:29Z" level=info msg="[core] getting chain head for initial feed"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:29Z" level=info msg="[core] got chain head" headSlot=26271
lighthouse_eth2stats_1 | time="2020-08-08T10:02:29Z" level=info msg="[core] successfully connected to eth2stats server"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:29Z" level=info msg="[core] setting up chain heads subscription"
lighthouse_eth2stats_1 | time="2020-08-08T10:02:29Z" level=info msg="[polling] polling for new heads"
# ... no further logs unless there's an error, while latest data continues to show on eth2stats.io correctly
teku_eth2stats_1 | time="2020-08-08T12:23:29Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
teku_eth2stats_1 | time="2020-08-08T12:24:44Z" level=info msg="[core] setting up eth2stats server connection"
teku_eth2stats_1 | time="2020-08-08T12:24:44Z" level=info msg="[core] getting beacon client version"
teku_eth2stats_1 | time="2020-08-08T12:24:46Z" level=info msg="[core] got beacon client version" version=teku/v0.12.3-dev-b40fd617/linux-x86_64/oracle_openjdk-java-14
teku_eth2stats_1 | time="2020-08-08T12:24:46Z" level=info msg="[core] getting beacon client genesis time"
teku_eth2stats_1 | time="2020-08-08T12:24:46Z" level=info msg="[core] beacon client genesis time" genesisTime=1596546008
teku_eth2stats_1 | time="2020-08-08T12:24:46Z" level=info msg="[core] awaiting connection to eth2stats server"
teku_eth2stats_1 | time="2020-08-08T12:24:46Z" level=info msg="[core] getting chain head for initial feed"
teku_eth2stats_1 | time="2020-08-08T12:24:47Z" level=info msg="[core] got chain head" headSlot=28624
teku_eth2stats_1 | time="2020-08-08T12:24:47Z" level=info msg="[core] successfully connected to eth2stats server"
teku_eth2stats_1 | time="2020-08-08T12:24:47Z" level=info msg="[core] setting up chain heads subscription"
teku_eth2stats_1 | time="2020-08-08T12:24:47Z" level=info msg="[polling] polling for new heads"
# ... no further logs unless there's an error, while latest data continues to show on eth2stats.io correctly
Logging quiets down after a successful chain heads subscription. Perhaps the chain head log could be demoted to "debug" level?
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=info msg="[core] getting beacon client version"
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=info msg="[core] got beacon client version" version=teku/v0.11.3-dev-44d9e02a/linux-aarch_64/-ubuntu-
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=info msg="[core] getting beacon client genesis time"
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=error msg="[main] setting up: strconv.ParseInt: parsing "": invalid syntax"
The client should be able to deal with this and retry until everything is OK.
Current instructions show:
git clone git@github.com:Alethio/eth2stats-client.git
which results in a permissions error for anyone without write access to the repository. It should be:
git clone https://github.com/Alethio/eth2stats-client.git
Certificate expired 11/28/2020, 10:20:55 AM (Eastern Standard Time)
Would it be possible to provide a path via flag for the token.dat?
This would help us add eth2stats client in our deployments.
Sometimes the client crashes and on restart it keeps crashing with the following error message:
time="2020-02-10T15:42:17Z" level=error msg="[prysm] rpc error: code = ResourceExhausted desc = grpc: received message larger than max (8400136 vs. 4194304)"
time="2020-02-10T15:42:17Z" level=fatal msg="[telemetry] rpc error: code = ResourceExhausted desc = grpc: received message larger than max (8400136 vs. 4194304)"
This wasn't an issue before and I cannot seem to reproduce it consistently. I'm running the client as part of a docker-compose
stack, together with the Prysm beacon node:
version: '2'
services:
  node:
    image: gcr.io/prysmaticlabs/prysm/beacon-chain:latest
    restart: always
    stdin_open: true
    tty: true
    command: --datadir=/data --p2p-host-ip=94.103.153.169 --min-sync-peers=7 --p2p-max-peers=100 --deposit-contract=0x4689a3C63CE249355C8a573B5974db21D2d1b8Ef
    ports:
      - 4000:4000
      - 13000:13000
    volumes:
      - '/opt/prysm/beacon:/data'
    labels:
      - 'com.centurylinklabs.watchtower.enable=true'
  ethstats:
    image: alethio/eth2stats-client:latest
    restart: always
    command: run --v --eth2stats.node-name="morten.eth" --data.folder="/data" --eth2stats.addr="grpc.sapphire.eth2stats.io:443" --eth2stats.tls=true --beacon.type="prysm" --beacon.addr="node:4000" --beacon.metrics-addr="http://node:8080/metrics"
    volumes:
      - '/opt/prysm/stats:/data'
    labels:
      - 'com.centurylinklabs.watchtower.enable=true'
    depends_on:
      - node
    links:
      - node:node
  watchtower:
    image: containrrr/watchtower
    restart: always
    command: --label-enable --cleanup --include-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
running eth2stats gives the following error:
FATA[0000] [core] invalid node URL: localhost:5052
I run eth2stats with the following parameters:
./eth2stats-client run \
--eth2stats.node-name="ethnode-c65e37164" \
--eth2stats.addr="grc.summer.eth2stats.io:443" \
--eth2stats.tls=true \
--beacon.type="lighthouse" \
--beacon.addr="localhost:5052" \
--beacon.metrics-addr="http://localhost:8080/metrics"
lighthouse is up and running fine.
A short glimpse to the used ports with sudo lsof -i -P -n | grep LISTEN
gives the following output
lighthous 3962 ubuntu 30u IPv4 728231 0t0 TCP *:9000 (LISTEN)
lighthous 3962 ubuntu 35u IPv4 728237 0t0 TCP 127.0.0.1:5052 (LISTEN)
Do you have any ideas what it could be?
While investigating an OOM killing, I noticed eth2stats-client using 1.5GB of memory, which is a lot more than I've ever seen it use previously. I'm using the latest v1
node type with Lighthouse v0.3.0:
~/eth2stats-client/eth2stats-client run \
--eth2stats.node-name="$node_name" \
--data.folder ~/.eth2stats/data \
--eth2stats.addr="grpc.medalla.eth2stats.io:443" --eth2stats.tls=true \
--beacon.type="v1" \
--beacon.addr="http://localhost:5052" \
--beacon.metrics-addr="http://localhost:5054/metrics"
My only hunch is the attestation pool. Lighthouse hoards a lot of attestations in its pool during periods without finality; e.g. I currently have 329822 attestations, which consume 188MB as a JSON response from /eth/v1/beacon/pool/attestations. Still, this is an order of magnitude less than the amount of memory eth2stats is consuming.
When I restarted the client its memory usage quickly jumped back up to around 1.3GB:
Version: eth2stats-client version v0.0.14+1455e8d
When I open eth2stats.io on my iPhone (iOS 13.3.1), it gets stuck at the opening screen (see screenshot) and no longer reaches the list of nodes. It was still working a few weeks ago, I guess before the latest update. The desktop version continues to work.
My eth2stats for lighthouse don't seem to be updating
time="2020-07-17T19:04:56Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
time="2020-07-17T19:04:56Z" level=info msg="[core] setting up eth2stats server connection"
time="2020-07-17T19:04:56Z" level=info msg="[core] getting beacon client version"
time="2020-07-17T19:04:56Z" level=info msg="[core] got beacon client version" version=Lighthouse/v0.1.2-unstable/x86_64-linux
time="2020-07-17T19:04:56Z" level=info msg="[core] getting beacon client genesis time"
time="2020-07-17T19:04:56Z" level=info msg="[core] beacon client genesis time" genesisTime=1593433805
time="2020-07-17T19:04:56Z" level=info msg="[core] awaiting connection to eth2stats server"
time="2020-07-17T19:05:03Z" level=info msg="[core] getting chain head for initial feed"
time="2020-07-17T19:05:03Z" level=info msg="[core] got chain head" headSlot=131574
time="2020-07-17T19:05:03Z" level=info msg="[core] successfully connected to eth2stats server"
time="2020-07-17T19:05:03Z" level=info msg="[core] setting up chain heads subscription"
time="2020-07-17T19:05:03Z" level=info msg="[polling] polling for new heads"
That's all it does, no other logs, at all. But my Beacon-Node is running fine.
If I stop the docker and re-start it, it updates the headSlot to the latest headSlot, then sits there again.
I appear to be on the latest commit d1c921565596bbaf0218e24f96cf7e351685eaa5
With the recent Medalla clock chaos, it became apparent how important a node's clock is. And since this is no secret to the outside world anyway, why not share it on eth2stats to raise awareness and alert users?
Possible design:
- Add a clock drift field to the /clients response data (0 = not available).
- Let d = abs(server_time - client_time), and color the node accordingly:
  d < 0.4: green
  0.4 <= d < 1.0: yellow
  1.0 <= d < 3.0: orange
  d >= 3.0: red
Questions:
Related cleanup:
I'm happy to implement all this if others like the idea, and agree on above design. Feedback welcome.
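The bucketing above could be sketched as a small Go helper; the thresholds come from this proposal, and driftColor is a hypothetical name, not anything that exists in eth2stats today:

```go
package main

import (
	"fmt"
	"math"
)

// driftColor maps the absolute clock drift d = |server_time - client_time|
// (in seconds) to the status colors from the proposal above.
func driftColor(serverTime, clientTime float64) string {
	d := math.Abs(serverTime - clientTime)
	switch {
	case d < 0.4:
		return "green"
	case d < 1.0:
		return "yellow"
	case d < 3.0:
		return "orange"
	default:
		return "red"
	}
}

func main() {
	fmt.Println(driftColor(100.0, 100.2)) // green
	fmt.Println(driftColor(100.0, 98.5))  // orange
}
```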
As title describes:
time="2020-11-18T03:10:30Z" level=warning msg="[metrics-watcher] failed to poll metrics: parse \"http://beacon-chain-prod-4:9090/metrics\": first path segment in URL cannot contain colon"
Getting error standard_init_linux.go:211: exec user process caused "exec format error"
[pi@archlinux ~]$ docker run --restart always --network="host" --name eth2stats-client -v ~/.eth2stats/data:/data alethio/eth2stats-client:latest run --eth2stats.node-name="archlinuxarm" --data.folder="/data" --eth2stats.addr="grpc.topaz.eth2stats.io:443" --eth2stats.tls=true --beacon.type="prysm" --beacon.addr="localhost:4000" --beacon.metrics-addr="http://localhost:8080/metrics"
standard_init_linux.go:211: exec user process caused "exec format error"
[pi@archlinux ~]$ uname -a
Linux archlinux 5.4.38-1-ARCH #1 SMP PREEMPT Wed May 6 11:05:57 MDT 2020 aarch64 GNU/Linux
For example, this character is not allowed: ⟠
I'm using Prysm alpha 14.
Hi,
I added my Prysm node (alpha17) to eth2stats this morning. Later in the afternoon it dropped off the stats board, so I looked into the tmux session running my node and noticed it was throwing errors because it couldn't open any more file handles. I also saw that it logs "New gRPC connection to beacon node" with a new port every 12 seconds. I simply rebooted the device and everything was back to normal. I investigated what these ports are about and saw in lsof that all of these connections are ESTABLISHED. After stopping the container, it immediately stops adding new connections. It must be because of the pre-genesis condition. I will turn the container back on after genesis, otherwise it will crash the node again.
Hi, running the Eth2stats client as a DAppNode package and the Prysm beacon chain on the Medalla testnet with the following config parameters:
BEACON_ADDR prysm-medalla-beacon-chain.dappnode:4000
BEACON_METRICS http://prysm-medalla-beacon-chain.dappnode:8080/metrics
BEACON_TYPE prysm
Returns the following error:
time="2020-08-07T08:55:20Z" level=info msg="Could not load config file. Falling back to args. Error: Config File "config" Not Found in "[/]"" module=main
time="2020-08-07T08:55:20Z" level=debug msg="[main] Debug mode"
time="2020-08-07T08:55:20Z" level=info msg="[prysm] setting up beacon client connection"
time="2020-08-07T08:55:20Z" level=info msg="[core] setting up eth2stats server connection"
time="2020-08-07T08:55:20Z" level=debug msg="[core] looking for existing token"
time="2020-08-07T08:55:20Z" level=warning msg="[core] token file not found; will register as new client"
time="2020-08-07T08:55:20Z" level=info msg="[core] getting beacon client version"
time="2020-08-07T08:55:20Z" level=info msg="[core] got beacon client version" version="Prysm/v1.0.0-alpha.19/0d118df0343bf0e268e9fb4f2d5eb60156519c11. Built at: 2020-08-05 14:27:07+00:00"
time="2020-08-07T08:55:20Z" level=info msg="[core] getting beacon client genesis time"
time="2020-08-07T08:55:20Z" level=info msg="[core] got beacon client genesis time" genesisTime=1596546008
time="2020-08-07T08:55:20Z" level=info msg="[core] awaiting connection to eth2stats server"
time="2020-08-07T08:55:21Z" level=error msg="[main] setting up: eth2stats: failed to connect: rpc error: code = Unimplemented desc = Not Found: HTTP status code 404; transport: received the unexpected content-type "text/plain; charset=utf-8""
time="2020-08-07T08:55:33Z" level=info msg="[main] retrying..."
Currently the log entries do not show a timestamp. It would be nice to see when a log entry is actually printed, so that we have a time reference.
WARN[2639] [core] ChainHead request was skipped due to rate limiting
INFO[2641] [prysm] got chain head headSlot=67504
INFO[2643] [prysm] got chain head headSlot=67505
INFO[2644] [prysm] got chain head headSlot=67506
INFO[2646] [prysm] got chain head headSlot=67507
INFO[2648] [prysm] got chain head headSlot=67508
INFO[2650] [prysm] got chain head headSlot=67509
INFO[2652] [prysm] got chain head headSlot=67510
INFO[2653] [prysm] got chain head headSlot=67511
INFO[2655] [prysm] got chain head headSlot=67512
INFO[2657] [prysm] got chain head headSlot=67513
INFO[2659] [prysm] got chain head headSlot=67514
INFO[2659] [prysm] got chain head headSlot=67515
WARN[2659] [core] ChainHead request was skipped due to rate limiting
INFO[2661] [prysm] got chain head headSlot=67516
INFO[2662] [prysm] got chain head headSlot=67517
WARN[2662] [core] ChainHead request was skipped due to rate limiting
INFO[2664] [prysm] got chain head headSlot=67518
Spadina launches 29-September, eth2stats support would be helpful.
teku_stats_1 | time="2020-08-17T23:50:04Z" level=warning msg="[metrics-watcher] failed to poll metrics: text format parsing error in line 934: expected float as value for 'quantile' label, got \"50%\""
teku_stats_1 | time="2020-08-17T23:50:04Z" level=info msg="[metrics-watcher] querying metrics"
teku_stats_1 | time="2020-08-17T23:50:04Z" level=error msg="[metrics-watcher] reading text format failed:text format parsing error in line 934: expected float as value for 'quantile' label, got \"50%\""
teku_stats_1 | time="2020-08-17T23:50:04Z" level=warning msg="[metrics-watcher] failed to poll metrics: text format parsing error in line 934: expected float as value for 'quantile' label, got \"50%\""
teku_stats_1 | time="2020-08-17T23:50:05Z" level=info msg="[metrics-watcher] querying metrics"
Teku metrics response values:
# TYPE validator_attestation_publication_delay summary
validator_attestation_publication_delay{quantile="50%",} 0.0
validator_attestation_publication_delay{quantile="95%",} 0.0
validator_attestation_publication_delay{quantile="99%",} 0.0
validator_attestation_publication_delay{quantile="100%",} 0.0
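One conceivable client-side workaround would be to normalize Teku's percent-style quantile labels to the float form the Prometheus text-format parser expects. A sketch, assuming the labels look like the values above (normalizeQuantile is a hypothetical helper, not eth2stats-client code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeQuantile converts a percent-style quantile label such as
// "50%" into the float form "0.5"; labels without a "%" suffix are
// returned unchanged.
func normalizeQuantile(label string) (string, error) {
	if !strings.HasSuffix(label, "%") {
		return label, nil
	}
	pct, err := strconv.ParseFloat(strings.TrimSuffix(label, "%"), 64)
	if err != nil {
		return "", err
	}
	return strconv.FormatFloat(pct/100, 'g', -1, 64), nil
}

func main() {
	v, err := normalizeQuantile("50%")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // 0.5
}
```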
It seems that during start-up, if the beacon node is not yet ready to serve traffic, eth2stats will crash.
This causes a problem when running both of them together in a Kubernetes pod. The pod starts up, starting a beacon node and eth2stats. The beacon node takes a little time before it can serve traffic, but eth2stats attempts to connect immediately, fails, and crashes. This causes all the containers to be restarted, and we get into an infinite loop of failure.
Would it make sense to have some retry/back-off mechanism at startup to be more forgiving of the beacon node not yet being ready?
Not sure if I'm missing something, but trying to run the eth2stats client inside a docker-compose stack alongside a working beacon (reachable at beacon:4000), I end up with the following error on startup:
prysm_eth2stats | time="2020-08-17T09:02:05Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
prysm_eth2stats | time="2020-08-17T09:02:05Z" level=fatal msg="[core] invalid node URL: \"beacon:4000\""
Compose block:
stats:
  image: alethio/eth2stats-client:latest
  restart: always
  container_name: prysm_eth2stats
  hostname: prysm_eth2stats
  environment:
    - TZ=${TZ}
  volumes:
    - "./volumes/eth2stats/data:/data:rw"
  command:
    - run
    - --eth2stats.node-name="mgcrea-prysm-01"
    - --data.folder="/data"
    - --eth2stats.addr="grpc.medalla.eth2stats.io:443"
    - --eth2stats.tls=true
    - --beacon.type="prysm"
    - --beacon.addr="beacon:4000"
    - --beacon.metrics-addr="http://beacon:8080/metrics"
  depends_on:
    - beacon
Getting the following error messages. I can confirm the node shows up on the site, but the peer count is indeed empty.
INFO[0000] [metrics-watcher] Started polling metrics
INFO[0000] [metrics-watcher] querying metrics
INFO[0000] [prysm] listening on stream
ERRO[0000] [prysm] rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference
ERRO[0000] [telemetry] getting peer count: rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference
ERRO[0012] [prysm] rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference
ERRO[0012] [telemetry] getting peer count: rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference
ERRO[0024] [prysm] rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference
ERRO[0024] [telemetry] getting peer count: rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference
Trying to build the repo on a NANOPC T4. This is the output:
root@ethnode-9538ecf1:~/eth2stats-client# make build
go build -ldflags "-X main.buildVersion="v0.0.3-872141e""
main.go:7:2: cannot find package "github.com/alethio/eth2stats-client/commands" in any of:
/usr/lib/go-1.10/src/github.com/alethio/eth2stats-client/commands (from $GOROOT)
/root/eth2stats-client/src/github.com/alethio/eth2stats-client/commands (from $GOPATH)
Makefile:4: recipe for target 'build' failed
make: *** [build] Error 1
Not really familiar with Go. Do you have any hints?
Hi Alethio team,
@prysmaticlabs has announced a new testnet, Onyx.
Could we have a new tab and endpoint for Onyx nodes?
Announcement: https://medium.com/prysmatic-labs/introducing-the-onyx-testnet-6dadbd95d873
It looks like Lighthouse changed their HTTP API (sigp/lighthouse#1434). I think this now breaks eth2stats-client:
WARN Error processing HTTP API request method: GET, path: /beacon/head, status: 405 Method Not Allowed, elapsed: 271.068µs
Although the Medalla testnet is still alive, it has been deprecated and is no longer maintained. Any plans to switch to the latest testnet, Pyrmont?