
farm-proxy's People

Contributors

lkr-braiins · spigi42 · vitficl


farm-proxy's Issues

FP stops after 5 minutes

Hi,
I tested FP on a small scale and it worked OK with no problems, but when I tried to test it on two containers with 600 miners it keeps stopping after about 5 to 10 minutes.
fp_quits.log

Hashrate split

How does hashrate splitting work? I've set everything up, but all of the hashrate goes to only one pool.

Screenshot 2022-05-23 at 11 26 37

Screenshot 2022-05-23 at 11 26 54
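For reference, Farm Proxy distributes hashrate between targets through weighted goals in the routing configuration. The sketch below uses the `hr_weight` parameter mentioned in a later issue on this page; the names, URLs and weights are placeholders, and the exact table layout may differ between farm-proxy versions:

```toml
# farm_proxy.toml sketch -- names, URLs and weights are illustrative
[[server]]
name = "S1"
port = 3333

[[target]]
name = "pool_a"
url = "stratum+tcp://pool-a.example.com:3333"   # placeholder URL
user_identity = "userA.worker"

[[target]]
name = "pool_b"
url = "stratum+tcp://pool-b.example.com:3333"   # placeholder URL
user_identity = "userB.worker"

[[routing]]
from = ["S1"]

# Two goals; hr_weight sets the share of hashrate each goal should receive
[[routing.goal]]
name = "a"
hr_weight = 70
[[routing.goal.level]]
targets = ["pool_a"]

[[routing.goal]]
name = "b"
hr_weight = 30
[[routing.goal.level]]
targets = ["pool_b"]
```

Note that the proxy assigns whole miners to goals, so with only a few machines the observed split can deviate noticeably from the configured weights.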

FarmProxy Shows Workers as Connected but not individual hashrates

This may be considered a feature request: when configured with individual workers, FarmProxy does not show per-worker performance — accepted shares, rejected shares, average hashrate, etc.

A view of a stacked graph and statistics similar to what would be seen on the pool manager side (such as on the Braiins worker dashboard) would be ideal, so local statistics could be viewed per worker, without needing to go to the web.

Prometheus volume size

Hello,

After a while I could not log in to the Grafana service even though the login was correct. In a log file I found that the disk was full: the Prometheus service had filled 100 % of the disk capacity.

The docker-compose file does contain a limit to automatically delete old data:

 - '--storage.tsdb.retention.time=200h'

However, the disk capacity was exceeded before anything could be erased.

It would be nice to have information about data size per miner per day; that would help with the math for the required disk size.

I solved the issue by stopping the Prometheus and Grafana services, erasing the Prometheus volume, and running docker-compose up again.

The issue did not affect mining itself; the farm-proxy service was OK.
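Besides the time-based limit, Prometheus also supports a size-based retention cap, which bounds disk usage directly regardless of per-miner data rates. A possible addition to the Prometheus `command` in docker-compose (the 20GB value is an arbitrary example to adjust to your disk):

```yaml
command:
  - '--storage.tsdb.retention.time=200h'
  # delete the oldest TSDB blocks once total size exceeds this cap (example value)
  - '--storage.tsdb.retention.size=20GB'
```

Whichever retention limit is hit first triggers deletion, so the size cap acts as a safety net when the time-based limit alone lets the volume fill up.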

FarmProxy Fails to proxy Stratum v1 Workers to Stratum v2 Pool

Expected behavior: configuring FP with a Stratum V2 endpoint (Braiins Pool) should allow all workers behind the proxy to mine over Stratum V2, regardless of whether they themselves speak Stratum V1 or V2. This would make it possible to standardize a mixed mining environment on Stratum V2, or to run legacy miners against modern pool infrastructure, without operating two pool endpoints (and suffering the associated deficiencies of Stratum V1).

Observed behavior:

    Err(NotPresent)
    2023-01-16T02:40:17.487400Z INFO farm_proxy: Welcome to Farm Proxy 22.11 (commit-id: f7976df13b2bb3d2a625b30c3fd3077d8e6a04cf, is-dirty=false, additional-commits=false), rev=f7976df13b
    2023-01-16T02:40:17.487446Z INFO farm_proxy: Using configuration file: /conf/farm_proxy.toml
    Error: Invalid configuration: `/conf/farm_proxy.toml'

    Caused by:
        Stratum V2 is not supported URL for target. Use Stratum V1 address

Steps to reproduce: connect a Stratum V1 worker to Farm Proxy via Stratum V1, configure Farm Proxy to mine against a pool's Stratum V2 URL and credentials, and observe that the miner remains in the "waiting for work" state while `docker logs farm-proxy` shows the above error.

If this is not yet implemented, how close is the functionality to being complete? If it is implemented, does something specific need to be set in the TOML file to enable it?
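Until V2 target URLs are accepted, a workaround consistent with the error message is to point the target at the pool's Stratum V1 endpoint instead. A sketch (the URL and credentials are placeholders to replace with your pool's actual V1 address):

```toml
[[target]]
name = "braiins_v1"
# Stratum V1 endpoint; the proxy rejects Stratum V2 target URLs in this build
url = "stratum+tcp://stratum.braiins.com:3333"   # placeholder address
user_identity = "username.worker"
```

Miner-side connections to the proxy are unaffected; only the upstream target URL scheme matters here.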

bos_referral_code error

When bos_referral_code is set, I get this error and the farm won't start:

Error: Invalid configuration: `/conf/farm_proxy.toml'

Caused by:
    0: Unable to parse source string as toml
    1: TOML parse error at line 6, column 1
         |
       6 | bos_referral_code = "xxxxxxxxxxxxxxxx"
         | ^^^^^^^^^^^^^^^^^
       unknown field bos_referral_code, expected one of name, port, extranonce_size, validates_hash_rate, use_empty_extranonce1, submission_rate, slushpool_bos_bonus, braiinspool_bos_bonus

My config is:

[[server]]
name = "S1"
port = 3338
braiinspool_bos_bonus = "xyz"
bos_referral_code = "xxxxxxxxxxxxxxxx"
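The error itself lists the fields this build's `[[server]]` table accepts, and `bos_referral_code` is not among them, so removing the line lets the proxy start (whether the referral code belongs elsewhere, or only in a newer release, is not clear from the error alone):

```toml
[[server]]
name = "S1"
port = 3338
braiinspool_bos_bonus = "xyz"
# bos_referral_code removed -- unknown field in this farm-proxy version
```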

EEPROM bypass

Antminer S17+, 3 hashboards, 65 ASICs each: I cannot flash the EEPROM. Can I bypass it to run the 3 boards? Can I add code to bypass it, based on the kernel log or main log? It has been 4 days so far since the repair was done, trying to flash with 2 different code editors. Thank you.

MRR Service (Proxy Server) issues

  1. If you put 2 targets in the config file, where the primary target is the MiningRigRentals service (another proxy server) and the second target is an ordinary pool, the first target does not work at all (I tried every port+URL combination suggested in the MRR config on their web site); a connection error appears in the docker log, and the rigs jump to the 2nd target (the pool) and work there with no errors or issues.
  2. If you put only 1 target in the config file, the MiningRigRentals service, it connects and works, but the same error is still present in the docker logs.
  3. If you put 2 MRR targets in the config file (different workers and different servers), both targets are also unreachable (no URL+port combination helps).

In all 3 scenarios, playing with the extranonce parameters does not help at all, yet without the extranonce parameters even a single MRR target does not work. A single MRR target works only with the following config:

    Server:
    extranonce_size = 3
    use_empty_extranonce1 = true
    Target:
    extranonce_size = 4
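Written out as `farm_proxy.toml` tables, the working single-target settings above would look roughly like this (the names are placeholders and the MRR endpoint is elided in the report):

```toml
[[server]]
name = "S1"
port = 3333
extranonce_size = 3
use_empty_extranonce1 = true

[[target]]
name = "mrr"
url = "stratum+tcp://..."   # MRR endpoint, elided in the report
user_identity = "worker"    # placeholder
extranonce_size = 4
```

The smaller server-side `extranonce_size` leaves room for the upstream proxy (MRR) to claim part of the extranonce space, which matches the report's observation that MRR behaves as a proxy rather than a pool.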

This happens because the MRR service is not a pool but another proxy server, and that is the root of the problem. The Grafana Debug Dashboard also shows a lot of rejected shares when two targets are in the config and one of them is another proxy server.

Please review this issue. Maybe some timing/reconnect timeouts need tuning: the MRR service picks up hashrate much more slowly than a direct pool target, so the proxy simply jumps to the pool instead of waiting for MRR. At the beginning MRR briefly shows a small hashrate with status "connected", then the hashrate drops to 0 while the status sometimes still remains "connected".

Attached is a screenshot of the parameters; how do I get access to these settings?
Screenshot from 2023-06-14 04-39-44
Or maybe I am configuring it wrongly?

P.S.: the MRR admin reported that their proxy server's extranonce is not used, because it depends on which pool the rigs connect to while idle, and on the renter's pool during the rental phase.

XEC (ecash)

The proxy shows the wrong hashrate when connected to the ViaBTC pool: a lot of invalid hashrate and 0 downstream connections, yet on the ViaBTC pool side the hashrate is OK and the workers are present.
On solopool.org everything is fine in Grafana: hashrate, downstream connections, and pool hashrate are all OK, with no invalid hashrate.

Connection issues in the log file

From time to time the log shows:

    WARN...infra::probing: Cannot connect to the remote end
    ERROR...target_quality: ConnectionError found on endpoint=stratum+tcp.......

But on the pool everything is OK: connected, good hashrate, all fine.
Miners: Bitmain S19j 104 TH/s, stock firmware.
If I use 2 targets, the same error appears for both targets, but the second target does not work properly: it stays connected to the pool for a few minutes and then disappears. I used hr_weight = 95 for the first target and hr_weight = 5 for the second; the first target works fine. Maybe my configuration file is wrong?
As I understand it, you cannot split hashrate, only miners. I have 9 miners connected to the proxy on the same IP address and port; how do I connect each ASIC separately in order to split the hashrate?
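One way to route specific machines to specific pools, given that the proxy splits miners rather than raw hashrate, is to expose several `[[server]]` ports and point each ASIC's pool URL at a different port. A sketch (names and ports are illustrative):

```toml
# Each miner group connects to a different local port on the proxy
[[server]]
name = "group_a"
port = 3333

[[server]]
name = "group_b"
port = 3334

# The routing section can then send group_a and group_b
# to different targets via their `from` lists.
```

With 9 miners on one port the proxy has only 9 indivisible units to distribute, so a 95/5 weight split cannot be honored precisely; per-port grouping gives deterministic control instead.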

Where to change port 8080

ERROR farm_proxy::http_api: Cannot bind address for monitoring 0.0.0.0:8080
2023-01-07T18:26:01.985816Z ERROR warp::server: error binding to 0.0.0.0:8080: error creating server listener: Address already in use (os error 98)
2023-01-07T18:26:01.985866Z INFO farm_proxy::http_api: Monitoring running on 0.0.0.0:8080
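"Address already in use" means another process is already bound to port 8080, so before moving the monitoring port it is worth identifying that process. This uses standard Linux tooling, nothing farm-proxy specific:

```shell
# List TCP listeners on port 8080 (add sudo to see other users' process names);
# prints a fallback line if nothing is bound there
ss -ltnp 2>/dev/null | grep ':8080' || echo "no listener on port 8080"
```

If the conflicting listener is another container, remapping that container's published port in docker-compose avoids the clash without touching farm-proxy at all.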
