
pooler's People

Contributors

anomit, atiqgauri, chaitanyaprem, getjiggy, irfan-ansari-au28, jatinj615, swagftw, swarooph, xadahiya


pooler's Issues

Update rpc helper to support state override and block identifier

Is your feature request related to a problem?
This feature enables state override calls at specific block heights, i.e., overriding an address's code with the UniV3 helper contract to grab tick data.

Describe the solution you'd like
The ability to make state override calls at specific block heights.

Describe alternatives you've considered
Deploying the helper contract on-chain is inefficient, since changes may be needed down the road to better support use cases that take advantage of this feature.
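
For context, a minimal sketch of what such a call looks like with web3.py v6's state override support; the RPC URL, helper address, bytecode, and calldata below are placeholders, not pooler's actual values:

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider('https://rpc.example.com'))  # placeholder RPC URL

    HELPER_ADDRESS = '0x0000000000000000000000000000000000000001'  # placeholder address to overload
    HELPER_BYTECODE = '0x6080...'  # deployed bytecode of the UniV3 helper contract (truncated placeholder)

    result = w3.eth.call(
        {
            'to': HELPER_ADDRESS,
            'data': '0x...',  # ABI-encoded call to the helper's tick-fetching function (placeholder)
        },
        block_identifier=17_000_000,  # pin the call to a specific block height
        state_override={HELPER_ADDRESS: {'code': HELPER_BYTECODE}},  # overlay helper code at the address
    )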

Enable past data computations (if archive node is present)

The ability to process past data will come in handy at later stages of our protocol, requiring access to past epoch blocks via an archive node. To avoid issues, it would be best to just enable historical computations if and only if the archive RPC URL is configured.

Enhancing Pooler Efficiency for Smaller Epoch Sizes

Problem Description:
At present, the pooler performs well only with larger epoch sizes, owing to the substantial computational load of the aggregate calculations. Running the pooler with an epoch size of 1 triggers a cascade of issues, starting with the failure of the 24-hour trade volume aggregate computation, which then disrupts the entire process.

Proposed Solution:
The root cause of this failure is that the 24-hour trade volume aggregate relies on data from the previously finalized epoch to compute efficiently. With an epoch size of 1, the roughly 12-second span between epochs (Ethereum's block time) is not enough for the previous epoch to finalize all of its operations. As a result, each epoch attempts a complete recalculation from scratch, which entails retrieving an extensive dataset either from the RPC or from the local cache, depending on data availability.

Consequently, an excessive volume of RPC requests is generated, leading to a surge in Rate Limit errors. This, in turn, triggers a chain reaction of failures, rendering the system entirely non-functional.

To address this, a revamped approach is recommended for the computation of the 24-hour trade volume aggregate. This new approach should eliminate the dependence on finalized calculations and instead harness the locally stored base snapshots submitted by the snapshotter for the most recent unfinalized epochs. Such a redesign will significantly enhance the system's overall efficiency, enabling seamless support for an epoch size of 1.

This enhancement holds the promise of resolving the existing challenges and significantly improving the functionality of the pooler across different epoch sizes.
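
As a rough illustration of the proposed direction, the following sketch sums trade volume from locally cached base snapshots over the trailing epoch window instead of depending on the previously finalized aggregate. The cache key and field names are assumptions, not pooler's actual schema:

    import json

    async def compute_24h_trade_volume(project_id, epoch_id, epochs_in_24h, redis_conn):
        # Sum volume over the trailing window of epochs using locally stored
        # base snapshots, whether or not those epochs have been finalized yet.
        total_volume_usd = 0.0
        for eid in range(epoch_id - epochs_in_24h + 1, epoch_id + 1):
            raw = await redis_conn.get(f'baseSnapshot:{project_id}:{eid}')  # hypothetical cache key
            if raw is None:
                continue  # missing epochs can be self-healed later; no RPC refetch here
            snapshot = json.loads(raw)
            total_volume_usd += snapshot.get('totalTradeVolumeUSD', 0.0)  # hypothetical field
        return total_volume_usd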

Simplify snapshotter architecture for faster performance

In the current snapshotter architecture, if building a snapshot fails for some reason, it is queued and retried with the next epoch, and so on multiple times (up to a maximum limit) until it succeeds or that limit is reached.

With the new architecture, and even with the current offchain-consensus architecture, multiple snapshotters snapshot each project and there is only a small submission window per epoch. There is therefore no need to retry snapshot generation after the submission window has closed: it serves no useful purpose, and missing data can be handled by self healing.

Simplifying the architecture and removing the retry mechanism should make snapshotting much faster and simpler, reducing load on the RPC node as well.

Integrate IPFS and web3 storage uploads

Is your feature request related to a problem?
Presently, snapshotters send the entire contents of each snapshot to the payload commit service in audit protocol over RabbitMQ. This is a huge overhead considering that this can take place for thousands of projects per epoch. This often causes high resource usage when there is a burst of large snapshots that are computed.

Describe the solution you'd like
Once the snapshots are computed and built, upload them from within the snapshot and aggregation workers themselves.
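
A minimal sketch of what an in-worker upload could look like against a local IPFS (Kubo) node's HTTP API, assuming httpx and a node listening on 127.0.0.1:5001; the actual integration may target web3.storage or a remote pinning service instead:

    import json

    import httpx

    async def upload_snapshot_to_ipfs(snapshot: dict) -> str:
        # Add the snapshot JSON to IPFS and return its CID, so only the CID
        # needs to travel over RabbitMQ to the payload commit service.
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                'http://127.0.0.1:5001/api/v0/add',
                files={'file': ('snapshot.json', json.dumps(snapshot).encode())},
            )
            resp.raise_for_status()
            return resp.json()['Hash']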

Describe alternatives you've considered
NA

Additional context
NA

New refactored RPC helpers that support web3.py v6 have introduced Eth price fetch failures

Describe the bug

Bug :
| ERROR | RPC ERROR failed to fetch ETH price, error_msg:list index out of range| {'module': 'Powerloom|Snapshotter|SnapshotUtilLogger'}
pooler_1 | 0|process-hub-core | April 18, 2024 > 09:36:49 | ERROR | RPC ERROR failed to fetch ETH price, error_msg:list index out of range| {'module': 'Powerloom|Snapshotter|SnapshotUtilLogger'}
pooler_1 | 0|process-hub-core | April 18, 2024 > 09:36:49 | ERROR | RPC ERROR failed to fetch ETH price, error_msg:list index out of range| {'module': 'Powerloom|Snapshotter|SnapshotUtilLogger'}

ETH price fetch failures cause a complete halt of all snapshotting. This bug is uncommon but can persist for multiple epochs if left unchecked.

To Reproduce

Steps to reproduce the behavior:

  1. Remote instance to be used - 143.198.177.25
  2. cd powerloom
  3. cd deploy
  4. ./build-dev.sh

Expected behavior
After an epoch is released, snapshot building should commence and this error should appear (it may take a few epochs).
Snapshot submissions may fail, but those errors can be ignored, as a complete setup is required to run this end to end.

Proposed Solution
Add appropriate exception handling and retry strategies so that the price fetches eventually succeed as often as possible and snapshot builds are not blocked by their dependency on the ETH-to-USD price conversion.
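
A hedged sketch of such a retry strategy using tenacity; the helper call inside is a placeholder, not the actual RPC/snapshot util code:

    from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

    class EthPriceFetchError(Exception):
        """Raised when the ETH/USD price cannot be determined for a block."""

    @retry(
        retry=retry_if_exception_type((IndexError, EthPriceFetchError)),
        wait=wait_exponential(multiplier=1, max=10),
        stop=stop_after_attempt(5),
        reraise=True,
    )
    async def fetch_eth_usd_price(rpc_helper, block_number: int) -> float:
        # Hypothetical wrapper: retry transient failures with backoff instead of
        # letting a bare "list index out of range" halt snapshot building.
        results = await rpc_helper.web3_call(...)  # placeholder for the actual price query
        if not results:
            raise EthPriceFetchError(f'no price data returned for block {block_number}')
        return results[0]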

Caveats
Working under the assumption that the above error was introduced by the new RPC helpers that support web3.py v6.

Additional context
Add any other context about the problem here. (e.g. OS, Python version, ...)

Improve Aggregation Calculation Performance by Modifying Event Dependency

Is your feature request related to a problem?

Right now, aggregation calculations depend on the underlying snapshots being finalized, which leads to a longer wait time for aggregation calculations to finish.

Describe the solution you'd like

Instead of relying on the SnapshotFinalized event, snapshotters should be designed to wait for the SnapshotSubmitted event associated with their specific address. This modification will enable snapshotters to proceed with the necessary calculations promptly.

New Protocol: Master node RabbitMQ enters an infinite loop (uncommon) during execution

Describe the bug

Rare case: after a prolonged full-node run, RabbitMQ entered an infinite loop and kept spawning tasks.

To Reproduce

Hasn't been replicated yet

Steps to reproduce the behavior:
1.
2.
3.

Expected behavior
A clear and concise description of what you expected to happen.

Proposed Solution
Provide a solution to the issue if any.

Caveats
Describe any suspected impact/assumptions for the proposed solution.

Additional context
Add any other context about the problem here. (e.g. OS, Python version, ...)

Add specific branch for aave use case

Is your feature request related to a problem?
The aave use case codebase does not have an aave specific branch for pooler.

Describe the solution you'd like
A pooler branch for aave should be created to keep any necessary aave specific changes separate from other use cases (at least while testing).

Describe alternatives you've considered
Aave-specific code changes could be pushed to the main branch, but keeping a separate branch would be cleaner and easier to maintain.

Improve Pooler Core API for generic use cases and new architecture

Pooler's Core API is currently limited in its functionality, providing users only with access to aggregated and indexed data specific to the UniswapV2 use case.
This restriction hinders flexibility and scalability, limiting Pooler's ability to meet evolving needs across a variety of use cases. To address this, it is necessary to develop a generic API that supports multiple use cases, allowing for increased customization and expanded functionality.

Related to #8

Internal API for snapshot processing status per epoch

Is your feature request related to a problem?
Beginning with the release of an epoch and the submission of a snapshot against it, through to its finalization on the Powerloom protocol, a snapshot goes through the sequence of state transitions detailed below. Its status at any given moment is non-trivial to diagnose from the APIs currently exposed by the snapshotter implementations.

  • EPOCH_RELEASED – epoch is released from the protocol state smart contract for snapshotters to detect and begin work
  • PRELOAD – preloaders are executed for snapshot building workers to extract data according to the snapshotter-specific modules (uniswapv2 in the case of Pooler)
  • SNAPSHOT_BUILD – the snapshot builders as configured in projects.json are executed. Also refer to the case study of the current implementation of Pooler for a detailed look at snapshot building for base as well as aggregates.
  • SNAPSHOT_SUBMIT_PAYLOAD_COMMIT - once a snapshot is built, it is propagated to the payload commit service in Audit Protocol for further submission to the protocol state contract.
  • RELAYER_SEND - Payload commit service has sent the snapshot to a transaction relayer to submit to the protocol state contract

  • SNAPSHOT_SUBMIT_PROTOCOL_CONTRACT - The snapshot submission transaction from the relayer to the protocol state smart contract was successful and a SnapshotSubmitted event was generated

At the moment, the SnapshotSubmitted event delivered to the processor distributor is a locally mocked event from the payload commit service in Audit Protocol. SNAPSHOT_SUBMIT_PROTOCOL_CONTRACT is therefore not connected to a state transition backed by an actual protocol state contract event emission, and is omitted for this release. A separate state transition may be tracked for this purpose in the future.

  • SNAPSHOT_FINALIZE - Upon reaching consensus, the finalized snapshot accepted against an epoch is published via a SnapshotFinalized event.

Describe the solution you'd like
Expose an API endpoint along the lines of /internal/snapshotter/epochProcessingStatus that returns the processing status of snapshots against the configured project types at every state transition. The response set should be paginated.

{
  "items": [
    {
      "epochId": 43523,
      "transitionStatus": {
        "EPOCH_RELEASED": {
          "status": "success",
          "error": null,
          "extra": null,
          "timestamp": 1692530595
        },
        "PRELOAD": {
          "pairContract_pair_total_reserves": {
            "status": "success",
            "error": null,
            "extra": null,
            "timestamp": 1692530595
          }
        },
        "SNAPSHOT_BUILD": {
          "aggregate_24h_stats_lite:35ee1886fa4665255a0d0486c6079c4719c82f0f62ef9e96a98f26fde2e8a106:UNISWAPV2": {
            "status": "success",
            "error": null,
            "extra": null,
            "timestamp": 1692530596
          }
        },
        "SNAPSHOT_SUBMIT_PAYLOAD_COMMIT": {},
        "RELAYER_SEND": {},
        "SNAPSHOT_FINALIZE": {}
      }
    }
  ],
  "total": 30,
  "page": 1,
  "size": 10,
  "pages": 3
}

Describe alternatives you've considered
The other alternatives to an aggregated API are cumbersome and deal with parsing and connecting several transient datapoints maintained in the Redis cache.

Additional context
NA

Pooler memory spike issue

The master node instances have been experiencing large spikes in memory usage causing the redis service to crash.

Add snapshotter identity check to devnet branch

We need to update the snapshotter_id_ping.py script to enable Snapshot slot checks. Currently, this feature is disabled in the testnet, but we want to make sure our system accurately checks if Snapshotting is allowed for the given instance ID.

  • The script should check if Snapshotting is allowed for the given instance ID.

  • Based on the result of the query, set the active status key in Redis.

  • Ensure the script handles any potential exceptions and exits with the correct status code.

  • #91

Last Finalized Snapshot Detection and Alert System

Is your feature request related to a problem?
There is no system set up to notify when a master node has stopped processing snapshots or has gone down.

Describe the solution you'd like
The /health endpoint of the core-api should periodically query the protocol state contract for the current epoch and compare its epochId against each active project's last finalized epoch to detect how many epochs have passed since the last finalization. If any sampled project has not finalized within a reasonable window (20-30 epochs?), pooler should send a Slack alert notifying of the failure to finalize.
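
A minimal sketch of the alerting half, assuming a Slack incoming-webhook URL and a configurable epoch-lag threshold; this is illustrative, not the actual health-check code:

    import httpx

    FINALIZATION_LAG_THRESHOLD = 30  # epochs without finalization before alerting

    async def alert_on_finalization_lag(project_id: str, current_epoch: int,
                                        last_finalized_epoch: int, slack_webhook_url: str) -> None:
        lag = current_epoch - last_finalized_epoch
        if lag < FINALIZATION_LAG_THRESHOLD:
            return
        # Slack incoming webhooks accept a simple JSON payload with a "text" field.
        async with httpx.AsyncClient() as client:
            await client.post(slack_webhook_url, json={
                'text': f'Project {project_id} has not finalized for {lag} epochs '
                        f'(last finalized epoch {last_finalized_epoch}, current epoch {current_epoch}).',
            })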

Describe alternatives you've considered
It might be worth exploring an external monitoring service that tracks the status independently since finalization requires submissions from multiple master nodes. However, it would be possible to get additional information for specific nodes if the monitoring is done internally.

GenericProcessorSnapshot compute definition has incorrect redis param name

The current definition of GenericProcessorSnapshot in snapshotter/utils/callback_helpers.py defines the compute interface's Redis parameter as redis, but it should be redis_conn instead, which is what tx_worker and all computes use.

Incorrect

    @abstractmethod
    async def compute(
        self,
        epoch: PowerloomSnapshotProcessMessage,
        redis: aioredis.Redis,
        rpc_helper: RpcHelper,
    ):
        pass

Correct

    @abstractmethod
    async def compute(
        self,
        epoch: PowerloomSnapshotProcessMessage,
        redis_conn: aioredis.Redis,
        rpc_helper: RpcHelper,
    ):
        pass

Strict check on snapshotter identity on protocol state contract

Is your feature request related to a problem?

Presently, the snapshotter implementations go ahead and begin submitting snapshots even if their identity is not added or activated on the protocol state smart contract.

Even though their snapshot submissions will fail the logical checks in the protocol state implementation, this still results in a large number of unnecessary transactions being sent out, wasting resources for the snapshotter as well as for the peers that contribute to running the protocol state chain.

Describe the solution you'd like

Introduce two levels of checks (a minimal sketch follows the list below):

  • During startup, do a sanity check and exit with error code 1 if the configured snapshotter identity is not found on the protocol state smart contract as an allowed snapshotter

  • While a snapshotter is operational, run periodic checks on the active status of the configured snapshotter identity; if it is found to be deactivated or removed, it should cease submitting snapshots and exit with error code 1
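
A rough sketch of both checks, assuming an async web3.py contract instance and a snapshotters(address)-style getter on the protocol state contract (the actual getter name may differ):

    import asyncio
    import sys

    async def verify_snapshotter_or_exit(protocol_state_contract, snapshotter_address: str) -> None:
        # Assumed getter: returns True if the address is an allowed, active snapshotter.
        allowed = await protocol_state_contract.functions.snapshotters(snapshotter_address).call()
        if not allowed:
            print(f'{snapshotter_address} is not an allowed snapshotter, exiting', file=sys.stderr)
            sys.exit(1)

    async def periodic_identity_check(protocol_state_contract, snapshotter_address: str,
                                      interval_seconds: int = 300) -> None:
        # Startup sanity check, then re-verify periodically while operational.
        while True:
            await verify_snapshotter_or_exit(protocol_state_contract, snapshotter_address)
            await asyncio.sleep(interval_seconds)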

Describe alternatives you've considered
NA

Additional context
NA

UniV3 pair finder improvement for getting price

The uniV3 use case uses either a token/ETH or token/stablecoin pool to calculate the price of a given token. There can be multiple pools for any given pair depending on the fee amount used to create the pool (100, 500, 3000, or 10000). Currently, the univ3 compute uses the first pool that it finds, but that is not always the best pool to use.

The price calculation should gather all of the available pools and then compare their liquidity values, taking the pool with the largest liquidity for a more accurate price.
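
A rough sketch of that selection against the canonical Uniswap V3 factory (mainnet address shown), comparing each candidate pool's in-range liquidity; this is illustrative, not the pooler compute code:

    from web3 import Web3

    FACTORY_ADDRESS = '0x1F98431c8aD98523631AE4a59f267346ea31F984'  # Uniswap V3 factory (mainnet)
    FEE_TIERS = (100, 500, 3000, 10000)

    FACTORY_ABI = [{
        'name': 'getPool', 'type': 'function', 'stateMutability': 'view',
        'inputs': [{'name': 'tokenA', 'type': 'address'},
                   {'name': 'tokenB', 'type': 'address'},
                   {'name': 'fee', 'type': 'uint24'}],
        'outputs': [{'name': 'pool', 'type': 'address'}],
    }]
    POOL_ABI = [{
        'name': 'liquidity', 'type': 'function', 'stateMutability': 'view',
        'inputs': [], 'outputs': [{'name': '', 'type': 'uint128'}],
    }]

    def most_liquid_pool(w3: Web3, token_a: str, token_b: str) -> str | None:
        factory = w3.eth.contract(address=FACTORY_ADDRESS, abi=FACTORY_ABI)
        candidates = []
        for fee in FEE_TIERS:
            pool_address = factory.functions.getPool(token_a, token_b, fee).call()
            if int(pool_address, 16) == 0:
                continue  # no pool deployed at this fee tier
            pool = w3.eth.contract(address=pool_address, abi=POOL_ABI)
            candidates.append((pool.functions.liquidity().call(), pool_address))
        # Prefer the pool with the largest liquidity for the price calculation.
        return max(candidates)[1] if candidates else None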

Update pair addresses for all Uniswap variations

The current list of pair addresses used in UniswapV2, Sushiswap, and Quickswap are outdated and include inactive or low-activity pairs.
To enhance the user experience and data accuracy, we need to curate and create a new list of top pairs for all three platforms.

Related to #8

Remove dependency on audit protocol for a project's last finalized epoch ID

Is your feature request related to a problem?
Presently, the endpoint offered by core_api.py,

@app.get('/last_finalized_epoch/{project_id}')

depends on a project's last finalized epoch ID being set in the local Redis cache by audit-protocol's payload commit service. During the first run of a freshly spun-up node, this means no project's last finalized epoch ID (for base snapshot as well as aggregate snapshot projects) would be returned.

pooler/pooler/core_api.py

Lines 186 to 219 in 9b1473f

@app.get('/last_finalized_epoch/{project_id}')
async def get_project_last_finalized_epoch_info(
    request: Request,
    response: Response,
    project_id: str,
    rate_limit_auth_dep: RateLimitAuthCheck = Depends(
        rate_limit_auth_check,
    ),
):
    """
    This endpoint is used to fetch epoch info for the last finalized epoch for a given project.
    """
    if not (
        rate_limit_auth_dep.rate_limit_passed and
        rate_limit_auth_dep.authorized and
        rate_limit_auth_dep.owner.active == UserStatusEnum.active
    ):
        return inject_rate_limit_fail_response(rate_limit_auth_dep)
    try:
        # get project last finalized epoch from redis
        project_last_finalized_epoch = await request.app.state.redis_pool.get(
            project_last_finalized_epoch_key(project_id),
        )
        if project_last_finalized_epoch is None:
            response.status_code = 404
            return {
                'status': 'error',
                'message': f'Unable to find last finalized epoch for project {project_id}',
            }
        project_last_finalized_epoch = int(project_last_finalized_epoch.decode('utf-8'))

Describe the solution you'd like
If a project's last finalized epoch ID is not present in local state, ensure it is fetched from the protocol state contract on the anchor chain.
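
A sketch of the fallback against the excerpt above; the contract getter name here is an assumption about the protocol state ABI, not the actual function:

    # inside get_project_last_finalized_epoch_info, after the Redis lookup:
    if project_last_finalized_epoch is None:
        # fall back to the protocol state contract on the anchor chain
        project_last_finalized_epoch = await protocol_state_contract.functions.lastFinalizedSnapshot(
            project_id,
        ).call()  # hypothetical getter name on the protocol state contract
        await request.app.state.redis_pool.set(
            project_last_finalized_epoch_key(project_id),
            project_last_finalized_epoch,
        )
    else:
        project_last_finalized_epoch = int(project_last_finalized_epoch.decode('utf-8'))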

Describe alternatives you've considered
NA

Additional context
NA

Uniswap v3 completion

Base Snapshots

Pair reserves snapshot

  • Token0 Price
  • Token1 Price
  • TVL (different from reserves)
  • TVL in USD? (needed only if TVL is not in USD)

Trade Volume Snapshot (Mint and burn are slightly different for v3)

  • Token0 trade volume
  • Token0 trade volume (USD)
  • Token1 trade volume
  • Token1 trade volume (USD)
  • Total Fees (USD)
  • Total Trades (USD)

Aggregate Snapshots

  • 24h trade volume (same as v2)
  • 7d trade volume (same as v2)
  • Top pairs (same as v2)
  • Top tokens (same as v2)

Aggregates for Graphs (to be done later)

  • TVL over time (continuous aggregate of total TVL over time (can be from top tokens or top pairs)) [Just add graph data points on each epoch]
  • Volume over time (continuous aggregate of total volume over time (can be from top tokens or top pairs)) [Just add graph data points on each epoch]
  • Token graphs (Volume, TVL, Price) - Can come from aggregating Top Tokens snapshots over time
  • Pair graphs (Volume, Liquidity, Fees) - Volume and fees are simple enough and can be done just like the token graphs. The liquidity spread graph is a little trickier and will need a bit of research for the calculation. Once the base snapshot is there, we can just listen for Add/Remove events and update the values accordingly

Building an exact replica of the Uniswap dashboard won't be enough; we need to make our dashboard better and more detailed so that people start using it. This will help in our vampire attack later.

  • Liquidity Spread and TVL calculation are parts of the same problem

Node recovery and diagnostics

Is your feature request related to a problem?

Presently, the pre-testnet and testnet participants have been running into a recurring issue on their hosted node instances where the virtualized host OS kills off long running processes silently, specifically on the pooler repo. This has been affecting the snapshot builder processes spawned from the process_hub_core which results in no epochs being processed and no snapshots being submitted once the processes get killed off.

The existing respawn feature for crashed worker processes does not kick in, since such processes are terminated with a SIGKILL signal, which cannot be handled.

The diagnostics currently available do not give an immediately actionable insight or a hook to restore the node to a working state.

At the moment someone has to monitor the snapshotter submissions on the latest epochs on the consensus dashboard against their node's identity to be aware of such an issue.

Describe the solution you'd like

  • Worker processes should be respawned automatically based on internal health checks performed by the core on itself
  • Improve the CLI used to interact with process_hub_core
    • better view of node health
    • command for a clean respawn of the worker processes

Describe alternatives you've considered

A quick fix would have been to add auto-restarts via a cron job or timed shell script, but such an approach is non-deterministic and violates the clearly defined state transitions of the protocol in which the node participates.

Additional context

NA

Snapshotter V1 design

Top level issue to track Snapshotter V1 development (pooler modified)

Snapshotter

Snapshotter agent has the following responsibilities

  • Snapshot Based on Data
  • Aggregation over projects
  • Giving access to snapshotted data using core API

Current State and Todo

The snapshotter will only snapshot the current state and has nothing to do with protocol state or DAG chain building.
Work is already going on here (onchain_pooler branch) according to the architecture.
Things remaining:

  • Aggregation design and testing
  • Aggregation logic implementation and testing
  • Make modifications for Epoch and other Event structure updates needed (todo)
  • Testing and bug fixing (in progress)

Make all functionality (snapshotting and aggregation) config based

Currently, Pooler lacks the option to enable selective features, rendering it inflexible and unsuitable for creating truly dynamic snapshotting agents. To achieve this, it is necessary to conduct a thorough cleanup of the config and make all features, including snapshotting and aggregation, configurable.

Related to #8

Make SystemEpochDetector more robust

Describe the bug

If EpochDetectorProcess detects an epoch that is smaller than last_processed_epoch (very rare, but it can happen if the epoch generator crashes or similar), it exits with a GenericExit signal (code-ref), which causes the entire snapshotting process to stop. This should not happen.

Instead of halting everything, it should wait for the EpochGenerator to catch up and wait for new epochs. It should only stop in situations where the entire epoch config is broken and waiting won't solve the problem.
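
A small sketch of the suggested wait-and-recover behavior (names are assumed, not the actual EpochDetectorProcess code):

    import asyncio

    async def wait_for_epoch_catchup(get_current_epoch, last_processed_epoch: int,
                                     poll_interval: int = 10, max_attempts: int = 60) -> int:
        # Instead of exiting immediately, poll until the epoch generator catches
        # back up to the last processed epoch; only give up after a bounded wait.
        for _ in range(max_attempts):
            current_epoch = await get_current_epoch()
            if current_epoch >= last_processed_epoch:
                return current_epoch
            await asyncio.sleep(poll_interval)
        raise RuntimeError('epoch generator did not catch up; epoch config is likely misconfigured')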

To Reproduce

Affected versions: All

Steps to reproduce the behavior:

  1. Start Snapshotting
  2. Stop Local offchain consensus and restart with a lower epoch start block
  3. Snapshotting will halt

Expected behavior
Snapshotter should wait and try to recover from scenarios like this before exiting.

core-api /internal/snapshotter/status/{project_id} endpoint validation error

Describe the bug

Calling /internal/snapshotter/status/{project_id} for a given project can fail due to a validation error for the SnapshotterStatusReport model.

The SnapshotterReportState enum for SnapshotterStatusReport.state does not account for the ONLY_FINALIZED_SNAPSHOT_RECIEVED state that can be cached by audit-protocol.

To Reproduce

Affected versions: nms_master

Steps to reproduce the behavior:

  1. start a test node
  2. docker exec into the running redis instance: docker exec -it <redis-container> redis-cli
  3. add an example audit-protocol status report to redis:

HSET projectID:poolContract_total_supply:0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48:aavev3:snapshotterStatusReport 1 "{\"submittedSnapshotCid\":\"\",\"finalizedSnapshotCid\":\"abc\",\"state\":\"ONLY_FINALIZED_SNAPSHOT_RECIEVED\",\"reason\":\"OnlyfinalizedCIDreceived\"}"

  1. close redis or open another terminal and call the internal snapshotter status endpoint for the example project:

curl -L -X GET 'http://localhost:8002/internal/snapshotter/status/poolContract_total_supply:0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48:aavev3' \ -H 'Accept: application/json'

Expected behavior
The endpoint should return the project status report.

Proposed Solution
Add the "ONLY_FINALIZED_SNAPSHOT_RECIEVED" status to the SnapshotterReportState enum

Design data fixer functionality based on new event structure

One of the key challenges in snapshotting is the occurrence of missing data points.
To address this issue, snapshot agents must listen to Missing Data events from the smart contract and generate the missing data points to allow state builders to continue building protocol state and DAG chains.
Currently, Pooler lacks an effective mechanism to address missing data points, potentially leading to incomplete snapshots and inaccurate data representation. To address this issue, it is necessary to design and implement a functionality that leverages the Missing Data events described in issue #3 (PowerLoom/node-issue-report-collector#3), ensuring that all missing data points are accurately generated and integrated into the data structure.

Related to #8

Processor Distributor starting snapshot builds when preloaders fail due to missing trie node errors

Describe the bug

The addition of missing trie node exception handling in the RPC util has introduced a case where preloader tasks are flagged as complete without returning or caching any data, because missing trie node errors are suppressed. When a missing trie node error occurs, the RPC util does not raise an exception and returns an empty list/dict, depending on the request type. _preloader_waiter in processor_distributor only checks whether any preloader returned an exception, so it flags the failed data retrieval as successful since none were raised.

To Reproduce

Affected versions: fix/rpc_trie_node_err

Expected behavior
Projects should not be distributed for processing when a preloader that they depend on fails to gather data. In the case of a missing trie node, any subsequent calls will also fail.

Proposed Solution
Preloaders should check that they have actually retrieved and cached data before finishing. They should raise an exception in the case where data retrieval has failed.
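
An illustrative sketch of that check; the fetch coroutine and cache key are placeholders for the preloader's actual retrieval and caching logic:

    import json

    class PreloadFailure(Exception):
        """Raised when a preloader could not retrieve the data it is responsible for."""

    async def run_preloader(epoch_id: int, fetch, redis_conn, cache_key: str) -> None:
        results = await fetch(epoch_id)  # placeholder for the RPC retrieval step
        # Missing trie node errors currently surface as an empty list/dict; treat
        # that as a hard failure so _preloader_waiter sees a real exception.
        if not results:
            raise PreloadFailure(f'no data retrieved for epoch {epoch_id}')
        await redis_conn.set(cache_key, json.dumps(results))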

Setup BaseV3 master node

  • update basev3 configs for POP and latest pairs list

  • update basev3 to use direct submissions

  • deploy node and setup tx signer refill/pooler health check

  • setup frontend and dns

Upgrade web3 py to latest v7 or at least v6

Is your feature request related to a problem?
Currently, pooler uses a rather outdated version of web3.py, v5. Its async functionality is not feature complete, with non-standard initialization and fuzzy APIs for signing and sending transactions.

It also makes it difficult to incorporate improved functionality from other codebases, e.g. the relayer, that are written against v6 or v7, since there are breaking changes between v5 and the later versions.

Describe the solution you'd like
Review all usages of web3py and upgrade them to the latest version after a feasibility check.
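
For reference, a quick illustration of the standardized async initialization in web3.py v6 (the endpoint URL is a placeholder):

    import asyncio

    from web3 import AsyncWeb3
    from web3.providers.async_rpc import AsyncHTTPProvider

    async def main() -> None:
        w3 = AsyncWeb3(AsyncHTTPProvider('https://rpc.example.com'))  # placeholder RPC URL
        latest = await w3.eth.get_block('latest')
        print(latest['number'])

    asyncio.run(main())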

Describe alternatives you've considered
NA

Additional context
Check web3 py docs https://web3py.readthedocs.io/

uniswapv3 tvl and tick data corrections

Describe the bug

The current uniswapv3 implementation does not compute the correct "total value locked" for liquidity pools in pooler.modules.uniswapv3.total_value_locked.py's calculate_reserves() function. Additionally, calls to the UniswapV3 helper contract's getTicks() function are very expensive, and they are currently duplicated unnecessarily when called in multiple processes.

To Reproduce

To reproduce, run the pooler/tests/test_tvl.py test. This will fail due to inaccurate computed reserves.

Expected behavior
test_tvl.py should pass, indicating that the tvl calculation is within accepted accuracy.

Proposed Solution
The current calculate_reserves() function retrieves and calculates duplicate ticks. I will change it to get each tick only once.
calculate_reserves() also does not properly combine batched tick retrieval calls, leaving half of the retrieved ticks unprocessed. I will fix the tick processing to include every initialized tick in the pool.

Additional context
issues brought to my attention by @getjiggy

Core-api /internal/snapshotter/status endpoint does not return the correct status report

Describe the bug

Calling the /internal/snapshotter/status endpoint always returns:

{"totalSuccessfulSubmissions":0,"totalIncorrectSubmissions":0,"totalMissedSubmissions":0,"projects":[]}

The individual projects' submission statuses are correctly saved in redis, but there is an issue with retrieval.

To Reproduce

Affected versions: all

Steps to reproduce the behavior:

  1. call /internal/snapshotter/status on any running node

Expected behavior
The endpoint should return the total successful, incorrect, and missed snapshots for all projects.

Proposed Solution
The redis data request in utils.data_utils.get_snapshotter_status should use the destructured projects list.

Additional context
Using python 3.10.13, redis:alpine 7.2.4 docker image, and redis 4.6.0 python library

System event detector polling interval

Describe the bug

The system event detector will crash if its last_processed_block for the anchor chain is equal to the anchor chain's current block.

To Reproduce

Affected versions: nms_master branch

Steps to reproduce the behavior:

  1. Reduce the rpc polling interval to less than the anchor chain's block time

Expected behavior
System event detector should sleep for the polling interval

Proposed Solution
Replace all references to settings.anchor_chain with settings.rpc.

Caveats
Describe any suspected impact/assumptions for the proposed solution.

Additional context
Add any other context about the problem here. (e.g. OS, Python version, ...)
