
squid-sdk's Introduction

Squid SDK - an ETL framework for Web3 data

Subsquid SDK is a TypeScript ETL toolkit for blockchain data that currently supports

  • Ethereum and everything Ethereum-like
  • Substrate-based chains
  • Solana.

Subsquid SDK stands apart from the competition by

  • Being a toolkit (rather than an indexing app like TheGraph or Ponder)
  • Providing fast binary data codecs and type-safe access to decoded data
  • Offering native support for sourcing the data from Subsquid Network.

The latter is a key point: Subsquid Network is a decentralized data lake and query engine that allows lightweight clients to granularly select and stream subsets of block data, with game-changing performance over traditional RPC APIs.

Getting started

The best way to get started is to install the squid CLI and scaffold a squid project with sqd init.

For step-by-step instructions, follow one of the Quickstart guides.

Developer community

Our developers are active on Telegram and Discord. Feel free to join and ask any questions!

Contributing

Subsquid is an open-source project; contributions are welcomed, encouraged, and will be rewarded!

Please consult CONTRIBUTING.md for hacking instructions and make sure to read our code of conduct.

squid-sdk's People

Contributors

abernatskiy, acdibble, andrew-frank, belopash, boo-0x, dariaag, dlangellotti, dzhelezov, eldargab, hyunggyujang, igorgro, justraman, manutopik, mo4islona, moliholy, octo-gone, omahs, osipov-mit, ozgrakkurt, raekwoniii, rightjelkin, tmcgroul, vanruch, vikiival


squid-sdk's Issues

Archive missing blocks

I have had this syncing issue multiple times where the archive fails to index a few blocks. The docker logs say the blocks were saved, yet they are missing from the database, so the processor runs into issues because of the missing blocks/extrinsics/events.

This also requires resyncing everything, which is very slow. I would like to suggest some kind of "repair" command that would check for missing blocks.

Not sure this is still relevant for the new archive implementation. Maybe it is not.


squid-evm-typegen 1.3.1 fails with "TypeError: abi.map is not a function"

Attempted to use the tool with two smart contracts (0x7d2768dE32b0b80b7a3454c06BdAc94A69DDc7A9, the AAVEv2 pool, and 0xBB9bc244D798123fDe783fCc1C72d3Bb8C189413 from this example); it fails with

evm-typegen error: TypeError: abi.map is not a function

To Reproduce
Steps to reproduce the behavior:

  1. Make a test directory and cd to it
  2. Run npm install @subsquid/evm-typegen
  3. Fetch the abi with
$ curl "https://api.etherscan.io/api?module=contract&action=getabi&address=0xBB9bc244D798123fDe783fCc1C72d3Bb8C189413" --output abi.json
  4. Run npx squid-evm-typegen --abi abi.json --output abi.ts
  5. See the error. Note that abi.ts is not generated.

Expected behavior
Successful generation of abi.ts

Environment (please complete the following information):

  • node.js version 18.12.1
  • npm version 8.19.2
  • OS version - Gentoo rolling release

Additional context
Retrieved abi.json
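
A likely cause, stated as an assumption rather than a confirmed diagnosis: the Etherscan getabi endpoint returns a JSON envelope of the form {status, message, result}, where result holds the ABI array as a string, so the downloaded file is not a plain ABI array and abi.map fails. A small sketch that unwraps the envelope before running the typegen (file names are just the ones from the repro):

// unwrap-abi.ts (hypothetical helper, not part of @subsquid/evm-typegen)
import * as fs from 'fs'

const envelope = JSON.parse(fs.readFileSync('abi.json', 'utf8'))
// Etherscan returns the ABI as a JSON string in the `result` field
const abi = JSON.parse(envelope.result)
fs.writeFileSync('abi.plain.json', JSON.stringify(abi, null, 2))
// then: npx squid-evm-typegen --abi abi.plain.json --output abi.ts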

`The Graph` like entity CREATE, UPDATE & DELETE notifications

I would like to ask what plans exist, or could exist, for accommodating the following in Subsquid, with some plausible timeline:

The Graph-like entity CREATE, UPDATE & DELETE notifications

For each entity type, there should be a subscription that allows the client to detect the three events CREATE, UPDATE and DELETE, with suitable parameter values included. This already exists in The Graph and would be very useful. It would also be very useful if these subscriptions accepted input parameters defining filtering conditions, so the client could narrow down the events of interest.

How are chain forks handled in subsquid for substrate?

Hi, not sure if this is the correct place to ask these questions, but I didn't find a Discord community or Telegram channel for Subsquid. I'm starting to evaluate indexing solutions for Substrate chains and was looking into Subsquid, but didn't find an answer to two technical questions that I would like to clarify before picking a solution:
1. How are chain forks handled?
2. How are structural data changes over time (runtime upgrades) handled?

Thanks

How to juggle chain endpoints

Our upcoming archive implementation is able to use several RPC connections (possibly from multiple providers) to ingest chain data.

Currently for each available connection we create a separate ingestion loop, which at each iteration simply picks "the next unpicked range of blocks to fetch".

RPC connections may have uneven performance, which generally changes over time.

We need to investigate algorithms that would minimise situations where data processing stalls due to a slow connection.

Such an algorithm should monitor the ingestion performance of each connection (e.g. how long data processing waits for a particular connection to finish fetching the next block range) and return a timeout that should pass before a particular connection is allowed to pick the next work item.

We are looking for something easy and simple, albeit principled and used in the wild.
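
As a starting point, a naive sketch (my own illustration, not a proposed final algorithm) that tracks an exponential moving average of each connection's fetch time and makes slower connections wait out their lag behind the fastest one:

// Hypothetical scheduler sketch; names and constants are illustrative only.
class ConnectionStats {
  private avgMs = 0

  record(fetchMs: number): void {
    // exponential moving average with alpha = 0.2
    this.avgMs = this.avgMs === 0 ? fetchMs : 0.8 * this.avgMs + 0.2 * fetchMs
  }

  get average(): number {
    return this.avgMs
  }
}

// How long a connection should wait before picking the next work item:
// the fastest connection never waits, slower ones wait out their lag.
function nextPickTimeout(stats: ConnectionStats[], idx: number): number {
  const measured = stats.map(s => s.average).filter(a => a > 0)
  if (measured.length === 0) return 0 // no data yet, everyone may pick immediately
  const best = Math.min(...measured)
  const own = stats[idx].average || best
  return Math.max(0, own - best)
}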

"ROLLBACK" query got stuck

Describe the bug
Off the back of #108, I found another issue where the process gets stuck here

To Reproduce
Steps to reproduce the behavior:

  1. Follow the step as of #108

Expected behavior
Wait for "ROLLBACK" query, if it's longer than X time, ignore

New format of Squid metadata

This is a proposal for a Squid metadata template specification. Squid metadata reflects how the Squid is meant to be deployed and what it actually does, while remaining agnostic to how it does it. An exclamation mark (!) means the field is mandatory, a question mark (?) means it's optional.

squid: {
   version!: "v1", // version of the Squid API reflected in the API endpoint
   slug!: "balances", // API endpoint will be of the form .../<slug>/<version>
   title!: "DotSama Balances", // Squid title as shown in public explorers, e.g. Aquarium
   description?: "Sample Squid collecting balances from parachains", // Squid description as shown in public expolorers, e.g. Aquarium
   tags?: [ "DeFi", "Balances", "Wallet" ], // Tags for search squids agains keywords
   palletes?: [ // Pallets to search squids against keywords 
       "balances"
   ],
   logo?: "assets/logo.png", // must be relative to the squid root folder
   deploy!: {
      init!: "<init command, e.g. migrations>", 
      api!: "<gql server command>",
      store?: [{ 
          kind: "postgres", // only postgres is supported for now
          provision: "auto", // db is provided by the deployment. only "auto" is supported for now
          limits?: {   // resource limits for the store
             size: 100Gb // default is 100Gb
          } 
      }],
      secrets?: [ // list of secrets that have to be provided by the deployment environment. Deployment should fail if no variable is available
          "MY_SECRET_VARIABLE", 
      ],
      processors!: [
          {
             network!: "kusama", // same as in spec
             genesis!: "0x2abcdef", // genesis block of the network
             name!: "kusama-balances",
             limits?:  {
                CPU:  2, // number of cores
                RAM: 1G
             },
             run!: "<run command>"   
         },
         {
             network!: "polkadot",
             genesis!: "0xabcde",  // genesis block
             name!: "polkadot-balances",
             run!: "<run command>",
             limits?:  {
                CPU:  2, // number of cores
                RAM: 1G
             },   
         },
      ]
  }
}

Unexpected behavior of queries that include `OR`

To Reproduce
Steps to reproduce the behavior:

  1. Go to http://194.233.167.225:4350/graphql
  2. Paste the query:
{
  channels(where: {
    id_in: ["1", "2", "3"],
    OR: [
      { title_contains: "test" },
      { description_contains: "test"  }
    ]
  }) {
    id
  }
}
  1. Hit the "Execute" button
  2. Notice results including channels other than 1, 2 and 3

Expected behavior
The OR condition is joined with other conditions using the default operator (AND)

Expected difference in SQL:

SELECT "channel"."id" AS _c0 FROM "channel" AS "channel"
-  WHERE (("channel"."id" IN ($1, $2, $3)) OR (position($4 in "channel"."title") > 0) OR (position($5 in "channel"."description") > 0))
+  WHERE (("channel"."id" IN ($1, $2, $3)) AND ((position($4 in "channel"."title") > 0) OR (position($5 in "channel"."description") > 0)))

Environment (please complete the following information):

  • @subsquid/graphql-server: 3.2.3

`substrate-ingest` failed to process the block with `tip` as object type

Describe the bug
This line of code throws the error "Do not know how to serialize a BigInt", which originates from the pg package. It turns out the tip value is an object ({tip: 0n, feeExchange: undefined}) instead of 0n

TypeError: Do not know how to serialize a BigInt
    at JSON.stringify (<anonymous>)
    at prepareObject (/Users/ken/relayer-services/node_modules/pg/lib/utils.js:85:19)
    at prepareValue (/Users/ken/relayer-services/node_modules/pg/lib/utils.js:66:12)
    at arrayString (/Users/ken/relayer-services/node_modules/pg/lib/utils.js:29:31)
    at prepareValue (/Users/ken/relayer-services/node_modules/pg/lib/utils.js:63:12)
    at prepareValueWrapper (/Users/ken/relayer-services/node_modules/pg/lib/utils.js:193:12)
    at writeValues (/Users/ken/relayer-services/node_modules/pg-protocol/dist/serializer.js:67:41)
    at Object.bind (/Users/ken/relayer-services/node_modules/pg-protocol/dist/serializer.js:98:5)
    at Connection.bind (/Users/ken/relayer-services/node_modules/pg/lib/connection.js:160:26)
    at Query.prepare (/Users/ken/relayer-services/node_modules/pg/lib/query.js:211:18

To Reproduce
Steps to reproduce the behavior:

  1. Run substrate-ingest against our testnet
yarn squid-substrate-ingest -e wss://nikau.centrality.me/public/ws --start-block 5001560 --out postgres://postgres:postgres@localhost:5432/nikau-archive --write-batch-size 1
  2. The process will hang; to get the actual error, wrap this block inside a try...catch

Expected behavior

  • A warning that the tip value is not of the expected type, while continuing on rather than getting stuck (see the sketch below)
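
A rough illustration of that kind of normalization (the {tip, feeExchange} shape is taken from this report; the helper is hypothetical, not ingester code):

// Hypothetical helper: accept either a plain bigint tip or the
// {tip, feeExchange} object observed on this chain, and return a bigint.
function normalizeTip(tip: unknown): bigint | undefined {
  if (typeof tip === 'bigint') return tip
  if (tip && typeof tip === 'object' && 'tip' in tip) {
    const inner = (tip as {tip: unknown}).tip
    if (typeof inner === 'bigint') return inner
  }
  return undefined // caller can log a warning and continue instead of throwing
}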

Environment (please complete the following information):

  • @subsquid/substrate-ingest: v1.1.1
  • node.js: v18.2.0
  • npm: v8.9.0
  • OS: Mac v12.4


Fix github actions for multi-arch Docker builds

More and more people are using M1 Macs, but some x86-64 images do not work on them.

This creates a major inconvenience for our M1 users who want to run a local indexer, as they have to build the gateway image themselves.

Multi-arch docker builds are the way to go - https://docs.docker.com/desktop/multi-arch/.

We need to modify our GitHub docker-build action to produce multi-arch images for x86-64 and ARM.

How does squid handle runtime upgrades

What is the expected upgrade path for runtime changes? We recently encountered the error described here because metadata is only decoded on startup. Our naive solution to provide earlier insight is to subscribe to the enactment event and exit the process - allowing an external moderator (such as Kubernetes) to flag the failure immediately. Provided we expect the breaking changes in advance, is there any way to schedule the upgrade automatically?

Implement SQLite Store for squid-processors

We currently have a default (TypeormDatabase) implementation of the processor store.
For non-API use cases, having Postgres as the backend database is overkill, and the more lightweight SQLite may be a better choice:

  • It's transactional
  • Lightweight: It doesn't require an additional service to run
  • Supports in-memory databases
  • Inserts can be very fast

The limitation of SQLite is that it supports only a single writer, but that's exactly the setup we have (only the squid processor writes).
SQLite may also be a good choice for API-less pipelines with non-transactional data sinks.

Acceptance criteria:

  • A SQLite implementation of the Store interface (probably only very minor modifications of TypeormDatabase are needed)
  • Document the limitations of the SQLite store and its intended use-cases
  • A sample squid implementation with a SQLite store + tests
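
For illustration, a very rough sketch of the direction using better-sqlite3 (this is not the SDK's Store interface, which a real implementation would have to follow):

// Rough illustration only; the table layout and method names are made up.
import Database from 'better-sqlite3'

class SqliteTransferStore {
  // in-memory by default; a single writer matches the squid processor setup
  private db = new Database(':memory:')

  constructor() {
    this.db.exec(
      'CREATE TABLE IF NOT EXISTS transfer (id TEXT PRIMARY KEY, "from" TEXT, "to" TEXT, amount TEXT)'
    )
  }

  insert(t: {id: string; from: string; to: string; amount: string}): void {
    this.db
      .prepare('INSERT INTO transfer (id, "from", "to", amount) VALUES (?, ?, ?, ?)')
      .run(t.id, t.from, t.to, t.amount)
  }
}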

A utility lib to convert `polkadot.js` types bundle format to subsquid

It's a big pain point for new users to use a custom types bundle due to the different file format we use compared to polkadot.js.
To alleviate this, let's make a utility lib that converts polkadot.js bundle files to the subsquid format. We can then reuse this lib so that both the ingester and the typegen will be able to accept polkadot.js-style bundle files.

Technical decisions to make before GA

Below is a list of technical decisions to make before GA.

  • Meaning of *_every filters.
  • Need for prefixes for standard prometheus node_* metrics.
  • Id-less entities

*_every filters

Graphql server provides *_every filters:

query { 
  # select users who have only high profile friends
  users(where: {friends_every: {rating_gt: 100}}) {
    id
    name
  }
}

The question is what to do when a user doesn't have friends.

Include user in result set

Pros:

  • In all programming languages [].every(() => false) == true
  • The opposite behaviour is easy to achieve with where: {friends_every: {rating_gt: 100}, friends_some: {}}
  • Openreader (hydra v5) already behaves like this.

Cons:

  • Many people find such behaviour confusing
  • Hydra v4 behaves in opposite way

Do not include user in result set

Pros:

  • See cons of the opposite decision

Cons:

  • Will be implemented as a combination of "classical every" and "some" check
  • The opposite behaviour is a bit hard to achieve (where: {_or: [{friends_every: {rating_gt: 100}}, {friends_none: {}}]})

From a pure convenience point of view, the v4 behaviour will probably be more frequently desired, but that's not entirely clear.
For example, the above query will most likely benefit from v4, but the query below from v5:

query {
  # find all accounts which were never involved in significant money transfers
  accounts(where: {transfers_every: {balance_lt: 10}}) {
    id
  }
}

Name prefixes for standard metrics

The processor runs a prometheus endpoint and exposes a number of metrics, including the standard node_* set.
In hydra all standard metrics had the hydra_processor_ prefix, but now they are exposed as is.
Someone familiar with prometheus should tell us which way is better.

UPD: After reading a bit of the prometheus docs, it seems that prefix-less metrics are the way to go.

Id-less entities

For some tables you never want to reference individual rows. In such cases you don't want to bother with id assignment and allocate an extra useless column in the database, yet our framework requires explicit id assignment for all entities.

Currently it is possible in schema.graphql to omit the id field in an entity type, but the framework will still create it implicitly.
We can change this behaviour and create id-less tables in such cases.

If we accept the above idea, the next question will be about the necessity of composite primary keys for such tables.

We are planning to support multi-field indexes via entity level index annotations:

type Foo @entity @index(columns: ["foo", "bar"]) {
  foo: Int!
  bar: Int!
}

Primary key could be specified like this:

type Foo @entity @index(columns: ["foo", "bar"], unique: true) {
  foo: Int!
  bar: Int!
}

From the database point of view primary keys are not required, and it would be convenient to create primary-key-less tables for id-less entities; however, TypeORM requires a primary key.

We can do something like this:

@Entity()
export class Rec {
    constructor(props?: Partial<Rec>) {
        Object.assign(this, props)
    }

    @PrimaryGeneratedColumn('rowid')
    private _id!: number

    @Column()
    x!: number

    @Column()
    y!: number
}

but this will still expose a non-deterministic column at the database level, which is not ideal for applications that connect to the database directly.

Ingest fails on 🐍 block

Ingest refuses to process this block https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fbasilisk-rpc.dwellir.com#/explorer/query/0x6e27fe16eea91adfa509401cd2285714000af33c4b85741d2aafece45e1bce7e on Basilisk

with exception:

AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
              (0, assert_1.default)(val != null, msg)
at assertNotNull (/squid/util/util-internal/lib/misc.js:18:26)
at CallParser.next (/squid/substrate-ingest/lib/parse/call.js:324:50)
at CallParser.find (/squid/substrate-ingest/lib/parse/call.js:300:30)
at CallParser.visitWrapper (/squid/substrate-ingest/lib/parse/call.js:271:27)
at CallParser.visitCall (/squid/substrate-ingest/lib/parse/call.js:183:22)
at CallParser.visitBatchItems (/squid/substrate-ingest/lib/parse/call.js:242:22)
at CallParser.visitBatchAll (/squid/substrate-ingest/lib/parse/call.js:208:14)
at CallParser.visitCall (/squid/substrate-ingest/lib/parse/call.js:168:22)
at CallParser.visitExtrinsic (/squid/substrate-ingest/lib/parse/call.js:134:22)
at new CallParser (/squid/substrate-ingest/lib/parse/call.js:124:18)

version:

subsquid/substrate-ingest:firesquid
sha256:2a7d82afbcddf394c87de1b456c76a1f465f571e2b646be0d8b58470229f5ba9

complete error:

{"level":5,"time":1657813904006,"ns":"sqd:substrate-ingest","err":{"generatedMessage":true,"code":"ERR_ASSERTION","actual":false,"expected":true,"operator":"==","blockHeight":1471534,"blockHash":"0x6e27fe16eea91adfa509401cd2285714000af33c4b85741d2aafece45e1bce7e","stack":"AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:\n\n  (0, assert_1.default)(val != null, msg)\n\n    at assertNotNull (/squid/util/util-internal/lib/misc.js:18:26)\n    at CallParser.next (/squid/substrate-ingest/lib/parse/call.js:324:50)\n    at CallParser.find (/squid/substrate-ingest/lib/parse/call.js:300:30)\n    at CallParser.visitWrapper (/squid/substrate-ingest/lib/parse/call.js:271:27)\n    at CallParser.visitCall (/squid/substrate-ingest/lib/parse/call.js:183:22)\n    at CallParser.visitBatchItems (/squid/substrate-ingest/lib/parse/call.js:242:22)\n    at CallParser.visitBatchAll (/squid/substrate-ingest/lib/parse/call.js:208:14)\n    at CallParser.visitCall (/squid/substrate-ingest/lib/parse/call.js:168:22)\n    at CallParser.visitExtrinsic (/squid/substrate-ingest/lib/parse/call.js:134:22)\n    at new CallParser (/squid/substrate-ingest/lib/parse/call.js:124:18)"}}

Metadata is not inserted when starting ingest from a random block

Hello.

If you start the ingest from a non-zero block, for example 11270865, the metadata table will remain empty, which leads to an error when starting the squid.
If you start the ingest from a block before the metadata change, for example 10879370, then the metadata is added to the table.

Please add a check for the presence of the first block's metadata in the database when starting the ingest; if there is no metadata, it should be added.

Add auto-reconnect to squid processor

The processor stops with an error if it fails to connect to the archive. Let's introduce retries with exponential back-offs, so that it keeps trying to connect to the archive even if it is temporarily unavailable.
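
A generic sketch of the requested behaviour (the helper and option names are illustrative, not the SDK's API):

// Illustrative retry helper with exponential back-off; not SDK code.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  {initialMs = 1_000, maxMs = 60_000}: {initialMs?: number; maxMs?: number} = {}
): Promise<T> {
  let delay = initialMs
  while (true) {
    try {
      return await fn()
    } catch (err) {
      console.warn(`archive request failed, retrying in ${delay}ms`, err)
      await new Promise(resolve => setTimeout(resolve, delay))
      delay = Math.min(delay * 2, maxMs) // double the wait, capped at maxMs
    }
  }
}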

Inquiry about decoding errors in event indexing

Describe
Hello,

I'm currently working on an event indexing project that uses a given ABI file to filter topics and decode results. However, since there is no fixed contract address, the input parameters may not always satisfy the decoding requirements, resulting in a panic during decoding.

I was wondering if there is a try_decode-like method available that could return null or an error when there are issues with decoding, such as incorrect length or failure to decode the event correctly. This would enable me to have more flexibility in handling these situations.

Could the development team provide any suggestions or advice for addressing this issue?

Thank you!
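
Until something like that exists, a user-side workaround sketch (my own illustration, assuming the generated decoder throws on malformed input; the wrapper and decoder names are hypothetical):

// Hypothetical wrapper: turn a throwing decoder into one that returns null.
function tryDecode<T>(decode: (hex: string) => T, hex: string): T | null {
  try {
    return decode(hex)
  } catch (err) {
    // topics/data that don't match this ABI: skip the log instead of crashing
    return null
  }
}

// Usage sketch: const ev = tryDecode(hex => decodeMyEvent(hex), log.data)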

relation "migrations" does not exist

I am trying to docker compose up the Substrate archiver with a remote Supabase Postgres db and a local Moonriver full node.

I only have one service

services:
  ingest:
    restart: on-failure
    image: subsquid/substrate-ingest:firesquid
    command: [
       "-e", "ws://host.docker.internal:9944",
       "-c", "10",
       "--prom-port", "9090",
       "--out", "postgres://postgres:[email protected]:6543/postgres"
    ]
    ports:
      - "9090:9090"

Here is the full error. Maybe some extra config is missing from the setup guide at
https://github.com/subsquid/substrate-archive-setup/issues

{"level":5,"time":1681318981534,"ns":"sqd:substrate-ingest","err":{"cause":{"length":109,"name":"error","severity":"ERROR","code":"42P01","position":"15","file":"parse_relation.c","line":"1392","routine":"parserOpenTable","stack":"error: relation \"migrations\" does not exist\n at Parser.parseErrorMessage (/squid/common/temp/node_modules/.pnpm/[email protected]/node_modules/pg-protocol/dist/parser.js:287:98)\n at Parser.handlePacket (/squid/common/temp/node_modules/.pnpm/[email protected]/node_modules/pg-protocol/dist/parser.js:126:29)\n at Parser.parse (/squid/common/temp/node_modules/.pnpm/[email protected]/node_modules/pg-protocol/dist/parser.js:39:38)\n at Socket.<anonymous> (/squid/common/temp/node_modules/.pnpm/[email protected]/node_modules/pg-protocol/dist/index.js:11:42)\n at Socket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:315:12)\n at readableAddChunk (node:internal/streams/readable:289:9)\n at Socket.Readable.push (node:internal/streams/readable:228:10)\n at TCP.onStreamRead (node:internal/stream_base_commons:190:23)"},"stack":"Error: Migration failed. Reason: relation \"migrations\" does not exist\n at /squid/common/temp/node_modules/.pnpm/[email protected]/node_modules/postgres-migrations/dist/migrate.js:100:27\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async /squid/common/temp/node_modules/.pnpm/[email protected]/node_modules/postgres-migrations/dist/with-lock.js:25:28\n at async /squid/substrate/substrate-ingest/lib/main.js:109:13"}}

Add validatorId to BlockContext

ValidatorId (that is, the account who authored the block) is not accessible in the handler context. It should be fetched from the archive and added to the context

Metadata decoding failed due to weight change

Describe the bug
Our process lost connection and failed to restart due to the weights v2 Substrate change.

Expected behavior
Metadata decoding should be dynamic so this should not have caused the process to exit.

Environment (please complete the following information):

Applicable to decoding issues:

  • Running v.1.19.0 on Kintsugi (doesn't include polkadot-v0.9.29)
  • Upgrade scheduled at block 15063893 on Kusama (v1.19.1 - includes weight change)
  • Upgrade applied at block 1758928 on Kintsugi
  • Squid loses connection at 1820147, decoding metadata fails on restart here

Additional context
The weight type was fixed in the latest scale-codec package but I am aware this type is expected to change again so I am hoping that this doesn't require any additional manual changes.

Expose GraphQL types generated from the schema

Based on a few requests, it would be convenient to explicitly expose the GraphQL types generated from the schema (similar to TypeORM types). Otherwise, one has to manually copy them to be used in custom resolvers.

FireSquid Ingest: Crashes on failed extrinsics

Hi,

I'm running the newest FireSquid archive and batch processor for Rococo-contracts and noticed that the processor crashes when it tries to process a failed extrinsic.

This is the error that I'm getting:

10:43:53 FATAL sqd:processor AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:

                               (0, assert_1.default)(val != null, msg)

                                 at assertNotNull (/home/xueying/dev/squid-ink/node_modules/@subsquid/util-internal/lib/misc.js:18:26)
                                 at tryMapGatewayBlock (/home/xueying/dev/squid-ink/node_modules/@subsquid/substrate-processor/lib/ingest.js:292:64)
                                 at mapGatewayBlock (/home/xueying/dev/squid-ink/node_modules/@subsquid/substrate-processor/lib/ingest.js:227:16)
                                 at Array.map (<anonymous>)
                                 at /home/xueying/dev/squid-ink/node_modules/@subsquid/substrate-processor/lib/ingest.js:60:45
                                 at processTicksAndRejections (node:internal/process/task_queues:96:5)
                             err:
                               generatedMessage: true
                               code: ERR_ASSERTION
                               actual: false
                               expected: true
                               operator: ==
                               blockHeight: 47346
                               blockHash: 0xa0902b864029f88d5f379a34654c2ad370c4ad8986fc52153b1fe08f802eef0c
                               batchRange:
                                 from: 0
                               archiveHeight: 477419
                               archiveQuery: query {
                                               status {
                                                 head
                                               }
                                               batch(fromBlock: 0, toBlock: 477419, limit: 500, includeAllBlocks: false, events: [{name: "System.NewAccount"}, {name: "Balances.Reserved"}, {name: "Balances.Transfer"}, {name: "Balances.Withdraw"}, {name: "Contracts.CodeStored"}, {name: "Contracts.ContractCodeUpdated"}, {name: "Balances.Endowed"}, {name: "Contracts.Instantiated"}, {name: "Contracts.ContractEmitted"}], calls: [{name: "Contracts.call"}]) {
                                                 header {
                                                   id
                                                   height
                                                   hash
                                                   parentHash
                                                   timestamp
                                                   specId
                                                 }
                                                 events
                                                 calls
                                                 extrinsics
                                               }
                                             }

                               batchBlocksFetched: 500

When I query my archive API, I see that at the erroneous block there is a failed extrinsic and no calls but the extrinsic does contain a callId:

  "batch": [
    {
      "header": {
        "id": "0000047346-a0902",
        "height": 47346,
        "hash": "0xa0902b864029f88d5f379a34654c2ad370c4ad8986fc52153b1fe08f802eef0c",
        "parentHash": "0x77370cc8389791254c41c9ef9c4b81f57003e9bc697fdb94dfd6d36dfd95570b",
        "timestamp": "2022-03-31T13:39:48.087+00:00",
        "specId": "canvas-kusama@16"
      },
      "calls": [],
      "extrinsics": [
        {
          "callId": "0000047346-000002-a0902",
          "error": {
            "__kind": "BadOrigin"
          },
          "fee": null,
          "hash": "0x6a0cac5df13038fc5bf2f36cc71ca62d4be54bb94325008e7898b34db5cd8532",
          "id": "0000047346-000002-a0902",
          "indexInBlock": 2,
          "pos": 10,
          "signature": {
            "address": {
              "__kind": "Id",
              "value": "0x2681a28014e7d3a5bfb32a003b3571f53c408acbc28d351d6bf58f5028c4ef14"
            },
            "signature": {
              "__kind": "Sr25519",
              "value": "0xba4d4de7ed277881786dfefb9275bd05aee8be4f626e115ea181ec41bab6ab6dfcb63318ed6d1c4685decc846f76476ba09f1cab84cda07b5e001a2813ffb682"
            },
            "signedExtensions": {
              "ChargeTransactionPayment": 0,
              "CheckMortality": {
                "__kind": "Mortal228",
                "value": 0
              },
              "CheckNonce": 0
            }
          },
          "success": false,
          "tip": "0",
          "version": 4
        }
      ]
    }

This is causing the processor to crash, since in tryMapGatewayBlock of ingest.ts (lines 357-362) the processor asserts that call is not null if callId exists. I imagine that the archive should be setting the callId to null in case of failed extrinsics?

I also encountered the same issue in Shibuya at block 1010459 using the Shibuya archive endpoint, https://shibuya.archive.subsquid.io/graphql

qemu error when running indexer-gateway on Apple Silicon

Running subsquid/hydra-indexer-gateway:5 on M1 MacBook Pro

Expected behavior

Ran docker-compose up. Expected localhost:4010 to start up

Actual behavior

Got the following log / error:

found metadata database
{"timestamp":"2022-03-06T00:50:54.000+0000","level":"info","type":"startup","detail":{"kind":"migrations-startup","info":"migrations server port env var is not set, defaulting to 9691"}}
{"timestamp":"2022-03-06T00:50:54.000+0000","level":"info","type":"startup","detail":{"kind":"migrations-startup","info":"server timeout is not set, defaulting to 30 seconds"}}
{"timestamp":"2022-03-06T00:50:54.000+0000","level":"info","type":"startup","detail":{"kind":"migrations-startup","info":"starting graphql engine temporarily on port 9691"}}
{"timestamp":"2022-03-06T00:50:54.000+0000","level":"info","type":"startup","detail":{"kind":"migrations-startup","info":"waiting 30 for 9691 to be ready"}}
{"timestamp":"2022-03-06T00:51:26.000+0000","level":"info","type":"startup","detail":{"kind":"migrations-startup","info":"failed waiting for 9691, try increasing HASURA_GRAPHQL_MIGRATIONS_SERVER_TIMEOUT (default: 30)"}}
metadata hash: 87816a67260b2ab0b18da3fae9a8ec0a
metadata db: gateway_metadata_87816a67260b2ab0b18da3fae9a8ec0a
waiting until indexer db is ready
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
qemu: uncaught target signal 11 (Segmentation fault) - core dumped

Information

According to docker/for-mac#5123 (comment), an image which supports mac m1 chips can probably fix this.

Support for ink! v4 metadata decoding

Hi,

We're currently working on adding decoded ink! contract events to our Substrate contracts explorer. The idea is that users will be able to upload source code or metadata to our ink-verifier-server and we use the generated/stored metadata in our squid to decode contract events. Since we are mostly dealing with ink! v4 contracts, we'd like to know if @subsquid/ink-abi will support ink! v4 soon? I've recently opened an issue with the ink! team to check on ink-v4-schema updates. You can find the related issue here: use-ink/ink#1564

Incorrect `EthAccessListItem` type definition

In our typedefs:

  "EthAccessListItem": {
    "address": "EthAddress",
    "slots": "Vec<H256>"
  },

In v14 metadata

  "EthAccessListItem": {
    "address": "EthAddress",
    "storageKeys": "Vec<H256>"
  },

Purpose of Hasura

Hi!
We are currently rebuilding a French blockchain with Substrate, and I would like to use squid to expose a custom GraphQL API on top of it. I'm new to Substrate and just discovered squid, so sorry if my issue looks naïve. However, I've been working with Hasura for three years, so I'm really happy to see it in your stack!

I read in your doc :

Subsquid is rapidly deploying Archives to gather data from available blockchains on behalf of developers who wish to use this data to develop DApps.

So, I don't understand why Hasura sits on top of the indexer (Archive).
The main purpose of Hasura is to build apps fast. I need the actions, remote schema stitching, events/cron and row/column level permissions provided by Hasura to build my awesome DApp. Why is Hasura on the indexer and not part of the squid API?

Also, we don't need an ORM with Hasura. Hasura is an ORM on top of the database!
Why use TypeORM in squid to build the GraphQL API when Hasura could do that out of the box?
Hasura can also build a GraphQL endpoint on top of many databases. Why not connect the squid database directly to Hasura? squid/graphql-server is useless!
I certainly don't understand the squid stack well, but it seems you are missing out on many killer features of Hasura...

Anyway, for now, I used a "remote schema" to merge the squid API into Hasura. This way, I don't have to go to the squid GraphiQL at port 4350. I just go to Hasura at localhost:4010/console and I get the indexer and squid API merged together. This way, we can imagine many squids running as microservices, with one endpoint to fetch them all...


ink! smart contract without events should have any type

Describe the bug

I have a smart contract that does not emit any events.

The problem is that when I run npx squid-ink-typegen (+ args of course), it generates content like:

export function decodeEvent(hex: string): Event {
    return _abi.decodeEvent(hex)
}

export type Event = never

Now it is impossible to build the squid:

src/abi/marketplace.ts:1412:5 - error TS2322: Type 'any' is not assignable to type 'never'.

1412     return _abi.decodeEvent(hex)
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Found 1 error in src/abi/marketplace.ts:1412
Possible fixes:

  1. Return any if there is no event present:
export type Event = any

  2. Do not generate the decodeEvent function at all.

Decide on many-to-many representation in schema.graphql

Many-to-many relations in the graph and hydra v4 supposed to be defined as follows:

type Organization @entity {
  id: ID!
  name: String!
  members: [User!]! @derivedFrom(field: "organization")
}

type User @entity {
  id: ID!
  name: String!
  organizations: [Organization!]! 
}

However, this seems to be very problematic.

First, the organizations: [Organization!]! property can easily be confused with a real array of organization ids.
Such an implementation of many-to-many relations makes perfect sense and is very useful in some cases.
Although we don't plan to support this in the near future, it would be nice to reserve the possibility.

Second, the above scheme is needlessly obscure. It hides the underlying join table, creating the possibility of name conflicts,
and the join table generation algorithm is not clear.

Third, it makes it impossible to add additional fields to the join table.

In summary:

Pros:

  • compatibility with the graph and hydra v4

Cons:

  • breaks the nice rule that all virtual lookup fields have @derivedFrom
  • obscure
  • makes arrays of ids ugly to express
  • makes addition of extra fields to the join table ugly to express
  • makes it impossible to have multiple kinds of relations involving the same entities.

Suggestion

To define a many-to-many relation, one has to define the join table explicitly:

type UserOrganization @entity {
  user: User!
  organization: Organization!
}

type Organization @entity {
  id: ID!
  name: String!
  members: [User!]! @derivedFrom(field: "organization", joinTable: "UserOrganization")
}

type User @entity {
  id: ID!
  name: String!
  organizations: [Organization!]! @derivedFrom(field: "member", joinTable: "UserOrganization")
}

We can decide on rules for primary keys of join tables and other details after we conclude that this is a general way to go.

The `event.evmtxhash` of moonbase network is `null`

Describe the bug
When testing moonbase, I found that the evmTxHash in the block log message was null.

To Reproduce
Steps to reproduce the behavior:

  1. Create a new SubstrateBatchProcessor
  2. setDataSource: setDataSource({
    chain: 'wss://wss.api.moonbase.moonbeam.network',
    archive: lookupArchive('moonbase', { release: "FireSquid" })
    })
  3. filter block items (item.kind === 'event' && item.name === 'EVM.Log')
  4. console.log(item.event.evmTxHash)

Expected behavior
evmTxHash should be the correct tx hash



Applicable to decoding issues:

  • chain name: moonbase
  • block: 2828726


Missing typeorm Index import when only using multi-column indices

Describe the bug
Codegen does not include typeorm's Index as Index_ when only using multi-column indices.

To Reproduce
Steps to reproduce the behavior:

  1. Clone the squid-substrate-template repository
  2. npm run update && npm ci
  3. Edit the schema.graphql to only include the following lines:
type AliceBob @entity @index(fields: ["alice", "bob"]) {
  alice: String!
  bob: String!
}
  4. Run sqd codegen
  5. See that the src/model/generated/aliceBob.model.ts uses the unreferenced @Index_ decorator:
import {Entity as Entity_, Column as Column_, PrimaryColumn as PrimaryColumn_} from "typeorm"

@Index_(["alice", "bob"], {unique: false})
@Entity_()
export class AliceBob {
...

Expected behavior
The Index as Index_ import should be added to the generated model.

import {Entity as Entity_, Column as Column_, PrimaryColumn as PrimaryColumn_, Index as Index_} from "typeorm"

@Index_(["alice", "bob"], {unique: false})
@Entity_()
export class AliceBob {
...

Environment (please complete the following information):

  • @subsquid/cli (2.1.0)
  • node.js v16.16.0
  • npm 9.4.1
  • OS Ubuntu 20.04.5 LTS, 5.10.102.1-microsoft-standard-WSL2

Decode extrinsic error at ingestion

Hi guys,

At the moment, an extrinsic module error comes in the form:

{
  "__kind": "Module",
  "value": {
    "error": "0x06000000",
    "index": 9
  }
}

Unfortunately, this is not very helpful when we try to show it in the UI of an explorer. It would be nice to have it in decoded form, like what @polkadot/api's api.registry.findMetaError returns:

{
  args: [],
  docs: [ 'Contract trapped during execution.' ],
  fields: Type(0) [ registry: TypeRegistry {}, initialU8aLength: 1 ],
  index: 11,
  method: 'ContractTrapped',
  name: 'ContractTrapped',
  section: 'contracts'
}

Is this something you guys plan on supporting?
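
In the meantime, a client-side sketch using @polkadot/api (assuming a connected ApiPromise; the exact argument shape accepted by findMetaError has varied across polkadot-js versions, so treat this as illustrative only):

// Illustrative only; verify the findMetaError signature for your polkadot-js version.
import {ApiPromise} from '@polkadot/api'
import {BN, hexToU8a} from '@polkadot/util'

function describeModuleError(api: ApiPromise, index: number, error: string): string {
  const meta = api.registry.findMetaError({
    index: new BN(index),   // e.g. 9 from this issue
    error: hexToU8a(error), // e.g. '0x06000000' from this issue
  })
  // the fields match the shape shown above: section, name, docs
  return `${meta.section}.${meta.name}: ${meta.docs.join(' ')}`
}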

Objects containing Enum fields cause build failures

Consider a schema such as:

enum MyEnum {
  A
  B
}

type MyObject {
  enumField: MyEnum!
}

codegen will then generate a src/model/generated/_myObject.ts file, containing a constructor that allows deserializing from JSON using marshal. However, it treats the enum field as a string, causing the typescript build to fail (with the error Type 'string' is not assignable to type 'MyEnum'.):

  constructor(props?: Partial<Omit<MyObject, 'toJSON'>>, json?: any) {
    Object.assign(this, props)
    if (json != null) {
      this._enumField = marshal.string.fromJSON(json.enumField)
    }
  }

I believe the fix would be to generate a type cast for the enum type, e.g.

      this._enumField = marshal.string.fromJSON(json.enumField) as MyEnum
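
With that cast applied, the generated constructor from above would read:

  constructor(props?: Partial<Omit<MyObject, 'toJSON'>>, json?: any) {
    Object.assign(this, props)
    if (json != null) {
      // cast the decoded string back to the enum type
      this._enumField = marshal.string.fromJSON(json.enumField) as MyEnum
    }
  }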

Roadmap Q1 2023

Core

Feature: Status (Release Target)

  • EVM chains: DONE (Q4)
  • Squid store for s3, csv, parquet: FEATURE COMPLETE (Q4)
  • Squid store for BigQuery: FEATURE COMPLETE (Q1)
  • Ink! v4 + state queries: FEATURE COMPLETE (Q1)
  • HTTPS RPC clients: DONE (Q1)
  • EVM archives for Moonbase, Moonbeam, Moonriver: DONE (Q1)
  • Unified codebase for Substrate and EVM: FEATURE COMPLETE (Q1)
  • Unfinalized (real-time) blocks indexing: PLANNED (Q1)
  • Redesign of Squid CLI: PLANNED (Q1)

Aquarium (Squid Hosted Service)

Feature: Status (Release Target)

  • Deployment manifest: DONE (Q4)
  • Aquarium Redesign: FEATURE COMPLETE (Q4)
  • Squid hibernation: DONE (Q4)
  • Squid scaling and dedicated resources: DONE (Q1)
  • Direct access to Squid Postgres: FEATURE COMPLETE (Q1)
  • Native integration with RPC providers: PLANNING (Q1)
  • Live squid metrics: PLANNING (Q1)
  • Team-managed accounts: PLANNING (Q1)

Integrations

Feature: Status (Release Target)

  • Migration to Fire Squid: DONE (Fire Squid)
  • Giant Squid API: IN PROGRESS (Q1)

Miscellaneous

Feature: Status (Release Target)

  • Docs redesign: DONE (Q3)
  • Refactoring: IN PROGRESS (Q1)

Processor crashing with ECONNRESET

Describe the bug
The indexer had been running for several days when I realized it had crashed earlier with the error displayed below.
I relaunched it and it's working well, as before.

To Reproduce
You won't be able to reproduce it right away, but I'm using the squid available at https://github.com/ChainSafe/Multix/tree/main/squid

  • Run make process
  • Run make serve in another terminal
  • See the errors below, both in the graphql server and the processor

Expected behavior
No crash

Screenshots
earlier warnings in the processor:

11:12:53 WARN  sqd:processor:archive-request retry                                                                                                                                                         
                                             archiveUrl: https://rococo.archive.subsquid.io/graphql                                                                                                        
                                             archiveRequestId: 992                                                                                                                                         
                                             archiveQuery: query { status { head } }                                                                                                                       
                                             backoff: 5000                                                                                                                                                 
                                             reason: request to https://rococo.archive.subsquid.io/graphql failed, reason: getaddrinfo EAI_AGAIN rococo.archive.subsquid.io                                

11:14:56 WARN  sqd:processor:archive-request retry                                                                                                                                                         
                                             archiveUrl: https://rococo.archive.subsquid.io/graphql                                                                                                        
                                             archiveRequestId: 33                                                                                                                                          
                                             archiveQuery: query {                                                                                                                                         
                                                             status {                                                                                                                                      
                                                               head                                                                                                                                        
                                                             }                                                                                                                                             
                                                             batch(fromBlock: 4011039, toBlock: 4011039, includeAllBlocks: false, events: [{name: "Proxy.PureCreated", data: {event: {args: true, extrinsic
: {hash: true, fee: true}}}}], calls: [{name: "System.remark", data: {call: {args: true, origin: true}}}, {name: "Balances.transfer_keep_alive", data: {call: {args: true, origin: true}}}, {name: "Multisi
g.as_multi_threshold_1"}, {name: "Proxy.proxy", data: {call: {args: true, origin: true}}}, {name: "Multisig.approve_as_multi"}, {name: "Multisig.cancel_as_multi"}, {name: "Multisig.as_multi"}]) {        
                                                               header {                                                                                                                                    
                                                                 id                                                                                                                                        
                                                                 height                                                                                                                                    
                                                                 hash                                                                                                                                      
                                                                 parentHash                                                                                                                                
                                                                 timestamp                                                                                                                                 
                                                                 specId                                                                                                                                    
                                                                 stateRoot                                                                                                                                 
                                                                 extrinsicsRoot                                                                                                                            
                                                                 validator                                                                                                                                 
                                                               }                                                                                                                                           
                                                               events                                                                                                                                      
                                                               calls                                                                                                                                       
                                                               extrinsics                                                                                                                                  
                                                             }                                                                                                                                             
                                                           }                                                                                                                                               
                                                                                                                                                                                                           
                                             backoff: 100                                                                                                                                                  
                                             reason: request to https://rococo.archive.subsquid.io/graphql failed, reason: getaddrinfo EAI_AGAIN rococo.archive.subsquid.io                                
11:14:57 INFO  sqd:processor 4011039 / 4011039, rate: 0 blocks/sec, mapping: 0 blocks/sec, 0 items/sec, ingest: 0 blocks/sec, eta: 0s                                                                      

Error in the processor:

11:19:39 FATAL sqd:processor Error: read ECONNRESET                                                  
                                 at TCP.onStreamRead (node:internal/stream_base_commons:217:20)                                                                                                            
                             err:                 
                               errno: -104        
                               code: ECONNRESET   
                               syscall: read      
make: *** [Makefile:2: process] Error 1      

Error in the graphql server:

ERROR sqd:graphql-server Connection terminated unexpectedly                                                                                                                                       
                                                                                                                                                                                                           
                                  GraphQL request:3:3                                                                                                                                                      
                                  2 |     query MultisigsByAccounts($accounts: [String!]) {                                                                                                                
                                  3 |   multisigs(where: {signers_some: {signer: {id_in: $accounts}}}) {                                                                                                   
                                    |   ^                                                                                                                                                                  
                                  4 |     id 
                                  [... shortening it as not relevant]
node:events:490                                   
      throw er; // Unhandled 'error' event        
      ^                                           

Error: Connection terminated unexpectedly         
    at Connection.<anonymous> (/root/Multix/squid/node_modules/pg/lib/client.js:132:73)              
    at Object.onceWrapper (node:events:626:28)    
    at Connection.emit (node:events:512:28)       
    at Socket.<anonymous> (/root/Multix/squid/node_modules/pg/lib/connection.js:107:12)              
    at Socket.emit (node:events:524:35)           
    at endReadableNT (node:internal/streams/readable:1359:12)                                        
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)                   
Emitted 'error' event on BoundPool instance at:                                                      
    at Client.idleListener (/root/Multix/squid/node_modules/pg-pool/index.js:57:10)                  
    at Client.emit (node:events:512:28)           
    at Client._handleErrorEvent (/root/Multix/squid/node_modules/pg/lib/client.js:319:10)            
    at Connection.<anonymous> (/root/Multix/squid/node_modules/pg/lib/client.js:149:16)              
    at Object.onceWrapper (node:events:626:28)    
    [... lines matching original stack trace ...]                                                    
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {     
  [... shortening it as not relevant]

Environment (please complete the following information):

  • specify the version of the core packages (@subsquid/cli, @subsquid/substrate-processor, ...)
    "@subsquid/archive-registry": "1.0.15",
   "@subsquid/graphql-server": "3.2.3",
   "@subsquid/ss58": "0.1.2",
   "@subsquid/substrate-processor": "2.0.1",
   "@subsquid/typeorm-migration": "0.1.3",
   "@subsquid/typeorm-store": "0.1.5",

Applicable to decoding issues:

  • chain name: Rococo


How to fetch events from any contract?

Example: how to fetch Transfer events from every ERC20 on a chain without having to specify one contract.

I can reproduce this flow with The Graph but not with squid. Is there any example?
Thanks!
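
For reference, a sketch of what this might look like with the EVM processor, under the assumption (worth verifying against the current @subsquid/evm-processor docs) that omitting the address option in addLog matches logs from every contract:

// Partial, illustrative sketch; option names should be checked against the docs.
// The erc20 module is assumed to be generated with squid-evm-typegen.
import {EvmBatchProcessor} from '@subsquid/evm-processor'
import * as erc20 from './abi/erc20'

const processor = new EvmBatchProcessor()
  // no `address` filter here, so (assumption) Transfer logs from any contract are matched
  .addLog({topic0: [erc20.events.Transfer.topic]})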

Error encountered with "wrapAccessError" message while decoding ABI for event with property "owner"

Describe the bug
I encountered an error named "wrapAccessError" in my program. The error message displayed "Error: deferred error during ABI decoding triggered accessing property "owner"". This confuses me, as the same event has been processed multiple times before, but this one failed.
To Reproduce
Steps to reproduce the behavior:

Use the following data for testing:
id: 0005648711-000174-018c3
address: 0x314159265dd8dbb310642f98f50c066173c1259b
data: 0x6330363834636235336331363831343865616130313363333864316330663339
index: 174
topics: ["0xce0457fe73731f824cc272376169235128c118b49d344817417c6d108d155e82","0x8352da3d0ebe15fd4bd7def280458b52cdb17c9c50ef26bed05f77a09a37033d","0x6630353636393265383962393531363132383865306565373939353535636238"]
transactionIndex: 102

Decode the data structure:
[node: string, label: string, owner: string] & {
node: string;
label: string;
owner: string;
}

The error "wrapAccessError" occurs with the message "Error: deferred error during ABI decoding triggered accessing property "owner""

Expected behavior
I expected the event to be processed successfully like the previous times.

Screenshots
N/A
