
lumos's People

Contributors

bitrocks, classicalliu, dependabot[bot], doitian, funfungho, github-actions[bot], gpblockchain, homura, ironlu233, kennytian, kuzirashi, liriu, liubq7, painterpuppets, phroi, sighwang, stwith, xxuejie, zhangyouxin


lumos's Issues

Confusing helper function names `generateAddress` and `parseAddress`

A minor point, but I personally find the function names generateAddress and parseAddress in helpers to be confusing. I found myself forgetting what they do and having to look it up a few times.

It isn't intuitive for me that generating an address takes in a Script as input, or that parsing an address results in a Script.
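For illustration, here is a minimal sketch of the current behavior (assuming the Script type from @ckb-lumos/base), which shows why the names read backwards:

import { generateAddress, parseAddress } from "@ckb-lumos/helpers";
import { Script } from "@ckb-lumos/base";

declare const lockScript: Script;

// A Script goes in, an address string comes out:
const address: string = generateAddress(lockScript);

// An address string goes in, a Script comes out:
const script: Script = parseAddress(address);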

I would consider renaming them to something like scriptToAddress and addressToScript if things are still at an early enough stage to do that.

Curious about others' thoughts.

Move to plain HTTP library to avoid quirks in jsonrpc-client

Right now the lumos indexer uses the same JSONRPC client library as ckb-indexer. One problem with this library is that it is hardcoded to hyper::rt, which makes it impossible to swap the runtime. To make matters worse, hyper::rt uses tokio in a mode that forbids running on a single-core machine.

Since we only need JSONRPC for one API, it won't be a problem to build the specific code we need on a plain HTTP library with better async/await support, such as surf.

Support connecting node via https

When trying to connect the lumos indexer to the node https://lina.ckb.dev, it throws the following error:
(screenshot of the error omitted)

I guess this is because the lumos indexer currently doesn't support nodes exposed via the HTTPS protocol.

[feature request] more JSer-friendly Collector for Lumos

Motivation

As a JS developer, I'm more comfortable using functions such as map, filter, and reduce to handle an Iterable than for...in / for...of. The AsyncIterator exposed by Lumos is even less common, and I'm more used to using Promise for async work.

https://github.com/nervosnetwork/lumos/blob/d5e7636736e22c3247d36b4160213e74ed23dd0e/packages/ckb-indexer/src/collector.ts#L284

How

Add a new method takeWhileAggregate to Collector, like this:

declare class CellCollector {
  takeWhileAggregate<T>(
    initialValue: T, 
    aggregate: (aggr: T, cell: Cell, currentIndex: number) => T, 
    takeWhile: (aggr: T, cell: Cell, currentIndex: number) => boolean,
    // defaults to false. If true, exclude the last collected element, i.e. the one for which takeWhile returned false
    excludeLast?: boolean,
  ): Promise<T>;
}

Examples

Collect Lock-only Cells and Calculate Total Capacity

const total = await new CellCollector(...).takeWhileAggregate(
  BI.from(0),
  (sum, cell) => sum.add(cell.cellOutput.capacity),
  () => true,
);

Collect 10,000 Units of sUDT Cells

const { cells, amount } = await new CellCollector(...).takeWhileAggregate(
  { cells: [] as Cell[], amount: BI.from(0) },
  ({ cells, amount }, cell) => ({
    cells: cells.concat(cell),
    amount: amount.add(Uint128.unpack(cell.data)),
  }),
  ({ amount }) => amount.lt(10000)
)

if (amount.lt(10000)) {
  throw new Error('not enough sUDT, expected over 10000, actual ' + amount )
}

Lumos Works with CKB2021

Migration from CKB2019 to CKB2021

The latest [email protected]+ has been released. To work with CKB2021, lumos has made some changes:

  • extra_hash replaces uncles_hash when (de)serializing a block
  • a new hash_type, data1, was added for VM version selection
  • a new full address format

Install

npm install @ckb-lumos/lumos

New Full Address Format

The CKB2021 hard fork will use a new full address format, which is already supported in the latest version of lumos.
It is important to note that addresses generated using the new full address format are not the same as the old ones.

If your app is still using the old full address format but working with CKB2021 without VM version selection, try migrating like this:

- import { generateAddress } from '@ckb-lumos/helpers'
+ import { encodeToAddress } from '@ckb-lumos/helpers'
...
- generateAddress(...)
+ encodeToAddress(...)
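For the same lock script, the two helpers produce different strings; a minimal sketch (the lock script value is a placeholder):

import { generateAddress, encodeToAddress } from "@ckb-lumos/helpers";
import { Script } from "@ckb-lumos/base";

declare const lockScript: Script;

// Deprecated pre-CKB2021 format:
const oldAddress = generateAddress(lockScript);
// New CKB2021 full address format:
const newAddress = encodeToAddress(lockScript);
// oldAddress !== newAddress, even though both encode the same script.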

[feature request] Support OmniLock in common-script

Motivation

OmniLock is a widely used lock on the CKB chain. It would be better to support OmniLock so that users can easily construct an OmniLock transfer transaction.

// omniLock.ts
export type OmniLockFromInfo = {
  address?: string;
  pubkeyAndType?: {
    pubkey: string;
    type: "ETH" | "SECP256K1"
  }
}
export async function transfer(
  txSkeleton: TransactionSkeletonType,
  fromInfo: OmniLockFromInfo,
  toAddress: Address,
  capacity: BIish,
  ...
): Promise<TransactionSkeletonType> 

Preview

const { omniLock } = require("@ckb-lumos/common-scripts")
const { sealTransaction } = require("@ckb-lumos/helpers")

let txSkeleton = TransactionSkeleton({ cellProvider: indexer })

txSkeleton = await omniLock.transfer(
  txSkeleton,
  fromInfos, // OmniLockFromInfo
  "ckb1qyqrdsefa43s6m882pcj53m4gdnj4k440axqdt9rtd",
  BigInt(3500 * 10 ** 8),
)

// Signing messages will fill in `txSkeleton.signingEntries`.
txSkeleton = await omniLock.prepareSigningEntries(
  txSkeleton
)

// Then you can sign messages in order and get signatures.
// Call `sealTransaction` to get a transaction.
const tx = sealTransaction(txSkeleton, signatures)

Packages Affected

  • packages/common-scripts

If you use Secp256k1Transfer to transfer to a CKB testnet address, the transfer succeeds, but changing it to a mainnet address reports an error. Please help

I used the provided example to try to transfer CKB. Transfers on the test network can succeed, though sometimes an error is reported; when switching to the main network, the transfer always reports an error. Please take a look, thank you.

Mainnet returns the correct mainnet balance.

import {
  Indexer,
  helpers,
  Address,
  Script,
  RPC,
  hd,
  config,
  Cell,
  commons,
  core,
  WitnessArgs,
  toolkit,
  BI,
  CellCollector
} from "@ckb-lumos/lumos";
import CKB from "@nervosnetwork/ckb-sdk-core"

import { values } from "@ckb-lumos/base";
const { ScriptValue } = values;

export const { AGGRON4,LINA } = config.predefined;
console.log(AGGRON4,"AGGRON4____")

const RPC_NETWORK = LINA


// https://mainnet.ckb.dev 
// https://testnet.ckb.dev

const CKB_RPC_URL = "https://mainnet.ckb.dev/rpc";

const CKB_INDEXER_URL = "https://mainnet.ckb.dev/indexer";
const rpc = new RPC(CKB_RPC_URL);
const indexer = new Indexer(CKB_INDEXER_URL, CKB_RPC_URL);

type Account = {
  lockScript: Script;
  address: Address;
  pubKey: string;
};
export const generateAccountFromPrivateKey = (privKey: string): Account => {
  const pubKey = hd.key.privateToPublic(privKey);
  console.log(pubKey,"pubKey___")
  const args = hd.key.publicKeyToBlake160(pubKey);
  console.log(args,"args___")

  const template = RPC_NETWORK.SCRIPTS["SECP256K1_BLAKE160"]!;
  console.log(template,"template_____")
  const lockScript = {
    code_hash: template.CODE_HASH,
    hash_type: template.HASH_TYPE,
    args: args,
  };
  const address = helpers.generateAddress(lockScript, { config: RPC_NETWORK });
  console.log(address,"address____")
  return {
    lockScript,
    address,
    pubKey,
  };
};

export async function capacityOf(address: string): Promise<BI> {
  const collector = indexer.collector({
    lock: helpers.parseAddress(address, { config: RPC_NETWORK }),
  });

  console.log(collector,"collector___")

  let balance = BI.from(0);
  console.log(balance,"balance___")

  for await (const cell of collector.collect()) {
    balance = balance.add(cell.cell_output.capacity);
    console.log(cell.cell_output.capacity,"cell.cell_output.capacity_____")
  }

  return balance;
}

interface Options {
  from: string;
  to: string;
  amount: string;
  privKey: string;
}

// amount, from: fromAddr, to: toAddr, privKey
export async function transfer(options: Options): Promise<string> {
  let txSkeleton = helpers.TransactionSkeleton({});
  const fromScript = helpers.parseAddress(options.from, { config: RPC_NETWORK });
  const toScript = helpers.parseAddress(options.to, { config: RPC_NETWORK });
  console.log(txSkeleton,"txSkeleton____")
  console.log(fromScript,"fromScript___")
  console.log(toScript,"toScript___")
  

  // additional 0.001 ckb for tx fee
  // the tx fee could be calculated from the tx size
  // this is just a simple example

  const neededCapacity = BI.from(options.amount).add(100000);
  console.log(neededCapacity,"neededCapacity_____")
  let collectedSum = BI.from(0);
  const collected: Cell[] = [];
  const collector = indexer.collector({ lock: fromScript, type: "empty" });
  for await (const cell of collector.collect()) {
    collectedSum = collectedSum.add(cell.cell_output.capacity);
    collected.push(cell);
    if (collectedSum.gte(neededCapacity)) break; // BI values must be compared with .gte(), not >=
  }

  if (collectedSum.lt(neededCapacity)) { // BI values must be compared with .lt(), not <
    throw new Error("Not enough CKB");
  }

  const transferOutput: Cell = {
    cell_output: {
      capacity: BI.from(options.amount).toHexString(),
      lock: toScript,
    },
    data: "0x",
  };


  const changeOutput: Cell = {
    cell_output: {
      capacity: collectedSum.sub(neededCapacity).toHexString(),
      lock: fromScript,
    },
    data: "0x",
  }


  txSkeleton = txSkeleton.update("inputs", (inputs) => inputs.push(...collected));
  txSkeleton = txSkeleton.update("outputs", (outputs) => outputs.push(transferOutput, changeOutput));
  txSkeleton = txSkeleton.update("cellDeps", (cellDeps) =>
    cellDeps.push({
      out_point: {
        tx_hash: RPC_NETWORK.SCRIPTS.SECP256K1_BLAKE160.TX_HASH,
        index: RPC_NETWORK.SCRIPTS.SECP256K1_BLAKE160.INDEX,
      },
      dep_type: RPC_NETWORK.SCRIPTS.SECP256K1_BLAKE160.DEP_TYPE,
    })
  );
  

  const firstIndex = txSkeleton
    .get("inputs")
    .findIndex((input) =>
      new ScriptValue(input.cell_output.lock, { validate: false }).equals(
        new ScriptValue(fromScript, { validate: false })
      )
    );
  if (firstIndex !== -1) {
    while (firstIndex >= txSkeleton.get("witnesses").size) {
      txSkeleton = txSkeleton.update("witnesses", (witnesses) => witnesses.push("0x"));
    }
    let witness: string = txSkeleton.get("witnesses").get(firstIndex)!;
    const newWitnessArgs: WitnessArgs = {
      /* 65-byte zeros in hex */
      lock:
        "0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    };
    if (witness !== "0x") {
      const witnessArgs = new core.WitnessArgs(new toolkit.Reader(witness));
      const lock = witnessArgs.getLock();
      if (lock.hasValue() && new toolkit.Reader(lock.value().raw()).serializeJson() !== newWitnessArgs.lock) {
        throw new Error("Lock field in first witness is set aside for signature!");
      }
      const inputType = witnessArgs.getInputType();
      if (inputType.hasValue()) {
        newWitnessArgs.input_type = new toolkit.Reader(inputType.value().raw()).serializeJson();
      }
      const outputType = witnessArgs.getOutputType();
      if (outputType.hasValue()) {
        newWitnessArgs.output_type = new toolkit.Reader(outputType.value().raw()).serializeJson();
      }
    }
    witness = new toolkit.Reader(
      core.SerializeWitnessArgs(toolkit.normalizers.NormalizeWitnessArgs(newWitnessArgs))
    ).serializeJson();
    txSkeleton = txSkeleton.update("witnesses", (witnesses) => witnesses.set(firstIndex, witness));
  }

  console.log(await commons.common.prepareSigningEntries(txSkeleton),"commons.common.prepareSigningEntries____")

  txSkeleton = await commons.common.prepareSigningEntries(txSkeleton);
  const message = txSkeleton.get("signingEntries").get(0)?.message;
  const Sig = hd.key.signRecoverable(message!, options.privKey);
  const tx = helpers.sealTransaction(txSkeleton, [Sig]);

  console.log(tx,"tx______")
  // return ""
  const hash = await rpc.send_transaction(tx, "passthrough");
  console.log("The transaction hash is", hash);

  return hash;
}

(screenshots of the errors omitted)

Migrate helpers and common-scripts into TypeScript

Right now lumos is written in pure JavaScript, but as the logic grows more and more complicated, the maintenance burden rises with it. We might want to move the two more complicated packages, helpers and common-scripts, into TypeScript to leverage internal type checking.

Notice the fundamental design principle of lumos is not changed: it is designed for both TypeScript and JavaScript usage, and no security should be affected when a developer chooses JavaScript over TypeScript. What we are talking about here is simply relying on TypeScript for internal type checking.

[idea] Transaction validator for P2PKH signing message

Motivation

CKB contracts are very flexible and can theoretically support any form of HMAC, so signature verification for any blockchain can be supported on CKB. Therefore, CKB can use other blockchains' wallets as CKB wallets, e.g. Ethereum, Cardano, etc.

The user experience of these wallets is excellent: the transaction is displayed when signing. For example, when we transfer an ERC20 token, MetaMask shows the recipient address, the token type, and the token amount, because MetaMask is able to recognize ERC20 transactions. However, MetaMask doesn't recognize CKB transactions, so we can only use an API like personal_sign to sign the hashed transaction, and hashed transactions are unreadable for humans.

Idea

We can provide a validateSigningMessage method to check digest == hash(tx | witness); this helps the user verify that a transaction has not been tampered with.

type HashAlgorithm = 
  | 'ckb-blake2b-256'
  | 'ckb-blake2b-160' 
  | 'sha256' 
  | 'ripemd-160' 
  | 'sha3'
  | '...'
  | ((message: BytesLike) => Uint8Array);

interface Preimage {
  rawTransaction: RawTransaction;
  // witness
  additionalPayload: BytesLike;
}

declare function validateSigningMessage(
  preimage: Preimage,
  messageForSigning: BytesLike,
  options?: {
    hashAlgorithm: HashAlgorithm;
  }
): boolean
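A hypothetical usage sketch of the proposed API (rawTx, witness, and messageForSigning are placeholders):

declare const rawTx: RawTransaction;
declare const witness: BytesLike;
declare const messageForSigning: BytesLike;

// Recompute digest = hash(tx | witness) and compare it with the message
// the wallet was asked to sign; a mismatch means the transaction shown to
// the user is not the one actually being signed.
const ok = validateSigningMessage(
  { rawTransaction: rawTx, additionalPayload: witness },
  messageForSigning,
  { hashAlgorithm: "ckb-blake2b-256" }
);

if (!ok) {
  throw new Error("signing message does not match the transaction");
}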

Wallet gateway

// TODO a wallet gateway
type SignatureAlgorithms = 
  | 'eth_personal_sign' 
  | 'eth_signTypedData_v4' 
  | 'cardano_personal_sign'
  | '...';

type PubkeyLike =
  // public key 
  | BytesLike 
  // public key hash
  | { hashAlgorithm: HashAlgorithm, pubkeyHash: BytesLike }

Todo - Explain additionalPayload

Client-side TX Generation

Tl;dr:

  • With a few changes in Lumos, we could do TX generation client-side, accessing indexed cells via a simple server.
  • This indexer server could become a standard infrastructure component, like a node.
  • This way dApp developers wouldn’t have to make their own server if their app didn't have intensive bandwidth or compute requirements. It also increases decentralization and could make learning easier.

Client-side TX Generation in Lumos

I'm thinking about reducing development work and increasing decentralization for the dApps that are built with the Lumos framework, especially for people just getting familiar with the Nervos platform.

Lumos, as currently designed, encourages each dApp developer to write a custom server that contains the bulk of the logic for their dApp. This is because Lumos assumes that the entity handling TX generation is the same entity running the indexer.

I'll show how the flow of a transaction currently looks in lumos, then show a model where things happen in a heavier client, freeing the dApp developer from having to write, run, and maintain a second application (the custom server). The lack of a custom server intermediary also potentially improves decentralization.

Current Flow: Tx Generation on Server

The flow of a user action looks something like this:

  • The user expresses an intent to perform an action on the client.
  • The client asks the dApp server to generate a TX, in response to this action.
  • The dApp server does this, using a locally running indexer to get cells.
  • The server forwards the raw TX to the client.
  • The client acquires signatures from the wallet.
  • The TX + signatures are forwarded to the server for sending to the blockchain.
  • The server or client may also want to keep track of the transaction hash to see the TX status on-chain. This can be forwarded from the server.

(diagram: current model)

Additionally, read requests (such as for cells or balances) are routed through the server.

Alternative: TX Generation on Client

The dApp client could do TX generation on its own, if it had access to an indexer to query for cells. A simple, stateless server running the Lumos indexer could provide a source to query directly for cells. In this model, you can think of the indexer as an infrastructure service, akin to a node, that can be run by anyone. This stands in contrast to a custom server.

In this model, an action flow would be:

  • The user signals an action on the client.
  • The client generates an appropriate transaction, using cells queried from the indexer server.
  • The client gathers required signatures from the wallet.
  • The client sends the transaction directly to a CKB node.
  • The client can track the status of the transaction via a subscription to a CKB node.

(diagram: alternative model)

The client could also collect cells for displaying balances from an indexer, or perhaps get this information from the wallet for the active user.

Performance

I don't expect bandwidth or compute requirements to increase significantly for a large percentage of dApps (compared to a web2 app with similar amounts of data, or to the dApp server model). Some applications that deal with a lot of data at once, such as analytics platforms, or that require intensive TX generation, will likely benefit hugely from a dApp server.

Caching information

Derived information is essential for users to make sense of what’s happening on CKB. There should be a client-side caching layer that’s easy to set up with this model.

  • Gather certain cells on load to form an initial data set, such as user balances.
  • Keep track of changes, and apply them to the existing data, as new blocks arise.
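A minimal sketch of such a cache, assuming some subscription mechanism delivers the net capacity change per new block (computing that delta is out of scope here; note that cellOutput is named cell_output in older lumos versions):

import { BI } from "@ckb-lumos/bi";
import type { Cell } from "@ckb-lumos/base";

// Client-side balance cache: build the initial balance from indexed
// cells, then apply per-block deltas as new blocks arrive.
class BalanceCache {
  private balance = BI.from(0);

  // 1. Gather cells on load to form the initial data set.
  async init(cells: AsyncIterable<Cell>): Promise<void> {
    for await (const cell of cells) {
      this.balance = this.balance.add(cell.cellOutput.capacity);
    }
  }

  // 2. Keep track of changes as new blocks arise.
  applyBlockDelta(delta: BI): void {
    this.balance = this.balance.add(delta);
  }

  current(): BI {
    return this.balance;
  }
}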

Open Transactions: A Caveat

For dApps that involve open transactions, there will still need to be some sort of aggregator (custom service) that holds state about the current transaction fragments.

  • The dApp owner can provide this service, because they care about the app functioning.
  • This could also be more decentralized with IPFS-hosted data + incentivized aggregators.

HD wallet manager

This issue will consist of 2 tasks:

  • Research for a proper way of managing secrets
  • HD wallet manager in lumos

Fail to compile when using indexer

env:
os: macos big sur 11.2.3
node: 14.15.0
version: @ckb-lumos/indexer@^0.18.0-rc1

My project was created via create-react-app. When I ran the following code:

const { Indexer } = require("@ckb-lumos/indexer");
const indexer = new Indexer("http://127.0.0.1:9114", "/tmp/indexed-data");
indexer.startForever();

it produced the following error:

(screenshot of the error omitted)

Suggestions for new indexer APIs

Based on my understanding, the Lumos indexer currently has two main APIs for querying live cells and all transactions. As reported in the issue, when there is a huge volume of transactions under a lock script, it takes a very long time for Lumos to process and respond. In that case, it took more than 2 hours on my machine to return all the transactions under a lock script.

Because of the need to flexibly query transactions and cells, either live or dead, applications such as Neuron would still need to cache the data in a certain way while utilizing Lumos as the basic indexing engine, which helps maintain the full index with a small disk space footprint and stream the data to the client side based on lock script.

Therefore, to fulfill the requirements for certain applications, such as Neuron, some new APIs will be needed to work around the performance issues.

Below are some suggestions for new Lumos APIs that I think would give applications more flexibility to efficiently process the necessary transactions and to customize queries that are easier to divide and conquer. Please feel free to comment or suggest other solutions. (A TypeScript sketch of these APIs follows the list.)

  1. #listenByLockScript provides an event-sourcing API for applications to listen to new transactions under a lock script via WebSocket.

This would allow applications to process new transactions as soon as possible, without needing to poll and check whether new transactions under the monitored lock scripts have been recorded on-chain. It simplifies the codebase in the application layer while avoiding the unnecessary performance overhead of polling, which would need to check block numbers or compare transaction counts between the indexer and the local cache.

In addition to the new-transactions event, it would also be helpful to emit a forked event when a fork occurs, notifying the application layer so it can perform the necessary rollbacks in its local caches.

  2. #getTotalTransactionsCountByLockScript allows fetching the total number of transactions under a lock script.

This will be useful when applications want to compare with the local cache and check whether it is necessary to sync new transactions into their cache after being disconnected from the network for a while.

  3. #getTransactionsByLockScript(offset, fetchSize) supports pagination when fetching transactions under a lock script. With the #getTotalTransactionsCountByLockScript suggested above, applications can customize fetching patterns, either in parallel batches or one page of a certain size at a time.

  4. #getTransactionsByLockScript(fromBlock, toBlock) allows querying the transactions under a lock script within a block number range.
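A rough TypeScript sketch of the shape these APIs could take (names and signatures are illustrative only):

import type { Script, Transaction } from "@ckb-lumos/base";

interface SuggestedIndexerAPIs {
  // 1. Event sourcing over WebSocket, with fork notifications.
  listenByLockScript(
    lock: Script,
    handlers: {
      onTransaction: (tx: Transaction) => void;
      onForked: (forkedBlockNumber: string) => void;
    }
  ): void;

  // 2. Total number of transactions under a lock script.
  getTotalTransactionsCountByLockScript(lock: Script): Promise<number>;

  // 3. Paginated fetch.
  getTransactionsByLockScript(
    lock: Script,
    offset: number,
    fetchSize: number
  ): Promise<Transaction[]>;

  // 4. Fetch within a block number range.
  getTransactionsByLockScript(
    lock: Script,
    range: { fromBlock: string; toBlock: string }
  ): Promise<Transaction[]>;
}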

SQL indexer based on Rust

Right now we do have a JS-based SQL indexer; the problem is that it might not be the best way to extract all the computing power of a machine. We might want to build a different indexer powered by Rust, so we can better leverage native code to speed up block processing while enjoying the benefits of a modern multi-core machine.

Order transactionsCollector#collect by block number

I just noticed the transactions returned from transactionsCollector#collect are currently not ordered by block number.

If this observation is valid, do you think it makes sense to return the transactions from this API ordered by block number, especially since the upcoming support of offset will make page jumps feasible?

Documentation for newcomer

Motivation

As a newcomer to CKB, I'd like a primer on CKB and Lumos. It would be helpful to have a document that tells me how to use Lumos to interact with CKB.

How

It would be very helpful to have a tutorial, which could be structured like this:

  • Creating a wallet
  • Claiming the test CKB from the test network faucet
  • Checking the CKB balance
  • Sending my first transaction, which could be a simple CKB transfer (in fact, for the UTxO model, the transfer is not as intuitive as for the account model, so it is not quite as simple)
  • Creating a token (similar to ERC20)
  • Transferring token to Alice
  • Checking Alice's token balance

Further split lock and type in TransactionCollector

Right now TransactionCollector uses the same lock and type queries as CellCollector. While these are enough for cells, the granularity is too coarse for transactions. As a result, we want to split lock and type into the following queries (a sketch of the options follows the list):

  • input_lock: transactions whose input cells use specified lock
  • output_lock: transactions whose output cells use specified lock
  • input_type: transactions whose input cells use specified type
  • output_type: transactions whose output cells use specified type.
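A sketch of what the split query options could look like (field names taken from the list above):

import type { Script } from "@ckb-lumos/base";

interface TransactionQueryOptions {
  input_lock?: Script;  // transactions whose input cells use the specified lock
  output_lock?: Script; // transactions whose output cells use the specified lock
  input_type?: Script;  // transactions whose input cells use the specified type
  output_type?: Script; // transactions whose output cells use the specified type
}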

Account model support

We already have an account model design in CKB, which is used to build polyjuice. Here we plan to integrate it with lumos, so lumos can enjoy a simple account-layer model for dapps that need this feature.

Query with argsLen

I am trying to query cells using the argsLen filter with the following script:

const lockScript = {
    code_hash: '0x5c5069eb0857efc65e1bca0c07df34c31663b3622fd3876c876320fc9634e2a8',
    hash_type: 'type',
    args: '0x405ee3f9b00b2df390ff04669967c70c90914c2b'
};

const cellCollector = new CellCollector(indexer, {
    lock: lockScript,
    argsLen: 20
});

const cells = [];
for await (const cell of cellCollector.collect()) {
    cells.push(cell);
}

But it is not able to locate the cells that are already on-chain. https://explorer.nervos.org/aggron/transaction/0x6b6a67af7b9401b937e6985e30c0b74afbf1bd3bf758161be6b9f7584678261c#0

Is it due to incorrect query parameters, or something else?

transfer Lumos to ckb-js org

The ckb-js org will be a collection of JavaScript-based toolkits for CKB.
The target users of Lumos are JavaScript developers, so we will migrate Lumos to the ckb-js org.

GitHub will automatically redirect to the new URL after the repo is transferred, so we should avoid creating a new Lumos repo under the nervosnetwork org.

Before the transfer, we need to

  • create a branch and replace the GitHub repo URL from nervosnetwork/lumos to ckb-js/lumos in Lumos

After the transfer, we need to

  • Check that Codecov is working correctly
  • Check that the CI is working correctly
  • Migrate the website
  • Enable GitHub Projects to manage Lumos

[feature request] Parsing molecule schema to codec directly

Motivation

As a dApp developer, I often explore transactions in the CKB explorer to learn how a dApp works, but script.args/cell.data/transaction.witness are serialized and shown as hex strings, and most of this data is serialized with molecule. It would be very helpful to have a tool that let me input the schema and these hex strings to decode them, so I could understand the transaction.

How

We can offer a new API

type CodecRecord = Record<string, AnyCodec>;

declare function parseMolecule<
  Ref extends CodecRecord = CodecRecord, 
  Ret extends CodecRecord = CodecRecord
>(
  // molecule schema string
  mol: string, 
  // referenceable codec for mol, e.g. codecs.numbers, base.blockchain
  options?: { refs: Ref }
): Ret;
// language=molecule
const mol = `
array RGB (Uint8, 3);

struct Styled {
  background: RGB,
  color: RGB,
}
`;

import { number } from '@ckb-lumos/codec'

const codecs = parseMolecule(mol, { refs: { ...number } /* referenceable codecs */ })

After implementing this, we can offer a tool on the Lumos website to help users decode molecule binaries.
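For example, the generated codecs could then decode hex data copied from the explorer (a sketch of the proposed behavior, not an existing API):

// `Styled` is generated from the schema above; unpack turns the hex
// string back into a structured object.
const styled = codecs.Styled.unpack("0x112233445566");
// -> { background: [0x11, 0x22, 0x33], color: [0x44, 0x55, 0x66] }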


Cell/transaction notification

Right now, lumos only supports querying for indexed cells; if one wants to know whether new cells matching a certain query have been created, the only way is to periodically poll for changes.

Here we want to design a new notification mechanism for notifying when a new cell/transaction is created. We were thinking about providing the following 2 APIs for the lumos indexer (including both the RocksDB indexer and the SQL indexer):

class Indexer {
  // existing methods are omitted

  subscribeForCells(queries: QueryOptions): EventEmitter<Cell>;
  subscribeForTransactions(queries: TransactionCollectorOptions): EventEmitter<Transaction>;
}

See here for details on EventEmitter; note that the actual type definition for EventEmitter might differ slightly. It is shown for illustration purposes only.

When a new cell or transaction is indexed, a notification will be sent to the subscriber via the EventEmitter interface.

Notice this is not a reliable queue: if no EventEmitter is attached, the indexer will not wait for one to be attached. The user of this feature should always run a collector to fetch the latest data when a new event is received.
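A hypothetical usage sketch of the proposed API (the event name is illustrative):

// Subscribe for cells matching a lock script; on each notification,
// re-run a collector to fetch the authoritative latest data.
const emitter = indexer.subscribeForCells({ lock: lockScript });
emitter.on("changed", async () => {
  const collector = indexer.collector({ lock: lockScript });
  for await (const cell of collector.collect()) {
    // process the latest cells
  }
});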

Transaction collector bugfix

Hello! I am currently using lumos to develop a Nervos wallet and I found a bug in the transaction collector of the ckb-indexer package, which I think I have fixed correctly. I will explain here when it happens and what I did to fix it.

I also created a PR fixing this issue: Transaction collector input fix

I stumbled upon this bug while using the transaction collector to fetch the transactions of my test address ckt1qyqdlj9dyxgh3gc8lughae8zlemqe5vyzznsnmhy7w, more precisely because of the transaction 0x87c3586cc91ec7a1a97407456dba0adb34c5781c6031cb71bfb175939af4e6c6.

The file in which the error was thrown is transaction_collector.ts.
An error would be thrown saying "cannot read lock of undefined". As I will explain, I think I found the root of the bug. It originated in this part of the code:

121    if (txIoTypeInputOutPointList.length > 0) {
122      await services
123        .requestBatch(this.CKBRpcUrl, txIoTypeInputOutPointList)
124        .then((response: GetTransactionRPCResult[]) => {
125          response.forEach((item: GetTransactionRPCResult) => {
126            const itemId = item.id.toString();
127            const [cellIndex, transactionHash] = itemId.split("-");
128            const output: Output =
129              item.result.transaction.outputs[parseInt(cellIndex)];
130            const targetTx = transactionList.find(
131              (tx) => tx.transaction.hash === transactionHash
132            );
133            if (targetTx) {
134              targetTx.inputCell = output;
135            }
136          });
137        });
138    }
139
140    //filter by ScriptWrapper.argsLen
141    transactionList = transactionList.filter(
142      (transactionWrapper: TransactionWithIOType) => {
143        if (
144          transactionWrapper.ioType === "input" &&
145          transactionWrapper.inputCell
146        ) {
147          return this.isCellScriptArgsValid(transactionWrapper.inputCell);
148        } else {
149          const targetCell: Output =
150            transactionWrapper.transaction.outputs[
151              parseInt(transactionWrapper.ioIndex)
152            ];
153          return this.isCellScriptArgsValid(targetCell);
154        }
155      }
156    );

The problem was that line 153 would call isCellScriptArgsValid with targetCell undefined, and the function would then fail when reading the lock property of that targetCell. This is due to two things. First, transactionList contained more than one entry of type input with the same transaction hash but different input indexes; as a result, line 130 only finds the first entry with that hash and does not populate the inputCell field of all matching entries. Later on, because of the if on line 143, an entry with no inputCell (even though it was of input type) would fall into the else branch and read transactionWrapper.transaction.outputs[2]: it was looking for the input with index 2, but it looked among the outputs, and there are only 2 outputs instead of 3, as you can see in the transaction I linked earlier (0x87c3586cc91ec7a1a97407456dba0adb34c5781c6031cb71bfb175939af4e6c6).

The solution is short and easy to implement:

105    //get input cell transaction batch
106    const txIoTypeInputOutPointList: any[] = [];
107    transactionList.forEach((transactionWrapper) => {
108      if (transactionWrapper.ioType === "input") {
109        const targetOutPoint: OutPoint =
110          transactionWrapper.transaction.inputs[
111            parseInt(transactionWrapper.ioIndex)
112          ].previous_output;
113        const id =
114          targetOutPoint.index + "-" + transactionWrapper.transaction.hash;
115        if (!txIoTypeInputOutPointList.some((txReq) => txReq.id === id)) {
116          txIoTypeInputOutPointList.push({
117            id,
118            jsonrpc: "2.0",
119            method: "get_transaction",
120            params: [targetOutPoint.tx_hash],
121          });
122        }
123      }
124    });
125    if (txIoTypeInputOutPointList.length > 0) {
126      await services
127        .requestBatch(this.CKBRpcUrl, txIoTypeInputOutPointList)
128        .then((response: GetTransactionRPCResult[]) => {
129          response.forEach((item: GetTransactionRPCResult) => {
130            const itemId = item.id.toString();
131            const [cellIndex, transactionHash] = itemId.split("-");
132            const output: Output =
133              item.result.transaction.outputs[parseInt(cellIndex)];
134            const targetTxs = transactionList.filter(
135              (tx) =>
136                tx.transaction.hash === transactionHash &&
137                tx.ioType === "input" &&
138                tx.transaction.inputs[parseInt(tx.ioIndex)].previous_output
139                  .index === cellIndex
140            );
141            targetTxs.forEach((targetTx) => (targetTx.inputCell = output));
142          });
143        });
144    }

First, on line 115, I added a check to not push the same request twice. Then, as you can see on line 134, we filter for all the entries with the same hash, with ioType input (so we do not set inputCell on output entries, where it is not needed), and with the same previous-output index, and then we set their inputCell.

I'd appreciate it if this could be merged, as it would improve the code in some instances.

[problem] production build of website

Problem:

Screenshot of the problem:

(screenshot omitted)

No clue for now; maybe it's a problem introduced by @ckb-lumos/molecule.

PR: #413
Code: https://github.com/zhangyouxin/lumos/tree/molecule

Steps to reproduce:

# checkout code
git remote add zhangyouxin-lumos [email protected]:zhangyouxin/lumos.git
git fetch zhangyouxin-lumos
git checkout zhangyouxin-lumos/molecule

# build lumos
yarn install --force
yarn clean
yarn build

# launch website on local machine
cd website
# yarn start works fine, visit http://localhost:3000/tools/molecule-parser to try it out.
# produces problem below
yarn build
yarn serve

Then visit http://localhost:3000/tools/molecule-parser and open the console to check the problem.

The same code works fine in a create-react-app environment.

[feature request] validator for codec when packing the object

Motivation

When I call codec.pack, especially when packing a large or nested object, it's hard to troubleshoot which part of the object is incorrect. It would be much easier to debug if pack could tell me exactly which part is wrong.

How

const RGB = array(Uint8, 3)

//       overflow
//         👇
RGB.pack([0xfff, 0xff, 0xff]) // [0] is incorrect, expected ..., actual ...

const RGB = array(Uint8, 3)
const Styled = struct({
  background: RGB,
  borderColor: RGB
}, ['background', 'borderColor']);

Styled.pack({
  //          overflow
  //             👇
  background: [0xfff, 0xff, 0x00],
  borderColor: [0x12, 0x34, 0x56],
}) // error: background.[0] is incorrect, expected ..., actual ...

Additional

.validate for Codec

Can we add a validatePack method to the codec to replace the validators in toolkit?

WitnessArgs.validatePack({...}) // throw an error if the param is invalid

https://github.com/nervosnetwork/lumos/blob/d5e7636736e22c3247d36b4160213e74ed23dd0e/packages/toolkit/src/validators.js#L378-L398

Unpack

For unpack, is there a better way to tell me what went wrong when unpack goes wrong?

Fee calculator based on fee rate

Right now, the payFee method provided by lumos only allows one to enter a fee directly. Ideally, we should be able to calculate the fee based on each transaction. Specifically, we can achieve this along the following path (a sketch of the fee arithmetic follows below):

  1. Generate a transaction from the TransactionSkeleton
  2. Serialize the transaction to molecule format, and obtain its size
  3. Based on a fee rate and the transaction size, we can generate the fee to pay
  4. Now invoke payFee to add the fee to pay

Notice that payFee might need to insert yet another input cell, creating a larger transaction than the original; this means we will probably have to compensate for the added part in step 2.
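A minimal sketch of steps 2 and 3, assuming the serialized transaction size in bytes is already known and the fee rate follows CKB's convention of shannons per 1000 bytes:

import { BI } from "@ckb-lumos/bi";

// fee = ceil(size * feeRate / 1000)
function calculateFee(txSizeInBytes: number, feeRate: BI): BI {
  const ratio = BI.from(1000);
  const base = BI.from(txSizeInBytes).mul(feeRate);
  const fee = base.div(ratio);
  // Round up so the effective fee rate is never below the requested one.
  return fee.mul(ratio).lt(base) ? fee.add(1) : fee;
}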

Build process on Windows produces an error

I built lumos on a Windows system, but when I run yarn build in the project root, it produces an error:

lerna ERR! yarn run build:js exited 1 in '@ckb-lumos/base'
lerna ERR! yarn run build:js stdout:
$ npm run build:old && babel --root-mode upward src/blockchain.ts --out-file lib/blockchain.js --source-maps

> @ckb-lumos/[email protected] build:old
> mkdir -p lib && cp -rp src/*.d.ts lib && cp -rp src/*.js lib

info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
lerna ERR! yarn run build:js stderr:
npm WARN config global `--global`, `--local` are deprecated. Use `--location=global` instead.
A subdirectory or file lib already exists.
Error occurred while processing: lib.
npm ERR! Lifecycle script `build:old` failed with error:
npm ERR! Error: command failed
npm ERR!   in workspace: @ckb-lumos/[email protected]
npm ERR!   at location: C:\Users\28013\Documents\GitHub\lumos-fork\packages\base
error Command failed with exit code 1.
lerna ERR! yarn run build:js exited 1 in '@ckb-lumos/base'
lerna WARN complete Waiting for 5 child processes to exit. CTRL-C to exit immediately.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: "build:js" exited with 1.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

I found the root cause: some Unix commands (like cp and mkdir) do not have the same CLI interface on Windows.

I will start a new PR for fixing it.

Uncaught error in transactionCollector#collect()

The cellCollector works great and fast. Awesome work!

I am trying to test the TransactionCollector to get all the transactions with a lock script. However, it throws an error as below:

(node:81064) UnhandledPromiseRejectionWarning: ReferenceError: RPC is not defined
    at new TransactionCollector (/Users/kata/repos/dev/nervos/test-lumos/node_modules/@ckb-lumos/indexer/lib/index.js:204:20)
    at /Users/kata/repos/dev/nervos/test-lumos/index.js:35:34
(node:81064) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:81064) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

I guess there is a compatibility issue in the lumos calls to the ckb-indexer.

Here is the test code:
https://github.com/katat/lumos-performance-test/blob/2afa9f043e37865f97c3e042933076b1d707aae1/index.js#L41

Native indexer not compatible with Electron's NODE_MODULE_VERSION

With @ckb-lumos/indexer imported, Electron throws the following error:

[22775:0604/163717.522194:INFO:CONSOLE(140)] "Uncaught Error: The module '/Users/kata/repos/dev/nervos/neuron/node_modules/@ckb-lumos/indexer/native/index.node'
was compiled against a different Node.js version using
NODE_MODULE_VERSION 72. This version of Node.js requires
NODE_MODULE_VERSION 75. Please try re-compiling or re-installing
the module (for instance, using `npm rebuild` or `npm install`).", source: electron/js2c/asar.js (140)

I tried electron-rebuild to see if it can rebuild the native indexer module against the compatible NODE_MODULE_VERSION 75 used by our current Electron version, but it seems electron-rebuild is not able to recognize the indexer module and rebuild it.

Other native modules, such as sqlite/leveldb/node-sass, are rebuilt automatically when electron-rebuild is executed, so I guess the indexer native module is missing some standard Node configuration that electron-rebuild needs to recognize and rebuild it.

Do you know any ways around this?

How does lumos generate immature transactions?

An immature transaction is one whose inputs/cell_deps reference immature cells, or whose header_deps reference immature headers.

As I understand it, the current design of lumos can't generate immature transactions. For a case like a payment channel, one side may need to generate a transaction depending on an immature header, then send it to the other side to partially sign; it would be inconvenient to do this in lumos.

Is there any plan to support this feature?

Custom indexer powered by animagus

Due to its schema design, the indexer included in lumos can only provide certain kinds of queries. While this is enough for most dapps, there will always be dapps that require different indexing solutions.

A custom indexer solves this problem: by leveraging the AST model designed in animagus, we can enable developers to index any kind of data in CKB as they wish. This will greatly improve the flexibility of lumos.

Hanging for transactionCollector#collect to finish all the yields

I am running a test that queries all the transactions with the lock args 0xdde7801c073dfb3464c7b1f05b806bb2bbb84e99 using the transactionCollector.collect() generator, but it seems to take a very long time to finish. I started the process 10+ minutes ago, and it still has not finished.

Meanwhile, I noticed the CKB node seems to be busy handling the requests initiated by lumos, as it is utilizing 100% of one CPU core.

Any ideas on what is happening behind the scenes?

Code: https://github.com/katat/lumos-performance-test/blob/2afa9f043e37865f97c3e042933076b1d707aae1/index.js#L41

Support querying live cells by type hash

At the moment, to transact with a UDT, users have to enter the typeScript.args in the UI, and typeScript.codeHash is hardcoded to target the sUDT. In the future, both args and codeHash will be dynamic variables, so we shouldn't hardcode them at that stage. On the other hand, requiring users to enter both variables is cumbersome and error-prone. Ideally, users should only need to enter a unique identifier to locate the UDT they want to operate on.

Therefore, for the UDT use case in Neuron, we need to be able to locate a type script with a type hash, which is the identifier used on the Explorer sUDT page. Being able to locate cells using a type hash is not only friendlier to users, but also more consistent with the identifier used in the Explorer.

Thus, I am wondering if it is possible to have the lumos indexer support querying by type hash, since it would be very useful for both Neuron and third-party clients.

As this may involve changes to the ckb-indexer, I guess @quake may be able to provide some guidelines here as well.
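For reference, a sketch of how a type hash identifies a type script (computeScriptHash is from the utils of @ckb-lumos/base; treat the exact import path as an assumption):

import { utils, Script } from "@ckb-lumos/base";

// The type hash is the blake2b-256 hash of the serialized type script: a
// single 32-byte value identifying { code_hash, hash_type, args }.
declare const sudtTypeScript: Script;
const typeHash: string = utils.computeScriptHash(sudtTypeScript);
// A query API could then accept `typeHash` instead of the full script.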

Using common-scripts to deposit in DAO

Hello! I am currently developing a Nervos wallet and I found what I think is an issue related to transaction creation.
I found this issue while trying to deposit in the DAO from the address ckt1qyqdlj9dyxgh3gc8lughae8zlemqe5vyzznsnmhy7w, which owns some tokens and already has money deposited in the DAO.

The first thing I wanted to ask is whether this is the correct way of doing it, i.e. whether it is correct to use the same address to store both the CKB and the assets, since if I had used a different address this would not happen. If I should be doing it differently, I would like to know how; but in case all of this should work anyway, I will explain the issue below:

When using the common-scripts deposit function, the injected capacity looks like this:

"inputs": [
    {
      "cell_output": {
        "capacity": "0x174876e800",
        "lock": { ... },
        "type": {
          "args": "0x",
          "code_hash": "0x82d76d1b75fe2fd9a27dfbaa65a039221a380d76c926f378d3f81cf3e7e13f2e",
          "hash_type": "type"
        }
      },
      "data": "0x785a440000000000",
      "out_point": {
        "index": "0x0",
        "tx_hash": "0x805168dafc0c10ae31de2580541db0f5ee8ff53afb55e39a5e2eeb60f878553f"
      },
      "block_number": "0x44a5f1"
    },
    {
      "cell_output": {
        "capacity": "0x746a528800",
        "lock": { ... },
        "type": {
          "args": "0x",
          "code_hash": "0x82d76d1b75fe2fd9a27dfbaa65a039221a380d76c926f378d3f81cf3e7e13f2e",
          "hash_type": "type"
        }
      },
      "data": "0x0000000000000000",
      "out_point": {
        "index": "0x0",
        "tx_hash": "0x6d22619e2866924f585b440543927bb4d21b8bdfac6e415fa156fc66f6a97af0"
      },
      "block_number": "0x44d525"
    },
    {
      "cell_output": {
        "capacity": "0xba43b7400",
        "lock": { ... },
        "type": {
          "args": "0x",
          "code_hash": "0x82d76d1b75fe2fd9a27dfbaa65a039221a380d76c926f378d3f81cf3e7e13f2e",
          "hash_type": "type"
        }
      },
      "data": "0x0000000000000000",
      "out_point": {
        "index": "0x0",
        "tx_hash": "0x87c3586cc91ec7a1a97407456dba0adb34c5781c6031cb71bfb175939af4e6c6"
      },
      "block_number": "0x44f48a"
    },
    {
      "cell_output": {
        "capacity": "0xe8d4a51000",
        "lock": { ... },
        "type": null
      },
      "data": "0x",
      "out_point": {
        "index": "0x0",
        "tx_hash": "0x62e2b992ca219b27e7c508e65c0bacd92403a467a213cbc1d148d09cd8a553ae"
      },
      "block_number": "0x44f639"
    }
  ]

As you can see, the first three injected inputs have a type, and it is actually the DAO type. This means these are cells with deposited or withdrawn CKB, which of course cannot be used to deposit into the DAO.
In my case, as the lock type is secp256k1_blake160, its injectCapacity function is used to inject capacity. There, the transferCompatible function collects the cells to set the inputs.
I've seen that changing this part of the code (secp256k1_blake160.ts):

422    const cellCollector = cellProvider.collector({
423      lock: fromScript,
424    });

to:

422    const cellCollector = cellProvider.collector({
423      lock: fromScript,
424      type: "empty",
425    });

makes it work correctly. What I am not sure about is whether this would break other parts of the code. Maybe this could be parameterized depending on who calls it?
Having type: "empty" ensures the capacity comes from free cells rather than from cells with DAO, sUDT, or NFT types, which may or may not be the right default.
Is it possible I am doing something wrong and could achieve this in another way?

Thanks in advance for your time!!

Failed to install lumos indexer

Hi there,

I am trying to install the npm package @ckb-lumos/indexer with the following command:

npm i @ckb-lumos/indexer

But I got the following error:

$ npm i @ckb-lumos/indexer

> [email protected] install /Users/kata/repos/dev/nervos/test-lumos/node_modules/xxhash
> node-gyp rebuild

  CXX(target) Release/obj.target/hash/src/hash.o
In file included from ../src/hash.cc:1:
In file included from ../../nan/nan.h:55:
In file included from /Users/kata/Library/Caches/node-gyp/12.16.2/include/node/uv.h:66:
In file included from /Users/kata/Library/Caches/node-gyp/12.16.2/include/node/uv/unix.h:30:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/socket.h:78:10: fatal error:
      'net/net_kev.h' file not found
#include <net/net_kev.h>
         ^~~~~~~~~~~~~~~
1 error generated.
make: *** [Release/obj.target/hash/src/hash.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/Users/kata/.nvm/versions/node/v12.16.2/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack     at ChildProcess.emit (events.js:310:20)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
gyp ERR! System Darwin 19.4.0
gyp ERR! command "/Users/kata/.nvm/versions/node/v12.16.2/bin/node" "/Users/kata/.nvm/versions/node/v12.16.2/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/kata/repos/dev/nervos/test-lumos/node_modules/xxhash
gyp ERR! node -v v12.16.2
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
npm WARN [email protected] No description
npm WARN [email protected] No repository field.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/kata/.npm/_logs/2020-05-22T23_49_05_390Z-debug.log

My development environment:

  • macOS 10.15.4
  • nodejs v12.16.2
  • xcodebuild 11.4.1

Do you know the possible causes of this error?

Thanks

Potential performance issue of codec

When I tried to remove core.js from packages/base, the SerializeXXX functions were reimplemented with packages/codec, but there might be a potential performance issue when serializing the big Block in https://github.com/nervosnetwork/lumos/blob/d91a96f970/packages/ckb-indexer/tests/blocks_data.json: it costs about 2 minutes to pack this sample data on my machine and 3 minutes on GitHub CI.

This issue may cause the e2e tests in packages/ckb-indexer to take about 5 minutes to run.

env:

  • os: macos Monterey 12.3.1
  • cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
  • memory: 16 GB

Templates and CLI generator

When lumos becomes more mature, we might want to provide different templates for different use cases; hopefully we can also organize them into a CLI generator that eases bootstrapping work.

[Help wanted] Installation failed

Trying to install lumos but got an error; maybe some dependencies are missing?

yarn install v1.22.11
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[4/5] Linking dependencies...
[5/5] Building fresh packages...
[-/4] ⠠ waiting...
[-/4] ⠠ waiting...
[4/4] ⠠ @ckb-lumos/indexer
error /root/test/lumos/node_modules/@ckb-lumos/indexer: Command failed.
Exit code: 1
Command: node scripts/install_binary.js || npm run build-release
Arguments:
Directory: /root/test/lumos/node_modules/@ckb-lumos/indexer
Output:
node-pre-gyp args: install --fallback-to-build=false
node-pre-gyp info it worked if it ends with ok
node-pre-gyp info using [email protected]
node-pre-gyp info using [email protected] | linux | x64
node-pre-gyp WARN Using request for node-pre-gyp https download
node-pre-gyp info check checked for "/root/test/lumos/packages/indexer/native/index.node" (not found)
node-pre-gyp http GET https://github.com/nervosnetwork/lumos/releases/download/v0.16.0/lumos-indexer-node-v93-linux-x64.tar.gz
node-pre-gyp http 404 https://github.com/nervosnetwork/lumos/releases/download/v0.16.0/lumos-indexer-node-v93-linux-x64.tar.gz
node-pre-gyp ERR! install error
node-pre-gyp ERR! stack Error: 404 status code downloading tarball https://github.com/nervosnetwork/lumos/releases/download/v0.16.0/lumos-indexer-node-v93-linux-x64.tar.gz
node-pre-gyp ERR! stack     at Request.<anonymous> (/root/test/lumos/node_modules/node-pre-gyp/lib/install.js:142:27)
node-pre-gyp ERR! stack     at Request.emit (node:events:406:35)
node-pre-gyp ERR! stack     at Request.onRequestResponse (/root/test/lumos/node_modules/request/request.js:1059:10)
node-pre-gyp ERR! stack     at ClientRequest.emit (node:events:394:28)
node-pre-gyp ERR! stack     at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:621:27)
node-pre-gyp ERR! stack     at HTTPParser.parserOnHeadersComplete (node:_http_common:128:17)
node-pre-gyp ERR! stack     at TLSSocket.socketOnData (node:_http_client:487:22)
node-pre-gyp ERR! stack     at TLSSocket.emit (node:events:394:28)
node-pre-gyp ERR! stack     at addChunk (node:internal/streams/readable:312:12)
node-pre-gyp ERR! stack     at readableAddChunk (node:internal/streams/readable:287:9)
node-pre-gyp ERR! System Linux 4.15.0-88-generic
node-pre-gyp ERR! command "/usr/bin/node" "/root/test/lumos/node_modules/@ckb-lumos/indexer/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build=false"
node-pre-gyp ERR! cwd /root/test/lumos/packages/indexer
node-pre-gyp ERR! node -v v16.6.0
node-pre-gyp ERR! node-pre-gyp -v v0.14.0
node-pre-gyp ERR! not ok
404 status code downloading tarball https://github.com/nervosnetwork/lumos/releases/download/v0.16.0/lumos-indexer-node-v93-linux-x64.tar.gz
Exec error, code: 1, stderr:

> @ckb-lumos/[email protected] build-release
> neon build --release

neon ERR! spawn cargo ENOENT

Error: spawn cargo ENOENT
    at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
