oak-foundation / oak-blockchain


License: GNU General Public License v3.0

Rust 94.98% Shell 3.66% Dockerfile 0.64% Handlebars 0.55% Python 0.09% HTML 0.07%
blockchain substrate polkadot parachain smart-contract-platform oak-blockchain

oak-blockchain's Introduction

OAK(Onchain Autonomous Framework) is a unique blockchain built on Substrate framework with an event-driven execution model, autonomous transactions, and on-chain scheduling.

Introduction

OAK (Onchain Autonomous Framework) is equipped with a novel smart contract virtual machine that supports an event-driven execution model, enabling developers to build fully autonomous decentralized applications. By extending the current set of atomic operations (namely, the opcodes of the EVM), OAK introduces an innovative way for contracts to interact with each other. Contracts can emit signal events to which other contracts can listen. Once an event is triggered, the corresponding handler functions are automatically executed as a new type of transaction, the signal transaction. Applications built this way eliminate dependencies on unreliable components such as off-chain relay servers, which in turn significantly simplifies execution flow and avoids security risks such as man-in-the-middle attacks.

Benefits of OAK technology:

  • Secure automation
  • Easy integration
  • Built-in private oracle

Live Networks

  • Turing Staging: Rococo parachain (Jan 2022)
  • Turing Network: Kusama parachain (April 2022)
  • OAK Network: Polkadot parachain (target launch Q1 2023)

Documentation

Community

Run the blockchain

Binaries of the Turing Network can be found on the Releases page of this repo.

For instructions on setting up a Turing Network collator, please refer to our documentation on docs.oak.tech.

Develop

The best way to run our blockchain and debug its code is to build it from source. In this section we explain how to set up a local Rococo network with Turing and other parachains.

Build from source

Ensure you have Rust and the supporting software installed (see shell.nix for the latest known-working toolchain):

curl https://sh.rustup.rs -sSf | sh
# on Windows download and run rustup-init.exe
# from https://rustup.rs instead

rustup update nightly
rustup target add wasm32-unknown-unknown --toolchain nightly

# [!Important] Make sure the rustup default is compatible with your machine, for example, if you are building using Apple M1 ARM you need to run
rustup install stable-aarch64-apple-darwin
rustup default stable-aarch64-apple-darwin

You will also need to install the following dependencies:

  • Linux: sudo apt install cmake git clang libclang-dev build-essential
  • Mac: brew install cmake git llvm
  • Windows: download and install the pre-built Windows binaries of LLVM from http://releases.llvm.org/download.html

Install additional build tools:

cargo +nightly install --git https://github.com/alexcrichton/wasm-gc

Clone the OAK-blockchain code from GitHub:

git clone git@github.com:OAK-Foundation/OAK-blockchain.git

Then, build the code into binary:

cargo build --release --features turing-node --features dev-queue

To make local testing easy, build with the dev-queue feature flag, which allows a task to be put directly on the task queue instead of waiting until the next hour to be scheduled. This path is taken when the execution_times passed when scheduling a task equals [0].
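For illustration, below is a minimal TypeScript sketch of scheduling an immediate task against a dev-queue build with @polkadot/api. The extrinsic name scheduleDynamicDispatchTask and its argument shape are assumptions here; verify them against your node's metadata before use.

// Minimal sketch, assuming a Turing Dev node on ws://127.0.0.1:9946 and an
// automationTime.scheduleDynamicDispatchTask extrinsic (check your chain's metadata).
import { ApiPromise, WsProvider } from "@polkadot/api";
import { Keyring } from "@polkadot/keyring";

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider("ws://127.0.0.1:9946") });
  const alice = new Keyring({ type: "sr25519" }).addFromUri("//Alice");

  // With the dev-queue feature, executionTimes [0] puts the task straight on the queue.
  const call = api.tx.system.remarkWithEvent("hello from the task queue");
  const tx = (api.tx as any).automationTime.scheduleDynamicDispatchTask(
    { Fixed: { executionTimes: [0] } },
    call,
  );
  await tx.signAndSend(alice, ({ status }: any) => console.log(status.toString()));
}

main().catch(console.error);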

At this point, the binary of Turing Dev is built and located at ./target/release/oak-collator.

Build the relay chain

Turing Dev is a parachain and doesn’t produce blocks without a relay chain, so next we will need to clone Polkadot’s code and build a local Rococo relay chain.

First, find the compatible version of the relay chain’s code from this repo’s polkadot-parachain reference in ./runtime/turing/Cargo.toml, for example:

polkadot-parachain = { git = "https://github.com/paritytech/polkadot", default-features = false, branch = "release-v0.9.29" }

release-v0.9.29 is the version of the relay chain to run, so let’s build its source with the commands below.

git clone --branch release-v0.9.29 https://github.com/paritytech/polkadot
cd polkadot
cargo build --release

Build another parachain

Here we are using a copy of Mangata’s code as an example of another parachain.

First, clone the code and compile it with the mangata-rococo feature for the parachain:

git clone --branch automation-demo https://github.com/OAK-Foundation/mangata-node

cd mangata-node

cargo build --release --features mangata-rococo

The binary file is located at ./target/release/mangata-node.

Quickstart - run local networks with Zombienet

We have configured a network of 2 relay chain nodes, 1 Turing node and 1 Mangata node in zombienets/turing/mangata.toml, so the easiest way to spin up a local network is through the steps below.

  1. Clone and build the source of Zombienet:
    1. git clone https://github.com/paritytech/zombienet.git. It’s recommended to check out a stable release instead of using the code on the master branch. For example, the latest stable version tested is v1.3.63, which you can check out by calling git fetch --tags && git checkout v1.3.63.
    2. cd zombienet/javascript
    3. Make sure your node version is compatible with that in javascript/package.json, for example "node": ">=16".
    4. npm install
    5. npm run build
  2. After a successful build, you should be able to test it by running npm run zombie.
  3. Create an alias for the zombie program (on macOS). Since the actual command behind npm run zombie is node ./packages/cli/dist/cli.js, we can alias it by editing ~/.bash_profile: run vim ~/.bash_profile and add the line alias zombienet="node <your_absolute_path>/zombienet/javascript/packages/cli/dist/cli.js".
  4. Run source ~/.bash_profile to load the new ~/.bash_profile.
  5. cd into the OAK-blockchain folder, e.g. cd ../../OAK-blockchain.
  6. Spawn the zombienet with our config file: zombienet spawn zombienets/turing/mangata.toml.

Note that if you encounter issues with the above source-build approach on macOS, try downloading the zombienet-macos binary from the Zombienet Releases page and running ./zombienet-macos spawn zombienets/turing/mangata.toml.

The zombie spawn will run 2 relay chain nodes, 1 Turing node and 1 Mangata node, and set up an HRMP channel between the parachains.

Slo-mo - manually run local networks

In this section we will walk through the steps of manually running a local network with a Rococo relay chain, a Turing parachain and a Mangata parachain.

1. Launch a Rococo relay chain

First, navigate to the polkadot repo’s local folder.

With the binary built in ./target/release/polkadot, open up two terminal windows to run two nodes separately.

  1. Run node Alice in the first terminal window
    ./target/release/polkadot \
    --alice \
    --validator \
    --tmp \
    --chain ../OAK-blockchain/resources/rococo-local.json \
    --port 30333 \
    --ws-port 9944
  2. Run node Bob in the second terminal window
    # Bob (In a separate terminal)
    ./target/release/polkadot \
    --bob \
    --validator \
    --tmp \
    --chain ../OAK-blockchain/resources/rococo-local.json \
    --port 30334 \
    --ws-port 9945

At this point, your local relay chain network should be running. Next, we will launch a Turing Network node and connect it to the relay chain as a parachain.

2. Launch Turing Network as a parachain

Navigate to the OAK-blockchain repo’s local folder. The binary built is located at ./target/release/oak-collator.

Then, prepare two files, genesis-state and genesis-wasm, for parachain registration.

# Generate a genesis state file
./target/release/oak-collator export-genesis-state --chain=turing-dev > genesis-state

# Generate a genesis wasm file
./target/release/oak-collator export-genesis-wasm --chain=turing-dev > genesis-wasm

Finally, run the oak-collator binary:

./target/release/oak-collator \
--alice \
--collator \
--force-authoring \
--tmp \
--chain=turing-dev \
--port 40333 \
--ws-port 9946 \
-- \
--execution wasm \
--chain ./resources/rococo-local.json \
--port 30335 \
--ws-port 9977 

After running this command, you should see the node’s streaming log output.

Register Turing parachain on Rococo

  1. Navigate to the Local relay sudo extrinsic page
  2. Register your local parachain on the local relay chain by calling parasSudoWrapper.sudoScheduleParaInitialize (a programmatic sketch follows this list).
  3. Parameters:
    1. id: 2114
    2. genesisHead: switch on "File upload" and drag in the above generated genesis-state file.
    3. validationCode: switch on "File upload" and drag in the genesis-wasm file.
    4. parachain: Yes.
  4. Once submitted, you should be able to see id 2114 on the Parathreads tab, and after a short period on the Parachains tab.
  5. Once Turing is onboarded as a parachain, you should see the block number start to increase in a Turing explorer.
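As a minimal sketch, the same registration can be submitted programmatically with @polkadot/api, assuming a local relay node on ws://127.0.0.1:9944 with Alice as the sudo key and the genesis files generated above:

import { ApiPromise, WsProvider } from "@polkadot/api";
import { Keyring } from "@polkadot/keyring";
import { readFileSync } from "fs";

async function register() {
  const api = await ApiPromise.create({ provider: new WsProvider("ws://127.0.0.1:9944") });
  const alice = new Keyring({ type: "sr25519" }).addFromUri("//Alice"); // sudo on rococo-local

  // The exported files contain 0x-prefixed hex strings.
  const genesisHead = readFileSync("genesis-state", "utf8").trim();
  const validationCode = readFileSync("genesis-wasm", "utf8").trim();

  // Cast because the available pallets depend on the connected chain's metadata.
  const call = (api.tx as any).parasSudoWrapper.sudoScheduleParaInitialize(2114, {
    genesisHead,
    validationCode,
    parachain: true,
  });
  await api.tx.sudo.sudo(call).signAndSend(alice, ({ status }: any) => console.log(status.toString()));
}

register().catch(console.error);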

3. Launch Mangata as a parachain

This step is optional, as you could spin up another project (or another Turing Network node) as the second parachain; but for testing XCM functionality we use a different parachain, Mangata, as the example here.

Navigate to the mangata-node repo’s local folder. The binary built is located at ./target/release/mangata-node.

Next, prepare two files, genesis-state and genesis-wasm, for parachain registration.

# Generate a genesis state file
./target/release/mangata-node export-genesis-state  --chain=mangata-rococo-local-testnet > genesis-state

# Generate a genesis wasm file
./target/release/mangata-node export-genesis-wasm  --chain=mangata-rococo-local-testnet > genesis-wasm

Lastly, start the node:

./target/release/mangata-node --alice --collator --force-authoring --tmp --chain=mangata-rococo-local-testnet --port 50333 --ws-port 9947 -- --execution wasm  --chain ../OAK-blockchain/resources/rococo-local.json --port 30336 --ws-port 9978

Note that:

  • --chain=mangata-rococo-local-testnet is necessary for the chain config.
  • The relay chain config is the same as Turing’s, i.e. the file ../OAK-blockchain/resources/rococo-local.json.
  • Port numbers need to be different from those of Turing, otherwise there will be port collisions.

Up to this point the Mangata node is up and running, but not producing blocks yet. We will repeat the parachain onboarding process below to connect it to Rococo.

Register Mangata on Rococo

  1. Navigate to the Local relay sudo extrinsic page
  2. Register Mangata on the local Rococo by calling parasSudoWrapper.sudoScheduleParaInitialize.
  3. Parameters:
    1. id: 2110
    2. genesisHead: switch on "File upload" and drag in the above generated genesis-state file.
    3. validationCode: switch on "File upload" and drag in the genesis-wasm file.
    4. parachain: Yes.

Great, at this point you have completed all the steps for setting up local networks of Rococo, Turing and Mangata! To test XCM functionality, next refer to our guide on docs.oak.tech.

Contacts

Maintainers: OAK Development Team

If you have any questions, please ask our devs on Discord


OAK blockchain is licensed under the GPLv3.0 by the OAK Network team.

oak-blockchain's People

Contributors

12shivs, arrudagates, atomaka, chrisli30, github-actions[bot], imstar15, irsal, johnwhitton, joshorndorff, juniuszhou, justinzhou93, laurareesby, nnsw3, nvengal, rhuttman, riusricardo, satoshi-kusumoto, siddharthteli12, simonkraus, stanly-johnson, v9n, whalelephant


oak-blockchain's Issues

Separate docker publish operation from the existing release Github Action

Motivation

The current Release GitHub Action consists of multiple operations. Among them there’s a step for building and publishing an image to Docker Hub. The Docker image is for community collators to download and update their node program. Because Docker Hub doesn’t have a Draft status similar to that of a GitHub Release, we might not want users to see the published Docker image before the build is fully tested on Staging and Turing.

Suggested Solution
I suggest extracting the Docker Hub publish operation out of the release GitHub Action and creating a separate Action for it. That way we can publish a Docker image at the very end of the release procedure, after testing is completed.

Give me hash

I need the repo hash.

Upgrade node to support remote relay chain setup

Hello,

I tried to run the Turing collator with the --relay-chain-rpc-url flag to specify a remote relay chain RPC to use instead of the embedded one, but the node process ignores it and starts syncing the relay chain locally.

Here is the example setup:

ExecStart=/usr/local/bin/polkadot \
  --name Polkadotters \
  --base-path '/var/lib/turing' \
  --execution wasm \
  --wasm-execution Compiled \
  --chain turing \
  --collator \
  --force-authoring \
  --node-key-file /home/polkadot/node_key \
  --trie-cache-size 0 \
  --relay-chain-rpc-url ws://178.170.48.153:9944 \

Updating the dependencies to 0.9.32 or above should probably fix this: paritytech/cumulus#1585

When a price is set at 100, schedule a task with condition > 100 will trigger immediately

Actual

When the current price = 100, a task scheduled with condition > 100 will trigger immediately.

Conversely, if the same task is created when the price = 80, it is not triggered when the price moves to 100 (which is the correct behavior).


The condition is "gt", 100, shown in the encoded call of task creation:

0xc8051c73686962757961206172746873776170145752535452105553445458b02965000000000000000000000000086774046400000000000000000000000000000003010100411f03000003010100411febe1aee309491d00000000000000000089051200000c720beb3f580f0143f9cb18ae694cddb767161060850025a57a4f72a71bf475010040007c13030000000000000000000000000000000000000000000000000000000000a17e7ba271dc2cc12ba5ecf6d178bf818a6d76eb0000c16ff286230000000000000000000000000000000000000000000000000091037ff36ab5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000005446cff2194e84f79513acf6c8980d6e60747253000000000000000000000000000000000000000000000000018abef7846071c700000000000000000000000000000000000000000000000000000000000000020000000000000000000000007d5d845fd0f763cefc24a1cb1675669c3da62615000000000000000000000000e17d2c5c7761092f31c9eca49db426d5f2699bf000076b14ea3f010e8c0300076b3c552e020e8c13000c720beb3f580f0143f9cb18ae694cddb767161060850025a57a4f72a71bf475

Repro

This problem can be reproduced by https://github.com/OAK-Foundation/automation-price-demo

Problem in RPC Port configuration

In the latest client version v2.1.0 there is a problem in RPC port configuration.

The embedded relay chain always claims port 9944, even if you configure a different port for it.
Port 9944 should be available for the parachain, as it always has been.
Instead, the parachain spawns on a random port like 36261, where the RPC is not accessible from localhost.
The RPC returned the following error while trying to query:
Provided Host header is not whitelisted.

Steps to Reproduce

Start the node client in a minimal setup like the following command to reproduce the issue:

/path/to/oak-collator --collator --chain turing --rpc-port 9944 -- --rpc-port 9945

You will notice that the port configured for the parachain is assigned to the embedded relay chain instead.

  • Operating system: Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-89-generic x86_64)
  • Template version/tag: oak-collator 2.1.0-f38b3cd002f

Increase scheduler’s max allowed weight to unblock extrinsics via governance

Description

We recently found that the council.setMembers() extrinsic couldn’t execute successfully due to the scheduler’s permanentlyOverweight error during governance referendum execution. This issue started to happen after Turing’s v1.9.0 release, mostly due to the introduction of proofSize in the Weight struct.

The solution is to change the constant variable below so it’s updated during the next release.

Turing’s current calculation:
https://github.com/OAK-Foundation/OAK-blockchain/blob/0ebac6996c25d8d721bc34912adba39d5e595b6d/runtime/turing/src/lib.rs#L747

Shiden’s calculation for reference:
https://github.com/AstarNetwork/Astar/blob/c9afb955bbed7afb7158899737505a8c6cc3a2a2/runtime/shiden/src/lib.rs#L173

I think we can just increase the percentage, currently at 10%, to a higher value that approximately equates to a proof size of 2,048,000.

Steps to Reproduce

https://polkadot.js.org/apps/#/explorer/query/0xa66312c8b29966a47059aa63dc53cdb1f36b674c9b5183723520fd07be1bd4c7

Expected vs. Actual Behavior

A simple council.setMembers() call should pass and not encounter the scheduler.permanentlyOverweight error.

Environment

The issue was found on Turing Staging.

Attribute definition discrepancy in automationPrice.tasks vs. system events

When calling extrinsics and querying system events, I find it cumbersome to have two sets of names. For example, the events of automationPrice use who and taskId, but in data storage it becomes taskId and ownerId.

TaskScheduled Event
{ who, taskId }

automationPrice.tasks
{
ownerId: 6757gffjjMc7E4sZJtkfvq8fmMzH2NPRrEL3f3tqpr2PzXYq
taskId: 1549-0-1
...
}

I think we should define a standard for those names.

  1. who is not a good name, because both owner and scheduleAs are "who"s. We should simply call the creator of the task "owner".
  2. The "taskId" seems good so far.

A reminder: if we change "who" to "owner" in events, we will need to update automationTime and the data Insights service as well.

[OIP] Standardize Event and Error format for AutoCompoundDelegatorStake

OAK Improvement Proposal

Note: this proposal will be deprecated by #387, since we won’t need to create special Events such as AutoCompoundDelegatorStakeSucceeded anymore.

Motivation

The goal of this proposal is to create a standard for Event and Error definitions, in order to properly organize the Rust code and simplify the live debugging process. The current problem is that an Event such as TaskScheduled doesn’t carry sufficient information for developers to tell what type the event is and whom it is for. Although we can retrieve the full information from the original creation extrinsic with the taskId, that lookup is clumsy and unintuitive. With this proposal, we aim to unify the format of the task-related events and let them carry more meaningful information.

Suggested Solution

Extend default Rust structs such as the DispatchErrorWithPostInfo struct in Substrate.

For example,
The AutoCompoundDelegatorStake events are defined in automation-time/src/lib.rs. In order to standardize their format, I propose the changes below.

Success Events
Name: AutoCompoundDelegatorStakeSucceeded
Attributes:

  • who: the creator of the task
  • taskId: the id of the task
  • collator: the collator the auto-compound is delegated to
  • amount: the amount of compound staked in this execution

Error Events
Name: AutoCompoundDelegatorStakeFailed
Attributes:

  • who: the creator of the task
  • taskId: the id of the task
  • DispatchErrorWithData, in which the struct contains
    • message:
    • data: []

For example, if the failure reason is "not enough wallet balance to pay for execution fees", then message is Unable to pay fees for the next execution and the data vector contains only one item, a number (or string) with the value of <fee_amount_to_deduct>. The entire Event will look like the below:

Name: AutoCompoundDelegatorStakeFailed
Attributes:

  • who: the creator of the task
  • taskId: the id of the task
  • DispatchErrorWithData
    • message: Unable to pay fees for the next execution
    • data: [<fee_amount_to_deduct>]

Another example would be "no delegation to the specified collator", meaning the wallet doesn’t have an active delegation to the collator specified during task creation; it’s very likely that the wallet has unstaked from that collator. Then the message should be "Active delegation to collator not found", and the data vector would contain only one item, the wallet address of the collator. The entire Event will look like the below:

Name: AutoCompoundDelegatorStakeFailed
Attributes:

  • who: the creator of the task
  • taskId: the id of the task
  • DispatchErrorWithData
    • message: Active delegation to collator not found
    • data: [<collator_wallet_address>]
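To make the proposed format concrete, here is an illustrative TypeScript rendering of the event shapes above. These types mirror the proposal only; nothing here exists in the codebase yet.

// Illustrative shapes for the proposed events; not part of any existing API.
interface DispatchErrorWithData {
  message: string;   // e.g. "Unable to pay fees for the next execution"
  data: unknown[];   // e.g. [feeAmountToDeduct] or [collatorWalletAddress]
}

interface AutoCompoundDelegatorStakeSucceeded {
  who: string;       // the creator of the task
  taskId: string;
  collator: string;  // the collator the auto-compound is delegated to
  amount: bigint;    // the amount staked in this execution
}

interface AutoCompoundDelegatorStakeFailed {
  who: string;
  taskId: string;
  error: DispatchErrorWithData;
}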

Unit test of missed task queue needs a revamp

Description

The logic of test missed_tasks_removes_completed_tasks is incorrect.

First, we haven't tested the process of a task being missed. Can we change the weight of this block to make the weight limit very low, so a task would be missed?

Second, after the task is missed, it should be added to the missed queue. At this point, we should check the contents of the missed queue.

Lastly, in our code, if the next block has weight capacity, will it take tasks from the missed queue and try to execute them? We should check whether taskExecuted is present in the next block. The test case should not directly manipulate the missed task queue; instead, the task queue should be handled by the dev code.

Environment

Unit tests in dev environment.

delegatorState is missing for a specific delegator

parachainStaking.topDelegations for a specific collator returns a set of delegators.
parachainStaking.delegatorState for one of these delegators returns Unknown.
This issue happens only for this specific delegator.

block
0x2979563cd3619ef7307005ff7bd1bd0230533d93b56087bcc04ac662edf4bf6c

collator: 684xNMHJrNNqVT26AB19mPApS2FRAMVspkgALq7WgBAvohho

parachainStaking.topDelegations: Option<PalletParachainStakingDelegations>
{
  delegations: [
    {
      owner: 69ZNQdLopee2ZNDL5xkSKu9QpkQq7jJGgiw9tFjDZVtoLRim
      amount: 1,179,800,000,000,000
    }
    {
      owner: 6AMsXyV1CYc3LMTk155JTDGEzbgVPvsX9aXp7VXz9heC3iuP
      amount: 14,500,000,000,000
    }
    {
      owner: 6BDaap5XLPtgkkkWCBXPzaiRG5BL29E7RMnpGJ685Y4hvvq9
      amount: 12,450,000,000,000
    }
    {
      owner: 67AHmcoC8Rp68b5SzDhf5fpzHmRTAcuEctvciPDEyiA57ons
      amount: 12,170,000,000,000
    }
    {
      owner: 69BMX8xZaWevFhDssG3RiqqDyzrx5uQgVo9XztV5VjMNimr8
      amount: 11,190,000,000,000
    }
    {
      owner: 681wSDNULDV3pduPvWgLxKMvGc3pgiwCimp1dRr9qVtSrfk1
      amount: 10,000,000,000,000
    }
    {
      owner: 6Bu7uoHFwGELfVAxtepJa8xgCevVjtsVdK4nLqnDWEJpPm2E
      amount: 5,110,000,000,000
    }
    {
      owner: 66XWAcwQHXccJcFFY2raBGSzM6EVdCFKiJ5HjEYN7uq5CKq9
      amount: 5,000,000,000,000
    }
    {
      owner: 69mcbb4o9DZCe9hQFxC5jcoHRcZb2ffVeJ14tdqTMTZ7W4E2
      amount: 4,200,000,000,000
    }
    {
      owner: 68sdNVNYXGyrtmiahL2C9v8CfyhjBonpPaeSsgLuS9CEzSmL
      amount: 2,601,204,399,877
    }
    {
      owner: 67kgfmY6zpw1PRYpj3D5RtkzVZnVvn49XHGyR4v9MEsRRyet
      amount: 1,200,000,000,000
    }
    {
      owner: 66ZfMmWxtPXpnY5Rmj9LmzNjVN2veVyzdAqLjrvJiqdxqsS1
      amount: 1,180,000,000,000
    }
    {
      owner: 6BagJrpC13DX6NWJYNqvcaPrgAtfm4L3zf2ZJb8LwBd2y1hK
      amount: 1,100,000,000,000
    }
    {
      owner: 69vGFK17eUH8WjbLs8F59J958Ra6tB1gAiaHEk7JL5mtX8r4
      amount: 1,000,000,000,000
    }
    {
      owner: 679zFkKB2TMwVD7kcQhR7NM8oM2sSenJ5ZTWu91CAt3PF8px
      amount: 840,000,000,000
    }
    {
      owner: 69TJrMGnkqqzyLP2XDK86wRrEnUrKUA7FVbvwM5xemr58uHt
      amount: 510,000,000,000
    }
    {
      owner: 68jM33DoSS7QsYfa75oRizbcyTHs28urFCmWQaEkwZ1QSHPd
      amount: 500,000,000,000
    }
    {
      owner: 6B7ACD3jQPuC2TBC55K7QTedy82TgVGP5H72DyRigkXdkieD
      amount: 500,000,000,000
    }
  ]
  total: 1,263,851,204,399,877
}

delegator:
68jM33DoSS7QsYfa75oRizbcyTHs28urFCmWQaEkwZ1QSHPd

parachainStaking.delegatorState: Option<PalletParachainStakingDelegator>
<unknown>
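A repro sketch of the two queries above with @polkadot/api, pinned to the block hash from this report (the parachainStaking calls are resolved from chain metadata at runtime, hence the casts):

import { ApiPromise, WsProvider } from "@polkadot/api";

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider("wss://rpc.turing.oak.tech") });
  const at = await api.at("0x2979563cd3619ef7307005ff7bd1bd0230533d93b56087bcc04ac662edf4bf6c");

  // The top delegations list includes 68jM33...SHPd with a 500,000,000,000 delegation.
  const top = await (at.query as any).parachainStaking.topDelegations(
    "684xNMHJrNNqVT26AB19mPApS2FRAMVspkgALq7WgBAvohho",
  );
  console.log(top.toHuman());

  // Yet delegatorState for that same account comes back as None (<unknown>).
  const state = await (at.query as any).parachainStaking.delegatorState(
    "68jM33DoSS7QsYfa75oRizbcyTHs28urFCmWQaEkwZ1QSHPd",
  );
  console.log(state.isSome ? state.toHuman() : "<unknown>");
}

main().catch(console.error);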

Not connecting to the default bootnodes in chainspec during initial sync

After migrating our Turing node, we couldn't sync the chain from scratch due to an inability to connect to any peers.

Here's the systemd service file we're using:

ExecStart=/usr/local/bin/turing \
  --name Polkadotters \
  --base-path '/var/lib/turing' \
  --wasm-execution Compiled \
  --chain turing \
  --collator \
  --force-authoring \
  --trie-cache-size 0 \
  -- \
  --state-pruning 100 \
  --blocks-pruning 100

Finally, we managed to resolve this issue with the help of a fellow collator (props to Sik | crifferent.de) who set up a bootnode for us to connect to.

Commit hash?

Question


What is the first commit hash that was ever made in the main oak GitHub repo?

The encoded hash in TaskTriggered does not contain the correct payload for XCM tasks

Steps to Reproduce
Take the task 4353492-0-1 on Turing Network as an example.

Run the below query in Insights,

SELECT block_height, id, timestamp, module, method, docs, data, extrinsic_id
FROM turing.events 
where data::text LIKE '%67gFKvqXtE5bx1E98uV6yhR9wosktNQQZyf5HdncHzvCqbUB%' AND module NOT IN ('balances','parachainStaking')
order BY block_height DESC;

You will see the TaskTriggered event, which contains the encodedCall

{'who': '67gFKvqXtE5bx1E98uV6yhR9wosktNQQZyf5HdncHzvCqbUB', 'taskId': '4353492-0-1', 'condition': {'type': 'time', 'timestamp': '1706074800'}, 'encodedCall': '0x1f00c60e71bd0f2e6d8832fea1a2d56091c48493c78801006808338e4789af9cef672b34662a6f74d0486439e4612aa5f4aa8b057f283a81c6de8aaf72cd471300008a5d78456301'}

Expected vs. Actual Behavior

The encodedCall 0x1f00c60e71bd0f2e6d8832fea1a2d56091c48493c78801006808338e4789af9cef672b34662a6f74d0486439e4612aa5f4aa8b057f283a81c6de8aaf72cd471300008a5d78456301 is not a hash that can be decoded by Turing. It is not the direct payload of the Turing task; instead, it is the XCM payload nested inside the task payload. In this case, it is a call for Moonriver: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fmoonriver.api.onfinality.io%2Fpublic-ws#/extrinsics/decode

We expect encodedCall to hold the direct payload of the task, one that can be decoded by Turing. That payload contains the XCM information indicating the destination of the message. That's how we would know the XCM payload is for Moonriver, so we can then use the Moonriver adapter to decode the inner encodedCall.
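For reference, a small sketch of decoding such a hex blob against a chain's metadata; as the issue notes, this particular payload only decodes against Moonriver, not Turing:

import { ApiPromise, WsProvider } from "@polkadot/api";

async function decodeCall(endpoint: string, hex: string) {
  const api = await ApiPromise.create({ provider: new WsProvider(endpoint) });
  // createType("Call", ...) decodes using the connected chain's metadata.
  const call: any = api.registry.createType("Call", hex);
  console.log(`${call.section}.${call.method}`, call.args.map((a: any) => a.toHuman()));
}

// Decodes on Moonriver; on Turing it fails or mis-decodes, which is the bug reported above.
decodeCall(
  "wss://moonriver.api.onfinality.io/public-ws",
  "0x1f00c60e71bd0f2e6d8832fea1a2d56091c48493c78801006808338e4789af9cef672b34662a6f74d0486439e4612aa5f4aa8b057f283a81c6de8aaf72cd471300008a5d78456301",
).catch(console.error);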

automationPrice.deleteAsset doesn’t seem to delete an asset properly

Steps to Reproduce

  1. Submit an extrinsic to delete an asset.
    Encoded call hash: 0xdfb4ba0568a5b12c4abe449f9d7f155bbfea51ed96b923103f98a7926e3cce91
  2. The AssetDelete event shows up, but with only one asset in its details.
  3. The original asset is still in chain storage.
  4. Although the delete extrinsic succeeded, re-creating the same asset gets an "automationPrice.AssetAlreadyInitialized" error during extrinsic submission.
    initAsset encoded call hash: 0xc8011c73686962757961206172746873776170145752535452105553445412040c720beb3f580f0143f9cb18ae694cddb767161060850025a57a4f72a71bf475

Clean up Github Actions

Vinh, since you separated the Action scripts recently, I’m assigning this item to you. Several GitHub Actions scripts seem to be deprecated. Could you check them to see if my requirements make sense?

Regarding the renaming, I think "Publish Turing Docker Image" is good enough. In addition, it would also be great to include the Docker image name oaknetwork/turing in its title or description.

Hello

Question


Should there be a queryFeeDetails RPC endpoint established for automationPrice?

In automationTime, there’s a specific RPC interface for querying fees. Although the fee logic in automationPrice is simple at the moment, I think we still need an interface that returns the actual fee deducted for scheduling + execution. Please check the code of query_fee_details in automationTime to see if a similar interface is required for automationPrice.

For example, this is the interface for developers, auto-generated by the chain’s RPC metadata.
https://github.com/OAK-Foundation/oak.js/blob/22ae77e28aa40b9f69158cc750a2c5841dc34d4c/packages/types/src/automationTime.ts#L26C4-L32C7
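For comparison, here is a hedged sketch of calling the existing automationTime RPC from TypeScript. Custom RPCs must be declared in the provider options; the parameter and result type definitions below are assumptions, so check oak.js packages/types for the real ones.

import { ApiPromise, WsProvider } from "@polkadot/api";

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider("ws://127.0.0.1:9946"),
    types: {
      // Assumed result shape; see oak.js packages/types for the actual definition.
      AutomationTimeFeeDetails: { executionFee: "Balance", xcmpFee: "Balance" },
    },
    rpc: {
      automationTime: {
        queryFeeDetails: {
          description: "Query fees for scheduling and executing a task",
          params: [{ name: "extrinsic", type: "Bytes" }],
          type: "AutomationTimeFeeDetails",
        },
      },
    },
  });

  // Extrinsic name assumed; verify against your chain's metadata.
  const xt = (api.tx as any).automationTime.scheduleDynamicDispatchTask(
    { Fixed: { executionTimes: [0] } },
    api.tx.system.remarkWithEvent("fee probe"),
  );
  // Cast because the custom RPC is not in the generated typings.
  const fees = await (api.rpc as any).automationTime.queryFeeDetails(xt.toHex());
  console.log(fees.toHuman());
}

main().catch(console.error);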

Each entry in priceRegistry needs a unique id

From a developer’s perspective, every data type in on-chain storage needs a unique id; otherwise there’s no way to know whether a Price has been updated. From the events alone, the client can’t tell which version of a data entry it is looking at.

Price Triggering

A placeholder to keep track of progress and the TODOs required before the feature can be considered complete.

  • Add benchmark and update weight for automation-price pallet
  • Cleanup TODO
  • Handle delete asset extrinsic
  • Improve error handling
  • Optimize price trigger performance: only scan/check assets whose price actually moved in the same direction as the triggering function.
  • General code cleanup
  • Simplify the interface API (there will be a Google doc to discuss these): https://docs.google.com/document/d/1JIKL0BrKAZTKWzeNdoMrtb9OJ6elDmePCpDBaUlVejY/edit

Add relevant events similar to Automation Time:

  • TaskCancelled
  • TaskCompleted
  • TaskExecuted
  • TaskExecutionFailed

Sub PR:

#420

The last number of a taskId is different from the eventId of TaskScheduled

Description

Nova Wallet tested this on Turing Staging. However, I found that although the taskId is 2181496-2-3, the TaskScheduled event id is 5, different from the last number of the taskId, which is 3.

All events regarding task 2181496-2-3 indicate the TaskScheduled eventId is 3.

The block 2181496, where the TaskScheduled eventId is 5.

Action

  1. We need to re-create this block and reproduce this issue using a unit test.
  2. We need to review the dev code that determines the eventId of task creation, to see whether the last number of the taskId can be accurately calculated.

Hash commit?


Clean up assetRegistry defined for Dev chain

Let’s clean up the definitions of tokens in AssetRegistry in the dev environment.

Requirements:

  1. Add the Relay Chain token to assetRegistry
  2. Remove SDN and RSTR from the array, because they are never used in the dev environment, right?

The current values I see from running a dev chain:

[
  [
    [
      5
    ]
    {
      decimals: 18
      name: Moonbase
      symbol: DEV
      existentialDeposit: 1
      location: {
        V3: {
          parents: 1
          interior: {
            X2: [
              {
                Parachain: 1,000
              }
              {
                PalletInstance: 3
              }
            ]
          }
        }
      }
      additional: {
        feePerSecond: 10,000,000,000,000,000,000
        conversionRate: null
      }
    }
  ]
  [
    [
      1
    ]
    {
      decimals: 18
      name: Mangata Rococo
      symbol: MGR
      existentialDeposit: 0
      location: {
        V3: {
          parents: 1
          interior: {
            X2: [
              {
                Parachain: 2,110
              }
              {
                GeneralKey: {
                  length: 4
                  data: 0x0000000000000000000000000000000000000000000000000000000000000000
                }
              }
            ]
          }
        }
      }
      additional: {
        feePerSecond: 416,000,000,000
        conversionRate: null
      }
    }
  ]
  [
    [
      2
    ]
    {
      decimals: 18
      name: Rocstar
      symbol: RSTR
      existentialDeposit: 10,000,000,000,000,000
      location: {
        V3: {
          parents: 1
          interior: {
            X1: {
              Parachain: 2,006
            }
          }
        }
      }
      additional: {
        feePerSecond: 416,000,000,000
        conversionRate: null
      }
    }
  ]
  [
    [
      0
    ]
    {
      decimals: 10
      name: Native
      symbol: TUR
      existentialDeposit: 100,000,000
      location: {
        V3: {
          parents: 0
          interior: Here
        }
      }
      additional: {
        feePerSecond: 416,000,000,000
        conversionRate: null
      }
    }
  ]
  [
    [
      3
    ]
    {
      decimals: 18
      name: Shiden
      symbol: SDN
      existentialDeposit: 10,000,000,000,000,000
      location: {
        V3: {
          parents: 1
          interior: {
            X1: {
              Parachain: 2,007
            }
          }
        }
      }
      additional: {
        feePerSecond: 416,000,000,000
        conversionRate: null
      }
    }
  ]
  [
    [
      4
    ]
    {
      decimals: 18
      name: Shibuya
      symbol: SBY
      existentialDeposit: 10,000,000,000,000,000
      location: {
        V3: {
          parents: 1
          interior: {
            X1: {
              Parachain: 2,000
            }
          }
        }
      }
      additional: {
        feePerSecond: 416,000,000,000
        conversionRate: null
      }
    }
  ]
]

Expired tasks stay in the task queue and don't get cleaned

The expiredAt value works well, as tasks are not executed once the block time passes it. However, expired tasks stay in the tasks storage result and do not get cleaned up. It’s not high-priority, but we definitely need a means to clean them up.

[OIP] Allow users to decide the situations to retry a Recurring task

(This proposal is being implemented in #383 )

Motivation

First of all, it's important to note that the discussion in this issue applies only to the Recurring Schedule of timeAutomation, not the Fixed Schedule. However, the concept can later be extended to apply to priceAutomation as well.

Currently, the logic that determines whether a task should be retried is defined in our code. To enhance scalability and delegate design decisions to developers, I propose adding an array called abortErrors to the task creation extrinsics.

By default, when the abortErrors array is empty, a Recurring task will continue retrying upon encountering any error. For instance, if a parachainStaking.delegatorBondMore fails due to "InsufficientBalance," it indicates that staking has not generated enough reward to compound yet. In this scenario, the task will fire a TaskRescheduled event and retry at the next scheduled time, such as every day. On the other hand, if a user decides to add the string "InsufficientBalance" to the abortErrors array, the task will instead fire a TaskNotRescheduled event and cease to occur altogether.

The abortErrors array should look like:
abortErrors: ["DelegationFailed", "InsufficientBalance"]

It's essential to consider that errors have priorities, meaning some errors take precedence over others and will terminate the task execution, subsequently blocking other errors from firing.

For instance, suppose the abortErrors parameter includes only a "DelegationFailed" error. In that case, the task should not reschedule when delegation execution fails due to this specific error. However, there might be other execution failures, such as "InsufficientBalance," which occur earlier in the code and are not listed in the abortErrors array. Consequently, the task will continue to reschedule until a "DelegationFailed" error is fired, as it takes precedence in the code flow.

This behavior is normal in the code. As we progress, we will gain more clarity on different situations and provide guidelines to dapp developers on setting the abortErrors parameters optimally, ensuring smooth task executions based on their specific requirements.

By empowering dapp developers to specify errors in the abortErrors array, we offer significant flexibility in determining the conditions that should lead to task abortion. This approach allows for more customization and control over task behavior based on specific requirements.
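Purely as an illustration of the proposed parameter (none of this exists on-chain yet), the shape in TypeScript terms might look like:

// Illustrative only; the abortErrors parameter is a proposal, not an existing API.
type AbortErrors = string[]; // pallet error names that should stop rescheduling

interface RecurringScheduleParams {
  frequency: number;        // seconds between executions, e.g. 86400 for daily
  abortErrors: AbortErrors; // [] = retry on every error (the proposed default)
}

// With this setting, a delegatorBondMore failure due to InsufficientBalance fires
// TaskNotRescheduled and the task stops recurring, per the proposal above.
const example: RecurringScheduleParams = {
  frequency: 86400,
  abortErrors: ["DelegationFailed", "InsufficientBalance"],
};
console.log(example);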

Additional Information

There are a few things we plan to improve in the future, including:

  1. Providing dapp developers with best practices for defining the abortErrors for different types of tasks.
  2. Adding special handling logic to tasks to abort by default. In the current design, no error will abort a task by default.

Hash commit?


Apparently wrong version downloads from releases v2.1.4

The release v2.1.4 binary and source code downloads apparently provide v2.1.0-e07a5345066.

Steps to Reproduce

wget https://github.com/OAK-Foundation/OAK-blockchain/releases/download/v2.0.0/oak-collator
chmod +755 oak-collator
./oak-collator --version

oak-collator 2.1.0-e07a5345066

Calling system::version() extrinsic through https://polkadot.js.org/apps returns:

2.1.0-e07a5345066

OR

git clone https://github.com/OAK-Foundation/OAK-blockchain.git
cargo build --release --features turing-node --features dev-queue
./target/release/oak-collator --version

oak-collator 2.1.0-e07a5345066

and running the Moonbeam EVM smart contract automation demo chokes on Error: The scheduleAs parameter should not be empty, which should not be present in v2.1.4.

Expected vs. Actual Behavior

Expected: oak-collator reports as, and operates as, v2.1.4.
Actual: reports as oak-collator 2.1.0-e07a5345066 and does not include the merged features of v2.1.4.

Related issue
Moonbeam demo cannot work with required Turing version

Bag


First hash oak


GLIBC not found for Turing node on v2.0.2

Question

Hi there,
I'm from Dwellir, a node provider running a node on the Turing chain. When I tried upgrading our node from v2.0.1 to v2.0.2 just now, I got this error when starting the node client after the upgrade (the oak-collator client is renamed polkadot in our infra):

/home/polkadot/polkadot: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/polkadot/polkadot)
/home/polkadot/polkadot: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/polkadot/polkadot)
/home/polkadot/polkadot: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/polkadot/polkadot)
/home/polkadot/polkadot: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /home/polkadot/polkadot)
/home/polkadot/polkadot: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/polkadot/polkadot)

I couldn't find any info in the latest release notes about any new requirements for this version. My temporary solution was to downgrade to v2.0.1 again, which lets the node run fine, but I do have questions:

  • Is this new requirement an intended change?
  • What should we install to make the v2.0.2 client work?

Cheers,
Jakob

TaskCancelled fails silently upon a no-update error

When a non-existent taskId, for example undefined, is passed into cancelTask, it currently fails silently, creating confusion about whether the submitted extrinsic executed or not.

It should emit a regular event indicating a failed cancellation. ProxyExecuted is a good example: when an error happens, it returns the error instead of the actual result.

Block 3288719 decoded incorrectly

Description

Block 3288719 events cannot be correctly decoded. Event 0 is wrongly decoded as 'TaskExecutionFailed' instead of 'TaskTriggered'

Steps to Reproduce


  1. Go to Polkadot.js apps
  2. Try to query block 3288719 https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.turing.oak.tech#/explorer/query/3288719
  3. See events decode error

Expected vs. Actual Behavior

Events decoded without errors

Environment

--

Logs, Errors or Screenshots

See https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.turing.oak.tech#/explorer/query/3288719

Additional Information

Subscan has a correct version of this block https://turing.subscan.io/block/3288719?tab=event
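A small repro sketch that reads the events at block 3288719 over RPC; on an affected client, event 0 decodes as TaskExecutionFailed (or decoding throws) instead of TaskTriggered:

import { ApiPromise, WsProvider } from "@polkadot/api";

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider("wss://rpc.turing.oak.tech") });
  const hash = await api.rpc.chain.getBlockHash(3288719);
  const events = await (await api.at(hash)).query.system.events();
  events.forEach((record: any, i: number) =>
    console.log(i, `${record.event.section}.${record.event.method}`),
  );
}

main().catch(console.error);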

Ensure xcmHandler.xcmSent event is verified in all XCM automation test cases

In tests such as trigger_tasks_completes_some_xcmp_tasks, we didn’t verify the success event of the xcmHandler pallet because there’s no mock for that event. To fix that, we need to create a mock deposit_event of xcmHandler.xcmSent and verify that it fires, to mimic the production scenario.

[OIP] Solve TaskScheduled and TaskRescheduled duplicate problem

OAK Improvement Proposal

Motivation

Looking at the task history on Turing Network, specifically the task with the hash 0xcae095bc2b84249e4ca65671759808e0bb10680fb4fedd1ad9e358e4e084de39, it is evident that after each execution both TaskScheduled and TaskRescheduled events contain identical data.

Another issue is that the event of task triggering is intertwined with execution and has no indication of whether to reschedule for the next execution. To address this, I propose the following standard for task events.

For all tasks,
After the creation extrinsic, a TaskScheduled needs to be fired. ✅

During task execution,

  1. A TaskTriggered event should be fired to indicate the successful triggering of a task.
    Event Name: TaskTriggered
    Attributes:

    • taskId:
    • who:
    • condition: { type:time, value: 1689903366 }
    • CancelUponError: whether to cancel a task upon execution error. A map of errorCode to true or false; by default all errors are true, meaning the task will be cancelled upon error.
      { "errorCode": true | false }, for example, {"InsufficientBalance": false}
  2. Next, the actual logic should run and generate an event for its result, such as parachainStaking.delegationIncreased. This event should resemble the ones triggered by manual signing and should not be related to task events. The event can indicate success or failure, marking the execution result of the task. The execution event index should always be equal to the event index of the above TaskTriggered + 1. Currently, events have different names like XcmpTaskSucceeded and SuccessfullyAutoCompoundedDelegatorStake. We do not need to create special events like these; instead, we can rely on the existing events in the underlying pallets.

  3. If the task is a recurring schedule, a TaskRescheduled event should subsequently be fired, indicating that a new task has been paid for and scheduled for the next run. If the user's wallet does not have sufficient fees to pay for the next execution, an error will occur at this step, but it will not stop the execution mentioned above.

These changes offer several benefits:

  1. The firing of a task execution is clearly indicated through TaskTriggered, serving as a notification that the triggering function is functioning properly.
  2. The success or failure of the execution is independent of the triggering notification mentioned above.
  3. The execution logic is separated from the rescheduling process. For a fixed schedule, TaskRescheduled will not fire, while for recurring tasks, TaskRescheduled will fire upon successful events and may or may not fire upon errors. For example, if the AutoCompoundDelegationStake fails due to insufficient balance to pay for the next execution fee, the TaskRescheduled event should still be fired.

Implementing these changes will enhance the clarity and separation of task-related events, providing a more structured and meaningful representation of task execution and rescheduling.
