aurora-is-near / aurora-engine
⚙️ Aurora Engine implements an Ethereum Virtual Machine (EVM) on the NEAR Protocol.
Home Page: https://doc.aurora.dev/develop/compat/evm
Currently, the prover.rs file contains an EIP-712 implementation that was previously used as a design approach for exits from Aurora. It is no longer used anywhere, so it makes sense to remove it.
Or do we need it for any future implementations?
Getting the following error when trying to run `make check` on a MacBook Air (M1, 2020):
error[E0425]: cannot find function `get_fault_info` in this scope
--> /Users/rootling/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-runtime-core-near-0.17.1/src/fault.rs:289:21
|
289 | let fault = get_fault_info(siginfo as _, ucontext);
| ^^^^^^^^^^^^^^ not found in this scope
Compiling wasmer-compiler v1.0.2
Compiling near-vm-errors v3.0.0
Compiling cranelift-frontend v0.67.0
error: aborting due to previous error
For more information about this error, try `rustc --explain E0425`.
error: could not compile `wasmer-runtime-core-near`
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
The engine implementation is sparse and filled with quite a bit of dummy data. This may take a bit of research to find the correct or similar methods on NEAR.
See `impl evm::backend::Backend for Engine` in engine.rs.
We want to automate deployment to testnet, however we also do not want to accidentally break anything. Therefore, we need strict CI tests which will ensure merging the new code will not break existing features.
Suggested workflow:
- Create a staging.aurora account on Near
- Deploy to staging.aurora (and initialize the contract)
This CI suite will likely take a long time to run, so I don't think it should trigger on every push. It also has a stateful impact on testnet, so multiple of these runs cannot be done in parallel. Maybe we should forbid PRs directly targeting develop and instead let people merge into another branch (e.g. staging) which automatically tries to deploy to testnet once per day and, if successful, then merges to develop?
Suggestions for the transactions performed, and how to setup the triggering of these tests are welcome.
#53 introduces a particular ERC20 that will be used with bridged tokens. We are already measuring the gas usage of ERC20 tokens in the benchmarks, however they currently use a generic ERC20. To get more precise gas measurements for ERC20 transfers the benchmark should be updated to use the actual ERC20 we are using in the EVM.
In order to successfully replay transactions from the Görli testnet we need balance transfers.
It's easy to accidentally mess up amounts in EVM, even for experts like ourselves (e.g. #57 (comment) ). Also, U256
represents lots of different (not interchangeable) quantities: account nonces, account balances (in Wei), ERC20 token balances (of various sorts!), etc.
To make sure we get account balances right we should have a newtype to represent amounts in Wei, and ensure all function arguments that represent balances use this type. It should also include convenience functions for unit conversion.
E.g.
pub struct Wei(U256);

impl Wei {
    const ETH_TO_WEI: U256 = U256::from(1_000_000_000_000_000_000);

    pub fn new(amount: U256) -> Self {
        Self(amount)
    }

    pub fn from_eth(amount: U256) -> Option<Self> {
        amount.checked_mul(Self::ETH_TO_WEI).map(Self)
    }
}
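A runnable version of the same idea, substituting u128 for U256 purely for illustration (note that in the real code, ETH_TO_WEI may need a non-const initializer, since U256::from is not a const fn):

```rust
// Illustration only: u128 stands in for U256 so the sketch is self-contained.
#[derive(Debug, PartialEq, Clone, Copy)]
pub struct Wei(u128);

impl Wei {
    const ETH_TO_WEI: u128 = 1_000_000_000_000_000_000;

    pub fn new(amount: u128) -> Self {
        Self(amount)
    }

    /// Checked conversion from whole ETH; returns None on overflow.
    pub fn from_eth(amount: u128) -> Option<Self> {
        amount.checked_mul(Self::ETH_TO_WEI).map(Self)
    }

    /// Unwrap back to the raw amount in Wei.
    pub fn raw(self) -> u128 {
        self.0
    }
}
```

The newtype means a function taking `Wei` cannot silently be handed a nonce or an ERC20 balance, which is exactly the class of mistake described above.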
To prevent regressions and ensure performance. These tests are pretty long-running and need large test data sets, so we probably need something other than GitHub workflows for that. Details TBD.
Currently the ecpair
precompile cannot be executed because it uses up more than 200 Tgas by itself. The purpose of this issue is to get this number down so that it becomes usable.
See #41 for information about measuring gas usage in pre-compiles.
Strings can easily hog a lot of the bytecode that is produced in the WASM file. There are a lot of errors that could use the same error kinds.
A probable solution would be to define library-wide errors in an error.rs file. Errors that relate directly to other parts of the code should be defined together with that code and re-exported from the error.rs file. This also has the additional benefit of providing simple reference material for looking up all of the library's errors in one module.
This is related in part to #7, as this will help reduce contract code.
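A minimal sketch of what such an error.rs module might look like, reusing error strings already seen elsewhere in this tracker (the enum shape and names are assumptions, not the library's actual API):

```rust
// error.rs sketch: each error kind is defined once, so the same static string
// is reused everywhere instead of duplicating literals across the codebase.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum EngineError {
    InvalidNonce,
    OutOfFund,
    StackUnderflow,
}

impl EngineError {
    /// One shared &'static str per kind; repeated panics all point at
    /// the same bytes in the wasm binary.
    pub fn as_str(&self) -> &'static str {
        match self {
            EngineError::InvalidNonce => "ERR_INCORRECT_NONCE",
            EngineError::OutOfFund => "ERR_OUT_OF_FUND",
            EngineError::StackUnderflow => "ERR_STACK_UNDERFLOW",
        }
    }
}
```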
Currently, a bunch of our code is quite difficult to test due to the requirement of the SDK in many of the functions. Obviously, not impossible. After going through much of the code, I have determined that the SDK logic, as much as possible, should be separated from the Engine.
A possible solution would be to have an SDK layer wrap the functional code. If it is not possible for some of the methods, then keep them in the SDK layer.
Since this would be a rather large refactor, it would be wise to do this on a separate branch and to cherry-pick and adapt all new commits as they come in until it is caught up. Once caught up, it should be reviewed by all and then merged in.
Transaction Id 2YteBRDGcLvKAN7eVF2cgk4YD1WiCkwjByPYLb7rLkRQ
evm-bully: error: failure:
{
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: ERR_STACK_UNDERFLOW"
}
}
}
}
See replay failing transactions on how to replay ropsten-block-11-tx-3.tar.gz.
See also #2.
Please fix the failing test_meta_parsing
test case added in #6:
https://github.com/aurora-is-near/aurora-engine/runs/2214184828
Precompiles should be feature-flag protected:
This will require lots of dependencies; we should check whether they support no_std. If they do not, we will need to stomach std for now.
We want a GitHub action which triggers on merge to develop
which will deploy the new version of the contract to @develop.aurora
on TestNet.
Anything else I've missed? Comments welcome.
Presently we allow any gas limit to be specified on a transaction passed to submit
. This is incorrect behaviour relative to Ethereum where there is a notion of "intrinsic gas".
The current behaviour is causing a problem because (a) it is an incompatibility with Ethereum, and we are always pushing for maximum compatibility; and (b) the relayer has trouble handling transactions with 0 gas limit (and these would be invalid transactions if we had the correct logic in the Engine).
This issue is completed when the following is done:
- transactions passed to submit must have a gas limit greater than or equal to the intrinsic gas
- the gas limit is enforced in submit (i.e. fail with OOG when computation exceeds the limit)
References:
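For context, intrinsic gas is the minimum a transaction is charged before any execution begins. A sketch using the Istanbul-era costs from the Ethereum yellow paper (21000 base, 53000 for contract creation, 4 gas per zero data byte, 16 per non-zero byte):

```rust
/// Intrinsic gas of a transaction (Istanbul costs; earlier forks charged
/// 68 gas per non-zero byte instead of 16).
fn intrinsic_gas(data: &[u8], is_create: bool) -> u64 {
    let base: u64 = if is_create { 53_000 } else { 21_000 };
    let data_cost: u64 = data
        .iter()
        .map(|b| if *b == 0 { 4 } else { 16 })
        .sum();
    base + data_cost
}
```

A transaction whose gas limit is below this number is invalid in Ethereum, which is the check submit is currently missing.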
This is an issue meant to keep track of the documentation of the whole library until the first release on the main network.
Basic checklist:
- Examples use `?`, not `try!`, not `unwrap` (C-QUESTION-MARK)
- Crate sets html_root_url attribute "https://docs.rs/CRATE/X.Y.Z" (C-HTML-ROOT)
DO NOTE that the checklist above may or may not be applicable, as this library is meant to be a WASM binary and not exactly a library, though it could certainly be published as such in the future.
It is hard to tell from this tuple: type PrecompileResult = Result<(ExitSucceed, Vec<u8>, u64), ExitError>; what the last value (u64) means. I'm assuming that the second value, Vec<u8>, is the output.
Could you clarify what the last value means?
Also consider changing this tuple to a named struct and/or adding some comments documenting this.
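As a sketch of the named-struct suggestion (the field meanings are my assumption, and the exit types here are placeholder stand-ins for the real SputnikVM ones):

```rust
// Placeholder stand-ins for the SputnikVM exit types, for illustration only.
#[derive(Debug, PartialEq)]
pub enum ExitSucceed {
    Returned,
}
#[derive(Debug)]
pub struct ExitError;

/// Named fields make the tuple self-documenting.
pub struct PrecompileOutput {
    pub exit_status: ExitSucceed,
    /// Return data of the precompile call (the Vec<u8> in the tuple).
    pub output: Vec<u8>,
    /// Assumed meaning of the trailing u64: gas used by the precompile.
    pub cost: u64,
}

pub type PrecompileResult = Result<PrecompileOutput, ExitError>;
```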
For the evm-bully we need a begin_chain
function.
The function should allow to set the account balances in the genesis state and should only be available in a testing version of the contract.
A suggested way to produce exactly the same wasm file, per near-sdk-rs: https://github.com/near/near-sdk-rs/tree/master/contact-builder
Today, I was helping joshuajbouw to debug an issue, and I spent some extra time suffering from the "you need make release before cargo test" papercut. I spent some time initially setting things up, and then debugging the fact that my changes to the contract were not picked up by the tests (as I was running cargo test and not make release && cargo test).
This could be avoided if the tests just built the contract themselves, à la
fn get_contract() -> PathBuf {
    // Build the contract at most once per test run.
    static ONCE: std::sync::Once = std::sync::Once::new();
    ONCE.call_once(|| {
        std::process::Command::new("cargo")
            .env("RUSTFLAGS", "-C link-arg=-s")
            .args(["build", "--target", "wasm32-unknown-unknown", "--release"])
            .status()
            .unwrap();
    });
    PathBuf::from("./target/release.wasm")
}
Tracking issue for Berlin HF support.
As discussed on the 2021-03-26 weekly update.
Currently, we have ExpectUtf8
, SdkUnwrap
, SdkExpect
defined in src/lib.rs
and src/types.rs
files.
It makes sense to move this instead to some specific module or even to src/prelude.rs
(I'm not sure that prelude
is a good place for it though as this will require importing sdk
module).
Transaction Id 361165UkrpXsBPxFuvqG1UA97camS1CZYZc6mb7KjPSE
evm-bully: error: failure:
{
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: ERR_OUT_OF_FUND"
}
}
}
}
See replay failing transactions on how to replay rinkeby-block-55-tx-0.tar.gz
This modifies functions like deposit(receiver, amount) and withdraw(receiver, amount) to only be allowed to be called from a specific contract. This requires a bit more specification though.
Otherwise, make sure that the interface is compliant with the NEP-141 standard.
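A rough sketch of the caller restriction (everything here is hypothetical: the Connector shape, the account names, and the ERR_NOT_ALLOWED string are placeholders, not the contract's API; ERR_OUT_OF_FUND mirrors the error string seen elsewhere in this tracker):

```rust
struct Connector {
    // Hypothetical: the one contract allowed to call deposit/withdraw.
    allowed_caller: String,
    balances: std::collections::HashMap<String, u128>,
}

impl Connector {
    /// Credit `receiver`, but only when invoked by the allowed contract.
    fn deposit(&mut self, caller: &str, receiver: &str, amount: u128) -> Result<(), &'static str> {
        if caller != self.allowed_caller {
            return Err("ERR_NOT_ALLOWED"); // placeholder error string
        }
        *self.balances.entry(receiver.to_string()).or_insert(0) += amount;
        Ok(())
    }

    /// Debit `owner`, with the same caller restriction.
    fn withdraw(&mut self, caller: &str, owner: &str, amount: u128) -> Result<(), &'static str> {
        if caller != self.allowed_caller {
            return Err("ERR_NOT_ALLOWED");
        }
        let balance = self.balances.entry(owner.to_string()).or_insert(0);
        *balance = balance.checked_sub(amount).ok_or("ERR_OUT_OF_FUND")?;
        Ok(())
    }
}
```

On NEAR the caller check would presumably use the predecessor account id; the String comparison here only illustrates the gating idea.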
Per the discussion on PR #87 we would like to not need to get_generation
for each call to storage
in the EVM backend.
This issue is for the following:
- cache the generation per address so it is not re-fetched on every storage call
In improving the performance it might also be worthwhile to try to remove the generation argument from get_storage and set_storage, since the following invariant must hold: generation == get_generation(address) (where address is also an argument to get/set_storage), but the current type signatures do not enforce that invariant.
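One way to sketch the caching half of this (an assumed design, not the actual Engine code): memoize the generation per address so the underlying lookup runs at most once per address:

```rust
use std::collections::HashMap;

/// Hypothetical per-address generation cache.
pub struct GenerationCache {
    cached: HashMap<[u8; 20], u32>,
}

impl GenerationCache {
    pub fn new() -> Self {
        Self { cached: HashMap::new() }
    }

    /// Return the cached generation, calling `fetch` (standing in for
    /// get_generation) only on the first request for an address.
    pub fn get(&mut self, address: [u8; 20], fetch: impl Fn([u8; 20]) -> u32) -> u32 {
        *self.cached.entry(address).or_insert_with(|| fetch(address))
    }
}
```

Making get_storage/set_storage go through such a cache would also let their signatures drop the generation argument, enforcing the invariant by construction.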
CI takes a long time to run presently. The majority of this time is spent downloading and compiling rust crates. If we cache cargo artifacts then we should be able to greatly improve the CI performance. The purpose of this issue is to investigate using caching in GitHub actions.
As with upstream SputnikVM, we should also implement the whole suite of tests.
Upstream only tests for general state changes. We would need to test for more than just that as we have a more complete implementation with precompiles and others. While we do feel confident about our implementation and upstream does take care of the bulk of the main tests, it still would be great to ensure full compatibility as the EVM updates.
This requires a design proposal, as this will be a new library to be made in the discussions.
To keep the library compartmentalised we should aim to start making some sub-crates. This is quite standard practice in Rust. Right now, we just have a gigantic library with everything in it. Not absolutely required right now, but will be good to tackle it when possible. The longer this task gets pushed to the side, the longer it'll take.
A typical solution would look like having all the precompiles in their own crate for example.
Simply, as the used gas is now correctly returned in the precompiles, there should be tests to ensure that the gas tests will always be correct.
Of course, a standard solution would be to simply test that single output. This is static for some methods, but others are a bit dynamic.
For the evm-bully we need a begin_block
function.
The begin_block
function should allow to set the necessary block context the evm-bully
needs to replay transactions successfully (see aurora-is-near/evm-bully#3).
It should only be available in a testing version of the contract.
Our project has a dependency that does not compile on latest nightly
$ cargo +nightly check
Compiling logos-derive v0.7.7
error[E0061]: this function takes 1 argument but 2 arguments were supplied
--> /home/birchmd/.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:55:20
|
55 | extras.insert(util::ident(&ext), |_| panic!("Only one #[extras] attribute can be declared."));
| ^^^^^^ ----------------- ----------------------------------------------------------- supplied 2 arguments
| |
| expected 1 argument
|
note: associated function defined here
error[E0061]: this function takes 1 argument but 2 arguments were supplied
--> /home/birchmd/.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:89:23
|
89 | error.insert(variant, |_| panic!("Only one #[error] variant can be declared."));
| ^^^^^^ ------- -------------------------------------------------------- supplied 2 arguments
| |
| expected 1 argument
|
note: associated function defined here
error[E0061]: this function takes 1 argument but 2 arguments were supplied
--> /home/birchmd/.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:93:21
|
93 | end.insert(variant, |_| panic!("Only one #[end] variant can be declared."));
| ^^^^^^ ------- ------------------------------------------------------ supplied 2 arguments
| |
| expected 1 argument
|
note: associated function defined here
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0061`.
error: could not compile `logos-derive`
To learn more, run the command again with --verbose.
This logos-derive
dependency comes from our dependency on lunarity
. Fortunately, we are already working off a fork which we control (it's on Illia's GitHub), and the latest version of logos
(v0.12) does compile with latest nightly. So we can resolve this issue by upgrading logos
to the latest version in our fork of the lunarity
project.
Unfortunately, this upgrade is non-trivial because there were several breaking API changes in logos
, so some code in lunarity
will need to be refactored to work with the latest version. This issue represents the work to do that refactoring and thus have a version of lunarity
that compiles in latest nightly for our project to depend on.
This is probably a low priority item since it will likely not be blocking us until we decide to upgrade our rust-toolchain
file to a later version of rust.
Presently we can only test contract code via integration tests. Integration tests are obviously important, but I do not think they are a replacement for unit tests. Unit testing encourages good isolation of logic and components so that they can be tested independently. Unit tests are also generally simple to understand, and so can function as a form of documentation for on-boarding new developers to the project (they show how each piece is supposed to work). They also make future modifications to code easier to make because breaking a specific unit test will more clearly show what is broken, as opposed to an integration test which might be more opaque.
The reason unit tests are not possible currently is twofold: sdk functions generally rely on externals which in production are provided to the wasm environment by the Near runtime. In a unit-testing environment we should avoid such calls and instead focus on testing pure functions which do not need the context (storage, block height, timestamps, etc.) the sdk calls provide. This may require refactoring some existing code to isolate the pure logic from the sdk side-effects.
The purpose of this issue is to resolve the two above items. It would also be nice to write a couple of new unit tests as well, but complete unit test coverage should probably be a separate issue, lest the work for this one become too large.
It would be good to have an integration test that proves we can safely upgrade from the current master to develop, keeping intact any existing state. If this test fails then we know we need to write a state migration in develop. The flow for this test would be something along the lines of:
1. Deploy the current master contract using near-sdk-sim and do some transactions (make some accounts, transfer some eth, etc.)
2. Upgrade the contract to develop (stage_upgrade then deploy_upgrade)
3. Check that the existing state is still intact
As we start getting deeply into adding hard fork support beyond precompiles, we need to have library-wide support for hard forks.
A possible solution would be to use either marker types or an enum. A fieldless enum needs at least one byte for its discriminant, whereas a marker type, from how I understand it, is zero-sized and does not use any extra data at all. There should also possibly be different deployment options for deploying different hard forks. This would make it easier to redeploy on other chains if we ever want to expand beyond ETH.
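The two representations can be sketched side by side; a minimal illustration (the names are hypothetical, chosen to echo the BN128Add::<Istanbul>-style generics already used for precompiles):

```rust
// Enum approach: a runtime value carried in state; a fieldless enum
// occupies at least one byte for its discriminant.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum HardFork {
    Istanbul,
    Berlin,
}

// Marker approach: zero-sized types resolved at compile time, so the
// fork choice costs nothing at runtime but is fixed per deployed binary.
pub struct IstanbulMarker;
pub struct BerlinMarker;

pub trait Fork {
    const NAME: &'static str;
}
impl Fork for IstanbulMarker {
    const NAME: &'static str = "Istanbul";
}
impl Fork for BerlinMarker {
    const NAME: &'static str = "Berlin";
}
```

The trade-off: markers push the fork choice to build time (matching the "different deployment options" idea), while an enum lets one binary switch forks at runtime at the cost of storing and branching on the value.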
Cargo recently stabilized a new, opt-in feature resolution algorithm:
https://doc.rust-lang.org/cargo/reference/features.html#feature-resolver-version-2
It subsumes -Z avoid-dev-deps
:
diff --git a/Cargo.toml b/Cargo.toml
index a84a9a2..55db231 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -11,6 +11,7 @@ repository = "https://github.com/aurora-is-near/aurora-engine"
license = "CC0-1.0"
publish = false
autobenches = false
+resolver = "2"
[lib]
crate-type = ["cdylib", "rlib"]
diff --git a/Makefile b/Makefile
index 6f02e46..80b0062 100644
--- a/Makefile
+++ b/Makefile
@@ -14,7 +14,7 @@ release.wasm: target/wasm32-unknown-unknown/release/aurora_engine.wasm
ln -sf $< $@
target/wasm32-unknown-unknown/release/aurora_engine.wasm: Cargo.toml Cargo.lock $(wildcard src/*.rs)
- RUSTFLAGS='-C link-arg=-s' $(CARGO) build --target wasm32-unknown-unknown --release --no-default-features --features=$(FEATURES) -Z avoid-dev-deps
+ RUSTFLAGS='-C link-arg=-s' $(CARGO) build --target wasm32-unknown-unknown --release --no-default-features --features=$(FEATURES)
debug: debug.wasm
@@ -22,7 +22,7 @@ debug.wasm: target/wasm32-unknown-unknown/debug/aurora_engine.wasm
ln -sf $< $@
target/wasm32-unknown-unknown/debug/aurora_engine.wasm: Cargo.toml Cargo.lock $(wildcard src/*.rs)
- $(CARGO) build --target wasm32-unknown-unknown --no-default-features --features=$(FEATURES) -Z avoid-dev-deps
+ $(CARGO) build --target wasm32-unknown-unknown --no-default-features --features=$(FEATURES)
.PHONY: all release debug
In #41 gas usage of the benchmarks is simply reported to the terminal. This makes capturing the data and analyzing it a manual process, which is sub-optimal.
The goal of this issue is to find a better way to capture the gas usage data, and to combine it with the timing data automatically in order to draw conclusions.
One possibility might be to use a custom measurement in criterion
itself.
ecrecover
is now in nearcore
master branch. This means we can run the benchmarks again to see our new expected tps.
{
"Failure": {
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: ERR_INCORRECT_NONCE"
}
}
}
}
}
See replay failing transactions on how to replay goerli-block-12842-tx-0.tar.gz
This error is non-deterministic, seems to be related to state being changed in a timed out transaction. Might also be a nearcore
problem.
See also #2.
During its development, the eth-connector was essentially treated as a separate contract that happens to be contained inside the main engine. A side effect of this is that it has its own initialization function, which is separate from the one for the engine itself.
However, the connector is not a distinct contract; it is just a part of the engine, and having to make two calls to fully initialize the engine is sub-optimal. For example, this recently caused an issue with the bully.
The purpose of this issue is to make the call to new
alone enough to initialize the contract.
Additionally, there are two prover-related fields: one in the connector and one in the engine state. The latter appears to never be used so I suspect that it could be removed as it is likely a duplicate of the former. This is relevant to this issue because it means the initialization still only needs to take one prover argument.
Note: any change to the contract initialization also needs to be reflected in the aurora.js
package. For example, see aurora-is-near/aurora.js#3
Please address the various code-quality NOTE and TODO comments and suggestions in @evgenykuzyakov's code review in #74.
To detect if a call was directed to a precompile we are using the following code:
https://github.com/aurora-is-near/aurora-engine/blob/master/src/precompiles/mod.rs#L125
match address.to_low_u64_be() {
1 => Some(ECRecover::run(input, target_gas, context)),
2 => Some(SHA256::run(input, target_gas, context)),
3 => Some(RIPEMD160::run(input, target_gas, context)),
4 => Some(Identity::run(input, target_gas, context)),
5 => Some(ModExp::<Byzantium>::run(input, target_gas, context)),
6 => Some(BN128Add::<Istanbul>::run(input, target_gas, context)),
7 => Some(BN128Mul::<Istanbul>::run(input, target_gas, context)),
8 => Some(BN128Pair::<Istanbul>::run(input, target_gas, context)),
9 => Some(Blake2F::run(input, target_gas, context)),
// Not supported.
_ => None,
}
Notice that we are truncating the address (20 bytes) to the last 8 bytes, and we check whether that matches any of the precompiles. I think we should not truncate the address and should check directly against the full 20 bytes; otherwise we are exposing ourselves to collisions between real deployed addresses and precompiles. 64 bits still leaves some room, so collisions should not happen naturally, but I think it is not unreasonable to believe that it is easy to create an address whose last 8 bytes match one of our precompiles.
I haven't reasoned about the security implications of this, but let's avoid hitting this issue anyway.
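A sketch of the full-width check (a hypothetical helper, not the repo's code): require the high 12 bytes of the 20-byte address to be zero before matching the low 8 bytes, so a deployed contract whose address merely ends in 0x…01 is not mistaken for ECRecover.

```rust
/// Map a full 20-byte address to a precompile number, or None.
pub fn precompile_index(address: &[u8; 20]) -> Option<u64> {
    // Any non-zero high byte means this is an ordinary contract address.
    if address[..12].iter().any(|b| *b != 0) {
        return None;
    }
    let mut low = [0u8; 8];
    low.copy_from_slice(&address[12..]);
    match u64::from_be_bytes(low) {
        // Precompiles 1 through 9, as in the match statement above.
        n @ 1..=9 => Some(n),
        _ => None,
    }
}
```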