keep-starknet-strange / gomu-gomu-no-gatling
Blazing fast tool to benchmark Starknet sequencers
License: MIT License
We aim to introduce a new benchmarking tool named "Gomu Gomu No Gatling" for evaluating Starknet sequencers. This tool will primarily focus on calculating the TPS (transactions per second) and potentially other metrics, such as Cairo steps per second, at a later stage.
For the current scope, the operations that we'll be focusing on are:
The configuration file will be structured as follows:
rpc:
  url: "https://sharingan.madara.zone"
simulation:
  fail_fast: true
setup:
  create_accounts:
    num_accounts: 10
deployer:
  address: "0x0000000000000000000000000000000000000000000000000000000000000001"
  signing_key: "0x0"
  salt: 1
@d-roak will be following and reviewing this mission.
Steps to reproduce
Set num_erc721_configs to 0 and run.
Error Trace
The application panicked (crashed).
Message: attempt to subtract with overflow
Location: /Users/0xevolve/Documents/GitHub/gomu-gomu-no-gatling/src/metrics.rs:64
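The panic points to an unchecked u64 subtraction at metrics.rs:64 that underflows when the benchmark produces no data. A minimal std-only sketch of the defensive pattern (the function and argument names here are hypothetical, not the actual metrics.rs code):

```rust
/// Hypothetical metrics helper: number of blocks covered by a benchmark run.
/// `saturating_sub` clamps to 0 instead of triggering the
/// "attempt to subtract with overflow" panic that an unchecked
/// `end - start` raises in debug builds when the run produced no blocks
/// (e.g. when num_erc721_configs is set to 0).
fn blocks_elapsed(start_block: u64, end_block: u64) -> u64 {
    end_block.saturating_sub(start_block)
}
```

Alternatively, `checked_sub` returns an `Option` so the caller can surface an explicit error instead of silently clamping.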
gomu-gomu-no-gatling/src/actions/shoot.rs (line 84 in 88f02ca)
We should support cairo1 as the shooting account
Current Behavior
If we want to add benchmarks for a new contract and/or a new benchmarking function it's not clear where we should add it.
The codebase violates the SRP (single-responsibility principle) in many places.
Expected Behavior
GatlingShooterSetup should only be responsible for doing the setup for one contract. deploy_erc20 should be replaced by deploy_contract.
Here is an example of an ERC20Shooter; from this example it's very clear how one would add an ERC721Shooter or any other shooter for another contract.
// Note: `async fn` in traits requires the async-trait crate
// (stable Rust does not support it natively at the time of writing).
#[async_trait]
trait Shooter {
    fn setup(&self);
    fn shoot(&self, config: GatlingConfig);
    async fn execute(&self, user: &mut GooseUser) -> TransactionResult;
    fn get_calldata(&self) -> Vec<FieldElement>;
}

struct ERC20Shooter {
    contract_name: String,
    contract_address: String,
    num_runs: u64,
}

#[async_trait]
impl Shooter for ERC20Shooter {
    fn setup(&self) {
        log::info!("Setting up {} shooter", self.contract_name);
    }

    fn shoot(&self, config: GatlingConfig) {
        log::info!("Shooting {}", self.contract_name);
    }

    async fn execute(&self, user: &mut GooseUser) -> TransactionResult {
        log::info!("Executing {}", self.contract_name);
        Ok(())
    }

    fn get_calldata(&self) -> Vec<FieldElement> {
        let (amount_low, amount_high) = (felt!("1"), felt!("0"));
        vec![VOID_ADDRESS, amount_low, amount_high]
    }
}
With such a design we can simplify the whole codebase and get rid of all the erc20/erc721-specific functions. Since most of the code for erc20 and erc721 transfers is fairly similar, we can abstract the execution logic under the Shooter trait and let the client implement its own logic.
This is a very simple spec; as the refactor progresses we can refine it to our needs. We will probably want to put all the shared core logic under another trait, GooseExecutor, that would be able to operate on any object that implements the Shooter trait.
pub async fn run_goose<T: Shooter>(shooter: T) -> color_eyre::Result<()> {
    // ..
    let transaction: TransactionFunction = Arc::new(move |user| Box::pin(shooter.execute(user)));
    // ..
    Ok(())
}
Currently, num_accounts is the number of accounts that are receiving transactions. This is counterintuitive: at first sight, we expect it to be the number of accounts that are sending transactions.
gomu-gomu-no-gatling/config/default.yaml (line 15 in 88f02ca)
After the merge of #3, the only functional configs are default.yaml and 2.1.0.yaml. The other configs are all outdated and still contain the simulation config. The readme suggests running gatling shoot -c config/rinnegan.yaml, which evidently fails.
Running a benchmark on a totally new Madara instance results in some rejected transactions on both the ERC20 and ERC721 benchmarks. We should investigate why these transactions are being rejected.
Current Behavior
ERC721 mints are performed by a single account, as the mint function can only be called by the owner.
Expected Behavior
mint can be called by anyone. Multi-account minting should then be re-added.
dotenv is Unmaintained
| Details | |
|---|---|
| Status | unmaintained |
| Package | dotenv |
| Version | 0.15.0 |
| URL | dotenv-rs/dotenv#74 |
| Date | 2021-12-24 |
dotenv by description is meant to be used in development or testing only.
Using this in production may or may not be advisable.
The below may or may not be feasible alternative(s):
See advisory page for additional details.
Current Behavior
Transactions are sent sequentially.
Expected Behavior
We should have one thread per sending account.
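A std-only sketch of the per-account parallelism described above: one OS thread per sending account, each running its own sequential sending loop. The real implementation would more likely spawn async tasks (the project already uses an async runtime), and `send_batch` here is a hypothetical stand-in for the per-account signing/submission loop, not gatling's API.

```rust
use std::thread;

// Hypothetical per-account sending loop: signs and submits `txs`
// transactions from one account, returning the number submitted.
fn send_batch(_account: &str, txs: usize) -> usize {
    // ... sign and submit `txs` transactions sequentially ...
    txs
}

// Spawn one thread per sending account, then join them all and
// total the submitted transactions.
fn send_parallel(accounts: &[&'static str], txs_per_account: usize) -> usize {
    let handles: Vec<_> = accounts
        .iter()
        .copied()
        .map(|acc| thread::spawn(move || send_batch(acc, txs_per_account)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```

Each account keeps its own nonce sequence, so per-account threads avoid the nonce contention that a shared sender would hit.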
With the v0.1.0 (#2) we created an MVP of the benchmarking tool. For the v0.2.0 we intend to improve the overall functionality and fix known errors/performance issues. Some of the tasks already have issues created or partially created.
We should be able to run gomu gomu no gatling in GH workflows.
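A rough sketch of what such a workflow could look like (the action versions, binary path, and config name are assumptions; a CI run would also need a sequencer endpoint reachable from the runner):

```yaml
# Hypothetical GitHub Actions workflow, not the project's actual CI.
name: benchmark
on: [push]
jobs:
  gatling:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo build --release
      # Assumes a sequencer is reachable at the URL in the config.
      - run: ./target/release/gatling shoot -c config/default.yaml
```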
yaml-rust is unmaintained.
| Details | |
|---|---|
| Status | unmaintained |
| Package | yaml-rust |
| Version | 0.4.5 |
| URL | rustsec/advisory-db#1921 |
| Date | 2024-03-20 |
The maintainer seems unreachable. Many issues and pull requests have been submitted over the years without any response.
Consider switching to the actively maintained yaml-rust2 fork of the original project:
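If the dependency is direct (rather than pulled in transitively), the swap would be a one-line Cargo.toml change, assuming something like:

```toml
[dependencies]
# Replace the unmaintained yaml-rust with its maintained fork.
# The version shown is illustrative; check crates.io for the latest.
yaml-rust2 = "0.8"
```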
See advisory page for additional details.
Hey everyone! I have been interacting with the Gomu Gomu project for a few days now as I was implementing the cache layer over Madara. This gave me some time to think about some ideas for benchmark flows, and after discussing with @d-roak we decided to open an issue to talk about it.
I do not think that this brings anything new under the sun compared to the initial issue by @abdelhamidbakhta, but I wanted to give my whole thinking process.
From my understanding there are a few elements driving the development of the project, but the main one seems to be having a standard benchmark toolkit for every Starknet sequencer implementation. This pushes for a CLI that could be used against any sequencer RPC endpoint.
However, I think each sequencer will also need to be able to implement its own benchmark flow easily, either in the sequencer repo or in this one.
By splitting the Gomu Gomu implementation into three crates (or two if three seems overkill), I think we could achieve that. My idea was:
- cli crate: contains the interface to interact with the shooter logic through a terminal. Practical to easily integrate in CI and to use in a dev environment.
- lib crate: contains the core logic of the shooter. It is responsible for running the benchmark flows, measuring performance, and generating reports.
- flow crate: contains the logic for each benchmark flow, along with its own ephemeral state during testing.

Here lib and flow could be merged, but I liked the idea of having each crate responsible for different elements.
I think the current GatlingShooter is really good at what it does and we should not touch it too much. However, I would change the way we run benchmarks. For starters, I would try to abstract the benchmark logic as much as possible from GatlingShooter and make it generic over a trait we define. Taking inspiration from tester::Bencher, we could add this method to run flows:
impl GatlingShooter {
    // Arguments available in the closure should be iterated upon to identify
    // everything needed during a benchmark.
    fn iter<T, F: Flow<T>>(&mut self, flow: F, count: u64) {
        // Run the closure the necessary number of times while measuring the
        // time taken on each run, then save all measured performances in
        // GatlingShooter.
    }
}
Then we could declare flow over a trait:
trait Flow<T> {
    // Called by GatlingShooter::iter() before running each closure
    fn initialize(gatling_shooter: GatlingShooter) -> Result<Self, Error>
    where
        Self: Sized;

    // Method to be run by GatlingShooter::iter()
    fn execution(
        rpc: Arc<JsonRpcClient<HttpTransport>>,
        accounts: Vec<SingleOwnerAccount<Arc<JsonRpcClient<HttpTransport>>, LocalWallet>>,
    ) -> T;

    // Maybe add finalize() here?
}
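A std-only sketch of what iter() could do internally, using Instant-based timing. The type, field, and method names here are illustrative stand-ins, not the real GatlingShooter API:

```rust
use std::time::{Duration, Instant};

// Minimal, std-only sketch of the proposed iter(): run a flow `count`
// times and record how long each run took. A real version would call
// the flow's execution() with the RPC client and accounts.
struct BenchRecorder {
    timings: Vec<Duration>,
}

impl BenchRecorder {
    fn iter<F: FnMut()>(&mut self, mut flow: F, count: u64) {
        for _ in 0..count {
            let start = Instant::now();
            flow(); // the flow's execution() step would run here
            self.timings.push(start.elapsed());
        }
    }
}
```

From the recorded durations, the recorder could then derive TPS and other summary metrics after the run.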
If we consider having a flow crate, all standard flows defined in Gomu Gomu could be exported over an enumeration to ease the logic in the cli crate. We could define a proc macro to auto-generate it at build time.
With this kind of setup it would also be quite easy for any project using the lib crate directly to define its own flows in Gomu Gomu.
Looking forward to hearing thoughts on this or answering any questions!
We should investigate why running gomu gomu with the v2.1.0 config is so slow. The only difference is the version of the contracts being used, and thus the execution encoding is ExecutionEncoding::New instead of ExecutionEncoding::Legacy.
This section aims to describe the tasks related to refining the testing framework, test data preparation, test case definitions, and reporting. The overarching objective is to enable the execution of a comprehensive set of tests on a specified sequencer configuration, yielding not only benchmark results but also metrics crucial for informed decision-making and analysis. The ultimate aim is the establishment of automated workflows capable of seamless execution across diverse target environments and load/performance configurations.
To achieve this goal, it might be necessary to establish specific smart contracts for testing purposes, allowing for the simulation of more complex business logic and smart contract interactions. The inclusion of these elements would enhance the thoroughness of testing scenarios. Reporting mechanisms will be implemented to capture and present benchmark results and metrics in a clear and accessible manner, facilitating effective analysis.
Include comprehensive documentation for setup, configurations, and usage.