Comments (11)
I like the idea of flipping the order. I'm just getting back from vacation and catching up on a bunch of stuff. Let's get the environment stable, and then I think this is easy to do. Will leave this open until we get it in.
from frameworkbenchmarks.
Flipping the order each time makes sense to me.
from frameworkbenchmarks.
What I maintain starts with "u", so I'm heavily biased here, but I would also appreciate this change being implemented.
My concern is not about failures or restarts, as they usually don't happen that often when the environment is stable, but rather about a feedback latency:
I mostly use TFB as a measurement tool (and a big shout-out to TE crew for providing that tool), and given a hypothetical performance drop in the ongoing run, I'm left with approx. a day to squeeze a potential fix into the next measurement, and a failure to do so would lead to a feedback latency of two full weeks (every run is approx. a week).
Moreover, any dependency bump I do is at least a week (an almost full run) in terms of feedback latency, and 1.5 weeks on average.
Flipping the order between runs (or FWIW randomizing it) would significantly reduce these latencies for me.
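To make the arithmetic above concrete, here is a rough sketch of the latency model (the numbers and function are illustrative assumptions, not anything from the TFB codebase; it assumes a full run takes ~7 days):

```python
RUN_DAYS = 7  # approximate length of one full TFB continuous run

def feedback_latency_days(days_until_next_run: int, made_the_cutoff: bool) -> int:
    """Days from pushing a fix until its measured results appear.

    If the fix lands before the next run starts, you only wait for that
    run to finish; if you miss the cutoff, you wait out the run you
    missed plus the run that finally includes your fix.
    """
    if made_the_cutoff:
        return days_until_next_run + RUN_DAYS
    return days_until_next_run + 2 * RUN_DAYS

# Regression spotted ~1 day before the current run ends:
print(feedback_latency_days(1, made_the_cutoff=True))   # 8  (about a week)
print(feedback_latency_days(1, made_the_cutoff=False))  # 15 (two full weeks)
```

This matches the two-full-weeks figure quoted above: missing the cutoff costs an entire extra run.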
from frameworkbenchmarks.
That's because the tfb-startup.sh script runs tfb-shutdown.sh on startup; the latter is responsible for flipping the order. Is changing the order only after an unsuccessful run by design?
No, I forgot that we actually run the shutdown script twice after a successful run because it's being called from the startup script as well. The design was supposed to be the exact opposite. I'll have to move it to the startup script and it will just reverse every time a run starts.
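The actual `tfb-startup.sh`/`tfb-shutdown.sh` scripts aren't shown in this thread; as a hedged sketch of the intended behavior described above (function name and signature are hypothetical), flipping at startup rather than shutdown means a crash or a double-invoked shutdown can't flip the order twice between two successful runs:

```python
def next_run_order(frameworks: list[str], previous_was_reversed: bool) -> list[str]:
    """Decide test order at the START of a run, alternating direction.

    Computing this at startup (not shutdown) makes the flip happen
    exactly once per run, regardless of how many times shutdown logic
    is invoked.
    """
    ordered = sorted(frameworks)
    return ordered if previous_was_reversed else list(reversed(ordered))

print(next_run_order(["gemini", "actix", "uvicorn"], previous_was_reversed=False))
# ['uvicorn', 'gemini', 'actix']
```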
from frameworkbenchmarks.
As someone maintaining a benchmark that starts with "x", I feel this.
That said, IMO a fair way of handling order is to prioritize benchmarks with the most recent changes. As for benchmarks that haven't changed in a while, I guess their maintainers care less about the continuous run results.
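This prioritization scheme could be sketched as follows (the framework names, dates, and data source are entirely hypothetical; in practice the last-change date might come from something like `git log -1 -- frameworks/<name>`):

```python
from datetime import date

# Hypothetical last-change dates per framework.
last_changed = {
    "alpha": date(2023, 1, 5),
    "beta": date(2023, 6, 1),
    "gamma": date(2022, 11, 20),
}

# Recently-changed frameworks run first; stale ones run last.
run_order = sorted(last_changed, key=last_changed.get, reverse=True)
print(run_order)  # ['beta', 'alpha', 'gamma']
```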
from frameworkbenchmarks.
I think it's enough to start one run with "a" and the next with "z".
And perhaps we'd still see some differences in the results.
Between one run and the next, the servers, databases, etc. change, so that change affects all frameworks; it doesn't depend on changes in the frameworks themselves.
A mature framework needs fewer changes than a young one, and we can still benchmark locally to test small changes.
from frameworkbenchmarks.
The frameworks in the middle of the run have ~3 days to make changes, whether the benchmark begins with "a" or in reverse order.
The problem is for the frameworks that run last.
Please don't randomize; right now we roughly know when the results for our framework will appear.
But we do need to flip the order on every new run!
from frameworkbenchmarks.
After the last full run finished, the next run did not flip the order.
from frameworkbenchmarks.
That's because the tfb-startup.sh script runs tfb-shutdown.sh on startup; the latter is responsible for flipping the order. Is changing the order only after an unsuccessful run by design?
from frameworkbenchmarks.
I think the following run was reversed: https://tfb-status.techempower.com/results/3c2e9871-9c2a-4ff3-bc31-620f65da4e74. The “last framework” tested is incorrect though.
from frameworkbenchmarks.
@NateBrady23 It looks like the opposite thing is happening now: the order is always reversed, i.e. the implementations starting with "z" run first.
from frameworkbenchmarks.