Comments (16)
Here's what I found on performance regression testing so far:
Option 1
https://developers.google.com/web/tools/lighthouse/ (got there from https://stuartsandine.com/lighthouse-circle-ci/)
It may be possible to use that dummy Todo app (http://todomvc.com/) to measure Mikado's performance with Lighthouse, or to put together our own app from scratch. However, that approach seems better suited to testing the performance of whole apps.
Option 2
https://github.com/krausest/js-framework-benchmark (https://krausest.github.io/js-framework-benchmark/current.html)
From that project's README it seems possible to run the testing framework against just one target UI framework. It may be worth running both Mikado and VanillaJS so that we have a baseline to measure against (e.g. if Travis CI's servers become more or less performant, we don't want to get false positives/negatives).
I could try to be social and reach out to @krausest for advice; he may be able to help Mikado adopt his testing framework by providing JSON output instead of just HTML tables. That would only require one simple Node.js script to parse the VanillaJS and Mikado JSON outputs and compare the performance.
from mikado.
Thanks for the research.
I know Lighthouse and use it often, but it can't be used for this kind of benchmark. We could still use it additionally to get some further information. A todo app has too big a workflow; the tests I made are very fine grained and focus on one very specific job. That's an important basis for meaningful results. Especially if we use these results as a reference during development, the roundtrip through a todo app would add too much noise.
I really like the js-framework-benchmark and I also took a deeper look into it. We actually can't use this tool, because it does not meet the minimum requirements: getting back meaningful feedback that could be used as a benchmark reference is currently not possible with it. I made some suggestions for improvements yesterday, but it is not clear whether they will be implemented. In any case, the async concept, which will always stay part of this test tool, makes it unusable as a benchmark reference.
The simplest solution I came up with is:
- we already have fine grained tests
- we already have a simple test environment which runs each test in an iframe (sandbox) and communicates the results back
- we already have a test environment which runs through chromedriver
- so we just need to extend/copy the existing test environment, run those tests via chromedriver, and benchmark each new build/version against a reference version (the reference version is replaced whenever a new version improves performance)
from mikado.
@snshn It is basically done and also deployed.
go to the test folder:
cd test
run benchmark test:
npm run test:bench
run all tests:
npm run test
example: https://travis-ci.org/nextapps-de/mikado
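Wired into CI, this could look roughly like the fragment below (a guessed .travis.yml sketch; the actual config in the repo may differ):

```yaml
# Hypothetical .travis.yml fragment - runs the benchmark on every build.
language: node_js
node_js:
  - "12"
before_script:
  - cd test
script:
  - npm run test        # functional tests
  - npm run test:bench  # benchmark against the stored reference
```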
Sure, it isn't finalized; I'm a fan of scaling things along with the needs.
Currently the maximum difference could be 20; to be very safe we could lower this to 5, but that would also produce false positives, and those would need a re-run.
from mikado.
The codebase is heavily optimized for Google V8 and the Blink browser engine. It would be better to use the latest version; things may change in the future and we shouldn't benchmark against an outdated reference environment. Currently the bench criteria test runs two times; we can easily increase that value.
I currently use Puppeteer; if you like, you can switch to chromedriver directly, it is not very complicated from the current base. test/bench/runner.js is the main script which initializes the test.
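A stripped-down runner along those lines might look like this sketch (the `window.__duration` handoff and the `bestOf` helper are assumptions for illustration, not the actual contents of test/bench/runner.js):

```javascript
// Hypothetical minimal Puppeteer bench runner.
// Keep the best (lowest) duration out of several runs to reduce noise.
function bestOf(durations) {
  return Math.min(...durations);
}

async function runBench(url, runs = 10) {
  const puppeteer = require('puppeteer'); // lazy: only needed when running
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const durations = [];
  for (let i = 0; i < runs; i++) {
    await page.goto(url, { waitUntil: 'networkidle0' });
    // Assumes the bench page stores its measured duration on window.__duration.
    durations.push(await page.evaluate(() => window.__duration));
  }
  await browser.close();
  return bestOf(durations);
}

module.exports = { bestOf, runBench };
```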
from mikado.
It's probably best to use WebDriver; it's pretty much the same, but we can also run it on Firefox and Safari...
from mikado.
I lowered the maximum allowed deviation to 5 of 1000 (0.5%), which is pretty small. The whole test is also now repeated 10 times (with the option "keep best run" enabled) to produce fewer false positives. When you have something new, just open a new issue ticket and/or a pull request.
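That tolerance check can be sketched as follows (the 1000-point scale and the 5-point tolerance come from the comment above; the function names are made up):

```javascript
// A new build passes when its score is no more than `tolerance` points
// below the reference. With scores normalized to 1000, tolerance 5 = 0.5%.
function withinTolerance(referenceScore, newScore, tolerance = 5) {
  return referenceScore - newScore <= tolerance;
}

// "Keep best run": score each repetition and keep the maximum,
// so one slow CI run doesn't fail the whole benchmark.
function keepBestRun(scores) {
  return Math.max(...scores);
}

module.exports = { withinTolerance, keepBestRun };
```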
from mikado.
Could you make this? Do you need anything to start with?
from mikado.
I'll try to find a way. There's this framework where Mikado scores first; we could record performance results from running it, store them in the repo as a JSON/YAML file, then run it each time in the pipeline and make sure the new values don't regress past the existing ones by more than 0.5% or so. There's always going to be slight deviation between runs, so we can only throttle the performance creep, unless we make those tests strict to the point where every new PR would need to make Mikado faster by at least 0.1%, which is generally not realistic.
If we take this approach, would you like us to store the file somewhere under test/ in this repo, in a separate (protected) branch, or in a separate repo under your organization?
from mikado.
Maybe there is a very simple solution to provide those tests with minimal work. I will try it later today and report my results...
from mikado.
That sounds like a solid plan. I could help create additional tests in VanillaJS to match the tasks accomplished by Mikado. I worry that the speed will vary greatly across different machines and browsers, so we will most likely need to compare the performance against vanilla tasks for proper benchmarking.
from mikado.
Vanilla tasks might not work, because Mikado is faster than a vanilla implementation.
(If I were to implement a vanilla version, I would pretty much end up with the light version of Mikado.)
from mikado.
Probably it is the first rendering lib which has this attribute.
This has happened several times in the past. We've had other libraries run faster than Vanilla JS in the last 4 months. The Vanilla JS implementation is maintained by people, so as we learn new techniques we improve it. But I agree with the shortcoming of this approach.
I think you have much more control over the JS execution timing with the approach suggested. Every browser update changes things, even proportionally between a library and VanillaJS in the JS Framework Benchmark. I've watched the same version of Solid be in front of both Vanilla implementations (a few weeks after Mikado was submitted).
I've been known to just submit a different patch version without any relevant code changes just to have a shot at Stefan re-running the test. After this Chrome 78 test I was really tempted: really bad runs for Solid on select row (you can tell by the box plots).
In any case, it isn't something I think you could depend on for this purpose; too close to the threshold. It's a cool idea nonetheless. I've thought about it too. There are similar issues with PRs to Solid: people want to help out, but they don't want to mess with performance, since they know I benchmark everything to tweak before I release, even if it is some abstract feature. I'd be super interested if you guys figure something out here. I think many performance libraries would be.
from mikado.
@ryansolid I appreciate your opinion very much. "Probably it is the first rendering lib which has this attribute." You are right, that could be misunderstood. Theoretically a vanilla implementation is always faster. Theoretically. If I tried to provide a vanilla implementation for the test cases of this benchmark, I would end up with either:
- the current mikado library code
- a completely unrolled implementation, which hugely increases the file size (required lines of code)
And that is something new: a vanilla implementation would probably not provide any benefit.
from mikado.
Right, that makes sense. The difference being that the base Mikado implementation already doesn't do anything unnecessary, so you'd basically just be flattening out that behavior, which would end up as more code unless you refactored it back into a library. Most other libraries do at least some amount of unnecessary work related to how they generalize their data, reconcile, etc.
from mikado.
Yes, that's true. But luckily we have build flags for this purpose. The reduced build includes very little additional code, at least nothing which has any impact on performance, e.g. an additional loop.
from mikado.
We could lock browser versions for performance tests, bumping them once a month or a couple of times a year, as needed.
I'm not sure how to limit the performance of the virtual environment itself, but I could do something like write a basic browser in Qt5 with a performance limit built in; that way we could use the sandbox to run tests and Mikado would always perform the same, no matter the hardware/VM. I'll ask around regarding that; if we find a way, we could start a battle of frameworks.
As for thresholds, we could run each test 10 times to reduce the number of false positives.
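Taking the median of repeated runs is one common way to make that 10x repetition robust against outliers (just a sketch of the idea; the project itself uses a "keep best run" strategy, as mentioned above):

```javascript
// Median of n repeated measurements: less sensitive to one-off CI hiccups
// than a single run or a plain average.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Repeat a (synchronous) benchmark function `runs` times, report the median.
function benchMedian(fn, runs = 10) {
  const samples = [];
  for (let i = 0; i < runs; i++) samples.push(fn());
  return median(samples);
}

module.exports = { median, benchMedian };
```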
from mikado.