
Comments (16)

snshn avatar snshn commented on May 17, 2024 1

Here's what I found on performance regression testing so far:

Option 1

https://developers.google.com/web/tools/lighthouse/ (got there from https://stuartsandine.com/lighthouse-circle-ci/)

It may be possible to use that dummy Todo app (http://todomvc.com/) to measure Mikado's performance with Lighthouse, or to put together our own app from scratch. However, that approach seems better suited to testing the performance of whole apps.

Option 2

https://github.com/krausest/js-framework-benchmark (https://krausest.github.io/js-framework-benchmark/current.html)

From that project's README it seems possible to run the testing framework against just one target UI framework. It may be worth running both Mikado and VanillaJS, ensuring that we have a baseline to measure against (e.g. if Travis CI's servers become more or less performant, we don't want to get false positives/negatives).

I could try to be social and reach out to @krausest for advice; he may be able to help Mikado adopt his testing framework by providing JSON output instead of just HTML tables... that would only require one simple Node.js script to parse the VanillaJS and Mikado JSON outputs and compare their performance.
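Such a comparison script really could stay tiny. The sketch below assumes a purely hypothetical JSON result shape ({ benchmarks: [{ name, mean }] }); the actual js-framework-benchmark output format may differ:

```javascript
// Sketch: compare hypothetical JSON benchmark outputs for Mikado and VanillaJS.
// The result shape ({ benchmarks: [{ name, mean }] }) is an assumption, not the
// real js-framework-benchmark format.

// Compute Mikado's slowdown factor relative to VanillaJS per benchmark.
function compareResults(mikado, vanilla) {
  const vanillaByName = new Map(vanilla.benchmarks.map((b) => [b.name, b.mean]));
  return mikado.benchmarks.map((b) => ({
    name: b.name,
    factor: b.mean / vanillaByName.get(b.name), // 1.0 = as fast as vanilla
  }));
}

const report = compareResults(
  { benchmarks: [{ name: "create rows", mean: 110 }] }, // fabricated numbers
  { benchmarks: [{ name: "create rows", mean: 100 }] }
);
console.log(report); // factor 1.1: Mikado 10% slower in this fabricated sample
```

A CI step could then fail the build whenever any factor drifts past an agreed threshold.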

from mikado.

ts-thomas avatar ts-thomas commented on May 17, 2024 1

Thanks for the research.

I know Lighthouse and use it often, but it can't be used for this kind of benchmark. We could still use it additionally to gather further information. A todo app would involve too big a workflow. The tests I made are very fine-grained and focus on one very specific job; that's an important basis for meaningful results. Especially if we use these results as a reference during development, the round trip through a todo app would add too much noise.

I like the js-framework-benchmark very much and I also took a deeper look into it. At the moment we can't use this tool, because it doesn't meet the minimum requirements: getting back meaningful feedback which could be used as a benchmark reference is currently not possible with it. I made some suggestions for improvements yesterday, but it's not clear whether they will be implemented. In any case, the async concept, which this test tool will always keep, makes it unusable as a benchmark reference tool.

The simplest solution I came up with is:

  • we already have fine-grained tests
  • we already have a simple test environment which runs each test in an iframe (sandbox) and communicates the results back
  • we already have a test environment which runs through chromedriver
  • so we just need to extend/copy the existing test environment, run those tests via chromedriver, and benchmark each new build/version against a reference version (the reference version gets replaced whenever a new version improves performance)
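The reference-version step in that last bullet boils down to a small comparison. This sketch uses made-up names and the "points out of 1000" scoring mentioned later in the thread; none of it is the actual Mikado test code:

```javascript
// Sketch of the reference-version check described above. Scores are
// hypothetical "points out of 1000"; names and tolerance are illustrative.
const TOLERANCE = 20; // maximum allowed drop vs. the reference version

// Returns whether the build passed and the (possibly updated) reference score.
function checkAgainstReference(referenceScore, newScore) {
  const passed = newScore >= referenceScore - TOLERANCE;
  // Replace the reference whenever a new version improves performance.
  const reference = newScore > referenceScore ? newScore : referenceScore;
  return { passed, reference };
}

console.log(checkAgainstReference(1000, 985));  // within tolerance, passes
console.log(checkAgainstReference(1000, 1010)); // faster: reference becomes 1010
```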


ts-thomas avatar ts-thomas commented on May 17, 2024 1

@snshn It is basically done and also deployed.

go to the test folder:

cd test

run benchmark test:

npm run test:bench

run all tests:

npm run test

example: https://travis-ci.org/nextapps-de/mikado

Sure, it isn't finalized; I'm a fan of scaling things along with the needs.

Currently the maximum allowed difference is 20; to be very safe we could lower this to 5 ... but that would also produce "false positives", and those would require a re-run.


ts-thomas avatar ts-thomas commented on May 17, 2024 1

The codebase is heavily optimized for Google V8 and the Blink browser engine. It would be better to use the very latest version; things may change in the future, and we shouldn't benchmark against an outdated reference environment. Currently each benchmark test runs two times; we can easily increase that value.

I currently use Puppeteer; if you like, you can switch to chromedriver directly, which isn't very complicated from the current base. test/bench/runner.js is the main script which initializes the test.


ts-thomas avatar ts-thomas commented on May 17, 2024 1

It's probably best to use WebDriver; it's pretty much the same, but we can also run it on Firefox and Safari...


ts-thomas avatar ts-thomas commented on May 17, 2024 1

I lowered the maximum allowed deviation to 5 of 1000 (0.5%), which is pretty small. Also, the repetition of the whole test was increased to 10 runs (with the option "keep best run" enabled) to produce fewer false positives. When we have something new, just open a new issue ticket and/or a pull request.
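The "repeat 10 times, keep best run" policy plus the 5-of-1000 deviation limit can be expressed in a few lines. The function names and the sample scores below are made up for illustration:

```javascript
// Sketch: repeat a benchmark N times, keep the best run, and compare it to a
// stored reference score with a small allowed deviation (5 of 1000 = 0.5%).
const RUNS = 10;
const MAX_DEVIATION = 5; // out of 1000

// Run the benchmark `runs` times and keep only the best score.
function bestOf(runBenchmark, runs = RUNS) {
  let best = -Infinity;
  for (let i = 0; i < runs; i++) best = Math.max(best, runBenchmark());
  return best;
}

// A regression is a best score that falls more than MAX_DEVIATION below reference.
function isRegression(referenceScore, bestScore) {
  return referenceScore - bestScore > MAX_DEVIATION;
}

// Fabricated scores: single runs are noisy, but the best of 10 is stable.
const scores = [990, 996, 1000, 987, 998, 995, 999, 992, 997, 994];
let i = 0;
const best = bestOf(() => scores[i++]);
console.log(best, isRegression(1000, best)); // 1000 false
```

Keeping the best run filters out one-off slow runs (GC pauses, CI load), which is exactly what produces false positives otherwise.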


ts-thomas avatar ts-thomas commented on May 17, 2024

Could you make this? Do you need anything to start with?


snshn avatar snshn commented on May 17, 2024

I'll try to find a way. There's this framework where Mikado scores first; we could record the performance results from running it, store them in the repo as a JSON/YAML file, then run it each time in the pipeline and make sure the new values don't exceed the existing ones by 0.5% or so. (There's always going to be a slight deviation every time you run it, so we can only throttle the performance creep... unless we make those tests terminal to the point where every new PR would be required to make Mikado faster by at least 0.1%, but that's generally not realistic.)

If we take this approach, would you like us to store the file within this repo somewhere under test/, within a separate (protected) branch, or within a separate repo under your organization?


ts-thomas avatar ts-thomas commented on May 17, 2024

Maybe there is a very simple solution to provide those tests with minimal work. I will try it later today and report my results...


snshn avatar snshn commented on May 17, 2024

That sounds like a solid plan. I could help create additional tests in VanillaJS to match the tasks accomplished by Mikado. I'm worried that the speed will vary greatly across different machines and different browsers, hence we will most likely need to compare performance against vanilla tasks for proper benchmarking.


ts-thomas avatar ts-thomas commented on May 17, 2024

Vanilla tasks might not work, because Mikado is faster than a vanilla implementation 😊. It is probably the first rendering lib which has this attribute. We need to use a reference version.

(If I were to implement a vanilla version, I would pretty much end up with the light version of Mikado.)


ryansolid avatar ryansolid commented on May 17, 2024

Probably it is the first rendering lib which has this attribute.

This has happened several times in the past; we've had other libraries run faster than Vanilla JS in the last 4 months. The Vanilla JS implementations are maintained by people, so as we learn new techniques we improve them. But I agree with the shortcoming of this approach.

I think you have much more control over the JS execution timing with the implementation you suggested. Every browser update changes things, even proportionally between a library and VanillaJS in the JS Framework Benchmark. I've watched the same version of Solid go from being in front of both Vanilla implementations (a few weeks after Mikado was submitted 😄) to diving down behind 4 other libraries on the list without any code changes. Some of it is run-to-run differences; some of it is proportional differences caused by browser changes. Some versions of Chrome are just slower than others, and in those cases I find all libraries are proportionally worse compared to Vanilla. When the browser update is more performant, all libraries proportionally perform better, but the weighting of the different tests affects how the overall scoring works out.

I've been known to just submit a different patch version without any relevant code changes just to have a shot at Stefan re-running the test. After this Chrome 78 test I was really tempted: really bad runs for Solid on select row (you can tell by the box plots).

In any case, it isn't something I think you could depend on for this purpose. Too close to the threshold. It's a cool idea nonetheless; I've thought about it too. There are similar issues with PRs to Solid: people want to help out, but they don't want to mess with performance, since they know I benchmark everything to tweak before I release, even if it is some abstract feature. I'd be super interested if you guys figure something out here. I think many performance libraries would be.


ts-thomas avatar ts-thomas commented on May 17, 2024

@ryansolid I appreciate your opinion very much. "Probably it is the first rendering lib which has this attribute." — you are right, that could be misunderstood. Theoretically, a vanilla implementation is always faster. Theoretically. If I tried to provide a vanilla implementation for the test cases of this benchmark, I would end up with either:

  • the current Mikado library code, or
  • a completely unrolled implementation which hugely increases the file size (required lines of code)

And that is something new: a vanilla implementation would probably not yield any benefit.


ryansolid avatar ryansolid commented on May 17, 2024

Right, that makes sense. The difference being that the base Mikado implementation already doesn't do anything unnecessary, so you'd basically just be flattening out that behavior, which would end up in more code unless you refactored it into a library. Most other libraries do at least some amount of unnecessary work related to how they generalize their data, reconcile, etc...


ts-thomas avatar ts-thomas commented on May 17, 2024

Yes, that's true. But luckily we have build flags for this purpose. The reduced build will include very little additional code, and at least nothing which has any impact on performance, e.g. an additional loop.


snshn avatar snshn commented on May 17, 2024

We could lock browser versions for performance tests, bumping them once a month or a couple of times a year, whenever needed.
I'm not sure how to limit the performance of the virtual environment itself, but I could do something like write a basic browser in Qt5 with a performance limit built in; this way we could use that sandbox to run the tests, and Mikado would always perform the same no matter what the hardware/VM is. I'll ask around regarding that; if we find a way, we could start a battle of frameworks 😂
As for thresholds, we could run each test 10 times to reduce the number of false positives.

