
MichaReiser commented on May 24, 2024

I like your enthusiasm, and you come up with many great ideas!

First, I want to explain why the library intentionally splits the static transpilation into a WebPack plugin and a Babel plugin. The former is used to create a single bundle for a web environment. However, the primary work is done in the Babel plugin, which makes it easy to write plugins for other bundlers. For Node, I don't think we need a WebPack plugin, since the situation there is entirely different. On the web, it is preferable to load as many scripts upfront as possible to reduce the number of requests (code splitting is something for the future). In Node.js, by contrast, requiring new scripts at runtime is quite common. So I believe that for Node.js, a simple per-file transpilation step implemented in Babel is sufficient: it generates a new file containing the code to execute in the background.
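To make the per-file step concrete, here is a toy sketch of what such a Babel-driven transpilation could emit. `generateWorkerModule` is a hypothetical helper (not part of parallel.es): given the source text of a function, it produces the text of a companion module that runs the function in a background thread via Node's `worker_threads`.

```javascript
// Hypothetical sketch: emit the source of a generated background-worker file
// for one extracted function, as a per-file Babel transform might do.
function generateWorkerModule(fnName, fnSource) {
  return [
    "const { parentPort } = require('worker_threads');",
    `const ${fnName} = ${fnSource};`,
    "parentPort.on('message', (args) => {",
    `  parentPort.postMessage(${fnName}(...args));`,
    "});",
  ].join('\n');
}

// Example: the function text a Babel visitor might extract for a marked call.
const generated = generateWorkerModule('square', '(x) => x * x');
console.log(generated);
```

The main module would then spawn a `Worker` pointing at the generated file and exchange messages with it; the sketch only covers the code-generation half.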

Next, some comments on your suggestion. I'm not going to say it is impossible; I think it can be implemented to some degree. But static program analysis is quite limited and is often a balance between being conservative (favoring correctness) and reducing the number of false negatives. Furthermore, statically analyzing JavaScript tends to be harder than analyzing statically typed languages (or especially functional languages). But with enough time at hand, a workable and usable implementation should be possible (though I would start with supporting Node first ;))

Then, based on the CPU and memory available, a Webpack plugin could go into the code and rewrite functions to use parallel-es. It would ignore simple, fast functions, because for those the up-front cost of parallelizing is greater than the cost of running them single-threaded. The optimization could be based purely on heuristics, it could run multiple times to search for the best solution (like the automated machine learning resource I posted before), or it could combine both.
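The "ignore simple, fast functions" heuristic can be sketched with a basic break-even check. The overhead constant below is an illustrative assumption, not a measured value: parallelizing only pays off when the saved computation time exceeds the fixed cost of dispatching work and transferring results.

```javascript
// Illustrative sketch of the break-even heuristic. WORKER_OVERHEAD_MS is an
// assumed constant for worker dispatch plus result transfer.
const WORKER_OVERHEAD_MS = 5;

function shouldParallelize(perItemCostMs, itemCount, workers) {
  const serial = perItemCostMs * itemCount;
  const parallel = serial / workers + WORKER_OVERHEAD_MS;
  return parallel < serial;
}

console.log(shouldParallelize(0.001, 100, 4)); // tiny loop: false, not worth it
console.log(shouldParallelize(2, 10000, 4));   // heavy loop: true, worth it
```

A real plugin would feed this decision with measured or estimated costs rather than hard-coded ones, but the shape of the decision is the same.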

Rewriting loops is possible. There are also approaches to estimate the cost of a function without having to run it; however, actually executing the function should yield more precise figures. But the program needs a way to determine the data over which a loop iterates. So probably the simplest approach is to instrument the whole application (e.g. using babel-plugin-istanbul, which would need to be extended to record runtimes, or perhaps the profiling API of browsers) and let the user run the application. After the run, collect the results and identify the most costly loops. Now comes the next difficulty: the static analysis needs to guarantee that a rewrite neither changes the semantics nor introduces any data races or race conditions (e.g. if the same array index is accessed from different threads). It can also be more subtle. What if a race condition is caused by an overridden method in a subtype? The subtype might not be available at compile time, or the test run might not contain any instance of the subtype.
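One such safety check can be sketched directly: record which array indices each loop iteration writes (via static analysis or an instrumented test run), and only allow a parallel rewrite when the write sets are pairwise disjoint. The input format below is a simplifying assumption; real instrumentation would have to produce it.

```javascript
// Sketch of a write-set disjointness check: a loop is only a candidate for
// parallel rewriting if no two iterations write the same array index.
function writesAreDisjoint(writesPerIteration) {
  const seen = new Set();
  for (const writes of writesPerIteration) {
    for (const index of writes) {
      if (seen.has(index)) return false; // two iterations touch the same slot
      seen.add(index);
    }
  }
  return true;
}

// out[i] = f(a[i]): each iteration writes its own index, safe to parallelize.
console.log(writesAreDisjoint([[0], [1], [2]])); // true
// Iterations writing out[i] and out[i - 1] overlap: unsafe.
console.log(writesAreDisjoint([[0], [0, 1], [1, 2]])); // false
```

Note that this only covers the easy case; as mentioned above, dynamic dispatch through overridden methods can hide writes that no test run ever exercises.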

It might be easier to first focus on automatically rewriting lodash- (or underscore-) based loops before creating a program that can rewrite arbitrary loops.
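Lodash calls make good rewrite targets because `_.map(data, fn)` already names its input and its per-item function. A rewrite could replace it with a chunked variant like the sketch below; here the chunks are processed in-process so the example stays self-contained, whereas a real rewrite would ship each chunk to a worker.

```javascript
// Sketch of the rewrite target for a lodash-style _.map call: split the input
// into one chunk per worker, map each chunk, and concatenate the results.
function chunk(array, parts) {
  const size = Math.ceil(array.length / parts);
  const chunks = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

function parallelMapSketch(array, fn, workers = 4) {
  // A real implementation would dispatch each chunk to a background worker;
  // this sketch maps serially to keep the example runnable anywhere.
  return chunk(array, workers).flatMap((part) => part.map(fn));
}

console.log(parallelMapSketch([1, 2, 3, 4, 5], (x) => x * 2)); // [2,4,6,8,10]
```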

Optimizing based on hardware has a lot of advantages in a Node.js environment, where the hardware is known. But there may still be a lot to gain in a browser-based environment: in most cases we can probably count on having 2-4 cores, with graceful fallbacks. It's something to explore.

To be honest, I have never worked on a project where the hardware was the limiting factor, most certainly because the majority of applications I have written are "boring" web applications. So it seems you are more experienced in this field.

However, since static program analysis is limited in precision (or you have to wait forever), this subject is quite complicated, and a runtime approach might be easier to achieve. Furthermore, I believe that such a tool might be non-trivial to use, and that maintaining the saved configuration could be difficult (what if a programmer changes the program so that the line numbers no longer match?). If you take a look at Java or C#, both offer an API similar to parallel.es but use a smart scheduling strategy to determine the optimal number of threads at runtime. So I think it would be interesting to see how much the performance can be improved by a "smarter" scheduling strategy, potentially with a work-stealing approach.
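The work-stealing idea can be illustrated with its core data structure. This is a single-threaded model of the deque only, not a working scheduler: each worker pushes and pops at one end of its own deque (LIFO, for cache-friendly fresh work), while idle workers steal from the opposite end (FIFO, taking the oldest and typically largest tasks).

```javascript
// Minimal single-threaded sketch of a work-stealing deque.
class WorkStealingDeque {
  constructor() { this.tasks = []; }
  push(task) { this.tasks.push(task); }  // owner adds newest work
  pop() { return this.tasks.pop(); }     // owner takes newest (LIFO)
  steal() { return this.tasks.shift(); } // thief takes oldest (FIFO)
  get size() { return this.tasks.length; }
}

const owner = new WorkStealingDeque();
['a', 'b', 'c'].forEach((t) => owner.push(t));
console.log(owner.pop());   // 'c': the owner works on the freshest task
console.log(owner.steal()); // 'a': a thief grabs the oldest task
console.log(owner.size);    // 1
```

A real implementation in JavaScript would need message passing between workers (or `SharedArrayBuffer`) since threads do not share the deque's memory by default.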

I would suggest splitting the problem into smaller problems and addressing each separately. This allows making use of the improvements even if the end goal is never reached (because of time or technical difficulties). Furthermore, I suggest first finding use cases that show the benefits of each of the described approaches. For example, how much does performance improve if the optimal number of threads is determined upfront, compared to by an algorithm run at runtime? For this analysis, the optimum can be determined by hand. Or: what are examples of code fragments that the Babel plugin should recognize and rewrite?
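Determining the optimum by hand could look like the sketch below: under a simple assumed cost model (linear work split across workers, plus a fixed per-worker overhead), exhaustively try each candidate worker count and keep the best. A runtime scheduler's choice can then be compared against this known optimum.

```javascript
// Sketch of the hand analysis: exhaustive search for the optimal worker count
// under an assumed cost model (the constants are illustrative, not measured).
function estimatedTimeMs(items, perItemMs, workers, overheadPerWorkerMs) {
  return (items * perItemMs) / workers + workers * overheadPerWorkerMs;
}

function optimalWorkers(items, perItemMs, overheadPerWorkerMs, maxWorkers) {
  let best = 1;
  for (let w = 2; w <= maxWorkers; w++) {
    if (estimatedTimeMs(items, perItemMs, w, overheadPerWorkerMs) <
        estimatedTimeMs(items, perItemMs, best, overheadPerWorkerMs)) {
      best = w;
    }
  }
  return best;
}

// 10000 items at 1 ms each, 50 ms overhead per worker, up to 8 workers.
console.log(optimalWorkers(10000, 1, 50, 8)); // 8
```

The interesting experiment is then how close a runtime strategy gets to this number without knowing the cost model in advance.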

So, I believe your ideas are fascinating and worth further exploration. But since time is always a crucial factor, it might be necessary to start small (at least on my side).

Cheers,
Micha

from parallel.es.
