
purescript-backend-optimizer's Introduction

purescript-backend-optimizer

An optimizing backend toolkit for PureScript's CoreFn.

Overview

PureScript's built-in optimizer leaves a lot on the table by only performing naive syntactic rewrites in the JavaScript-specific backend. purescript-backend-optimizer consumes the compiler's high-level IR (CoreFn) and applies a more aggressive inlining pipeline (subsuming existing optimizations) that is backend agnostic.

It also ships with an alternative code generator that outputs modern ECMAScript with additional runtime optimizations, resulting in lighter, faster bundles.

The repository includes side-by-side examples comparing the input source, the output of purs, and the output of purs-backend-es for:

  • Lenses
  • Prisms
  • Variant
  • Heterogeneous
  • Uncurried CPS
  • Generics
  • Fusion (Fold)
  • Fusion (Unfold)
  • Recursion Schemes
  • HTML DSL
  • Imperative Loops

ECMAScript Backend

Install

npm install purs-backend-es

Usage

purs-backend-es requires PureScript 0.15.4 or greater. Add it as a backend in your spago.dhall.

+, backend = "purs-backend-es build"

You should likely only do this for a production build configuration, since optimization and code-generation are currently not incremental. For example, you can create a separate prod.dhall with the following:

./spago.dhall // { backend = "purs-backend-es build" }
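
You can then compile through this configuration with spago's -x flag, for example (a sketch; adjust to your own setup):

spago -x prod.dhall build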

By default, purs-backend-es will read corefn.json files from output, and generate code in output-es following the same directory pattern as the compiler's JS backend.

See the CLI help for options:

purs-backend-es --help

spago bundle-app is not compatible with purs-backend-es. To bundle your app, you can invoke purs-backend-es bundle-app. It supports the same CLI arguments as spago bundle-app.

spago build && purs-backend-es bundle-app --no-build

Notable Differences from purs

  • Uses arrow functions, const/let block scope, and object spread syntax.
  • Uses a much lighter-weight data encoding (using plain objects; see the sketch after this list) which is significantly faster to dispatch. By default, we use string tags, but integer tags are also supported via a CLI flag for further performance improvement and size reduction.
  • Newtypes over Effect and ST also benefit from optimizations. With general inlining, even instances that aren't newtype-derived benefit from the same optimizations.
  • TCO fires in more cases. For example, you can now write TCO loops over purescript-exists because the eliminator is inlined away.
  • TCO supports mutually recursive binding groups.
  • Optimized pattern matching eliminates redundant tests.
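
As a sketch of that plain-object encoding (based on the generated code shown in the issues further down this page; actual output may differ by version and flags), a data type like data Maybe a = Nothing | Just a compiles to roughly:

const $Maybe = (tag, _1) => ({ tag, _1 });
const Nothing = /* #__PURE__ */ $Maybe("Nothing");
const Just = value0 => $Maybe("Just", value0);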

Code size and performance improvements vary by use case, but we've generally observed:

  • 25-35% improvement in runtime.
  • 20-25% improvement in minified bundle size.
  • 15-20% improvement in minified+gzip bundle size.

Inlining Directives

The inliner follows some basic heuristics, but to get the most out of it you should configure inlining directives. An inlining directive tells the optimizer under what conditions it should inline a definition.

The following inlining directives are supported:

  • default - A definition is inlined using default heuristics (unspecified).
  • never - A definition is never inlined.
  • always - A definition is inlined at every reference.
  • arity=n - Where n is a positive integer, a definition is inlined when at least n arguments are applied.

An inlining directive may be applied to a top-level binding or top-level accessor.

Syntax

module Example where

import Prelude

myAdd :: Int -> Int -> Int
myAdd a b = a + b

The myAdd function would likely already be inlined since it is so small, but to guarantee that it is always inlined after two arguments are applied, you would write the following directive:

Example.myAdd arity=2

For instance methods, you should use named instances and a top-level accessor:

module Example where

import Prelude

data MyData = Zero | One

instance semigroupMyData :: Semigroup MyData where
  append = case _, _ of
    Zero, _ -> Zero
    _, Zero -> Zero
    _, _ -> One

The corresponding inlining directive would be:

Example.semigroupMyData.append arity=2

It's possible to refer to unnamed instances through their compiler-generated names; however, this is quite difficult to maintain.

Sometimes instances are parameterized by other constraints:

module Example where

import Prelude

data Product f g a = Product (f a) (g a)

instance functorProduct :: (Functor f, Functor g) => Functor (Product f g) where
  map f (Product a b) = Product (f <$> a) (f <$> b)

The corresponding inlining directive would be:

Example.functorProduct(..).map arity=2

Note the (..) between the name and the accessor, which will match applications of known instance dictionaries.

Configuration

Inlining directives can be configured in three ways:

Module-specific inlining directives via a module header

In any given module header you can add @inline comments with the above syntax:

-- @inline Example.myAdd arity=2
module AnotherExample where

import Example
...

Directives configured this way only apply to the current module.

Global inlining directives via a module header

In any given module header, you can add @inline export directives for definitions in the current module:

-- @inline export myAdd arity=2
-- @inline export semigroupMyData.append arity=1
module Example where
...

Directives configured this way apply to the current module and downstream modules.

Note: They must be defined in the module header due to an upstream compiler limitation.

Global inlining directives via a configuration file

You can provide a directive file to purs-backend-es:

purs-backend-es build --directives my-directives.txt

Each line should contain an inlining directive using the above syntax, with the additional support of -- line comments. These directives will take precedence over any defaults or exported directives, so you can tweak inlining for your project however you see fit.
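
For illustration, a directives file might look like the following sketch (the module and definition names are hypothetical):

-- my-directives.txt
-- Inline a small helper once both arguments are applied
MyApp.Util.myAdd arity=2
-- Never inline a large pretty-printer
MyApp.Render.prettyPrint never
-- Inline an instance method when it is applied to known dictionaries
MyApp.Types.functorThing(..).map arity=1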

Cheatsheet

Precedence applies in the following order (most specific to least specific):

  1. Module A's header, @inline directive for a definition from module B: affects module B's usages in module A
  2. Directives file: affects all modules
  3. Module A's header, @inline export directive for a definition in module A: affects module A's usages in all modules
  4. Default heuristics: affects all modules

Tracing Optimizations

purs-backend-es can also trace the rewrite passes taken when optimizing a top-level expression via the --trace-rewrites CLI arg. This may help in debugging an unexpected or non-optimal result.

Semantics

purescript-backend-optimizer consumes the PureScript compiler's high-level intermediate representation (IR) known as CoreFn. CoreFn has no defined evaluation semantics, but we operate under assumptions based on common use:

  • We make decisions on what to keep or discard using Fast and Loose Reasoning, assuming that CoreFn is pure and total.

    In practical terms, this means we may take the opportunity to remove any code that we know for certain is not demanded. However, at times we may also choose to propagate known bottoms. Thus, non-totality is considered undefined behavior for the purposes of CoreFn's semantics.

  • We preserve sharing of redexes under common assumptions of call-by-value semantics. Like non-totality, we consider a specific evaluation order to be undefined behavior in CoreFn. However, we assume that all terms under a redex should be in normal form.

    In practical terms, this means we will not delay function arguments that most would expect to be evaluated immediately.
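
As a hypothetical illustration of the sharing guarantee (the names below are not from the project), inlining twice does not duplicate its argument:

expensive :: Int -> Int
expensive n = n * n -- placeholder for some genuinely costly computation

twice :: Int -> Int
twice x = x + x

result :: Int -> Int
result n = twice (expensive n)

-- Even with twice inlined, the result is expected to be equivalent to
--   let x = expensive n in x + x
-- rather than expensive n + expensive n.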

purescript-backend-optimizer's People

Contributors

anttih, denisgorbachev, htmue, jam-awake, jordanmartinez, mikesol, monoidmusician, natefaubion, purefunctor, sigma-andex, unisay


purescript-backend-optimizer's Issues

Benchmark alternative Map operation implementations

Either as a new datatype, or using Map Internals.

Profiling shows the largest cost center is analysis (which makes sense), but specifically within Map insert and union, both of which could be improved performance-wise. Union in particular is a naive implementation which iterates over one Map, inserting each key into the other.
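
For reference, the naive union described above amounts to something like this sketch (not the project's actual code; it assumes Data.Map from ordered-collections):

import Data.FoldableWithIndex (foldlWithIndex)
import Data.Map (Map)
import Data.Map as Map

-- Walk one map and insert each key/value into the other.
naiveUnion :: forall k v. Ord k => Map k v -> Map k v -> Map k v
naiveUnion left right = foldlWithIndex (\k acc v -> Map.insert k v acc) right left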

CI currently crashes on Windows

See these builds:

[info] Build succeeded.
node:events:491
      throw er; // Unhandled 'error' event
      ^

Error: spawn spago ENOENT
    at Process.ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:485:16)
    at processTicksAndRejections (node:internal/process/task_queues:83:21)
Emitted 'error' event on ChildProcess instance at:
    at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
    at onErrorNT (node:internal/child_process:485:16)
    at processTicksAndRejections (node:internal/process/task_queues:83:21) {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'spawn spago',
  path: 'spago',
  spawnargs: [ 'build', '-u', '-g corefn' ]
}

Add option to remove pattern match failure assertions

In line with the main compiler, we insert pattern match failure assertions ($runtime.fail()) when there is no wildcard pattern. Since the pattern matching tree optimizer emits complete if/else trees, we should be able to omit the assertion when a branch covers all tags for a given ProperName.

This failure assertion is potentially useful for catching foreign errors sneaking through, so I think this could be an option like --int-tags.

Add support for re-exports in codegen.

Currently we do not emit re-exports in codegen (purs does). This is because nothing is really private. Through inlining, we need to be able to refer to top-level bindings that were not explicitly exported. This conflicts with the naming scheme used in the compiler's CoreFn CSE pass.

An example from Aff is try. It re-exports try, but the CSE pass also creates a top-level binding named try, which has a dictionary applied. We can't export both under the same name, and this leads to a duplicate export error.

In order to fix this, we need to rename private top-level bindings during codegen, probably with a $priv suffix.

Highly compactable output example

I compiled this code, part of a purs-tidy fork:

dropTriviallyUnnecessaryParens e =
  case e of
    ExprParens
      (Wrapped
        {value: e2
        , open: { leadingComments : [], trailingComments: [] }
        , close: { leadingComments : [], trailingComments: [] }
        }
      ) ->
      case e2 of
        ExprHole _ -> dropTriviallyUnnecessaryParens e2
        ExprIdent _ -> dropTriviallyUnnecessaryParens e2
        ExprBoolean _ _ -> dropTriviallyUnnecessaryParens e2
        ExprChar _ _ -> dropTriviallyUnnecessaryParens e2
        ExprString _ _ -> dropTriviallyUnnecessaryParens e2
        ExprInt _ _ -> dropTriviallyUnnecessaryParens e2
        ExprNumber _ _ -> dropTriviallyUnnecessaryParens e2
        ExprArray _ -> dropTriviallyUnnecessaryParens e2
        ExprRecord _ -> dropTriviallyUnnecessaryParens e2
        ExprParens _ -> dropTriviallyUnnecessaryParens e2
        ExprRecordAccessor _ -> dropTriviallyUnnecessaryParens e2
        _ -> e
    _ -> e

to this:

var dropTriviallyUnnecessaryParens = (dropTriviallyUnnecessaryParens$a0$copy) => {
  let dropTriviallyUnnecessaryParens$a0 = dropTriviallyUnnecessaryParens$a0$copy, dropTriviallyUnnecessaryParens$c = true, dropTriviallyUnnecessaryParens$r;
  while (dropTriviallyUnnecessaryParens$c) {
    const e = dropTriviallyUnnecessaryParens$a0;
    if (e.tag === "ExprParens") {
      if (e._1.close.leadingComments.length === 0) {
        if (e._1.close.trailingComments.length === 0) {
          if (e._1.open.leadingComments.length === 0) {
            if (e._1.open.trailingComments.length === 0) {
              if (e._1.value.tag === "ExprHole") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprIdent") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprBoolean") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprChar") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprString") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprInt") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprNumber") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprArray") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprRecord") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprParens") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              if (e._1.value.tag === "ExprRecordAccessor") {
                dropTriviallyUnnecessaryParens$a0 = e._1.value;
                continue;
              }
              dropTriviallyUnnecessaryParens$c = false;
              dropTriviallyUnnecessaryParens$r = e;
              continue;
            }
            dropTriviallyUnnecessaryParens$c = false;
            dropTriviallyUnnecessaryParens$r = e;
            continue;
          }
          dropTriviallyUnnecessaryParens$c = false;
          dropTriviallyUnnecessaryParens$r = e;
          continue;
        }
        dropTriviallyUnnecessaryParens$c = false;
        dropTriviallyUnnecessaryParens$r = e;
        continue;
      }
      dropTriviallyUnnecessaryParens$c = false;
      dropTriviallyUnnecessaryParens$r = e;
      continue;
    }
    dropTriviallyUnnecessaryParens$c = false;
    dropTriviallyUnnecessaryParens$r = e;
  }
  return dropTriviallyUnnecessaryParens$r;
};

which can be compacted down so much! I'm sure this would be a fun test case to develop new optimizations against :)

Optimizer crashes when inline annotation on value declaration is used with `never`

Given this snapshot:

-- @inline Snapshot.Prime.foo never
module Snapshot.Prime where

foo :: String
foo = "foo"

useFooPrime3 :: String
useFooPrime3 = foo

I get the following error:

[283 of 319] Building Snapshot.PrimOpString03
[284 of 319] Building Snapshot.Prime
file:///.../purescript-backend-optimizer/output/Effect.Aff/foreign.js:530
                throw util.fromLeft(step);
                ^

Error: Snapshot.Prime.useFooPrime3: Possible infinite optimization loop.
    at _crashWith (file:///.../purescript-backend-optimizer/output/Partial/foreign.js:4:9)
    at file:///.../purescript-backend-optimizer/output/Partial.Unsafe/index.js:8:16
    at _unsafePartial (file:///.../purescript-backend-optimizer/output/Partial.Unsafe/foreign.js:4:10)
    at Module.unsafeCrashWith (file:///.../purescript-backend-optimizer/output/Partial.Unsafe/index.js:7:12)
    at $tco_loop (file:///.../purescript-backend-optimizer/output/PureScript.Backend.Optimizer.Semantics/index.js:3928:55)
    at file:///.../purescript-backend-optimizer/output/PureScript.Backend.Optimizer.Semantics/index.js:3943:43
    at file:///.../purescript-backend-optimizer/output/PureScript.Backend.Optimizer.Convert/index.js:1506:235
    at file:///.../purescript-backend-optimizer/output/Data.Traversable/index.js:566:36
    at file:///.../purescript-backend-optimizer/output/Data.Traversable.Accum.Internal/index.js:31:34
    at file:///.../purescript-backend-optimizer/output/Data.Traversable/index.js:568:24

Node.js v18.7.0
[error] Tests failed: exit code: 1

The code builds if I remove the inline annotation:

--- @inline Snapshot.Prime.foo never
module Snapshot.Prime where

foo :: String
foo = "foo"

useFooPrime3 :: String
useFooPrime3 = foo

Runtime error with MonadEffect

I'm on version 1.1.1.

main :: forall m. MonadEffect m => m Unit
main = do
  logShow 1

generates this code:

(() => {
  // output-es/Effect.Console/foreign.js
  var log = function(s) {
    return function() {
      console.log(s);
    };
  };

  // output-es/Main/index.js
  var main = (dictMonadEffect) => dictMonadEffect.liftEffect(log("1"));

  // <stdin>
  main();
})();

which gives a runtime error, since dictMonadEffect is undefined.

Replacing main's type with Effect Unit generates this code:

(() => {
  // output-es/Effect.Console/foreign.js
  var log = function(s) {
    return function() {
      console.log(s);
    };
  };

  // output-es/Main/index.js
  var main = /* @__PURE__ */ log("1");

  // <stdin>
  main();
})();

which runs fine.

Name clash between module and constructor name

When a data constructor has the same name as an imported module, they also end up with the same name in the generated JavaScript, resulting in an error when run.

Here is a minimal example:

Bar.purs

module Bar where

import Prelude

-- This function does not do anything; it only needs to be complicated enough
-- that it won't be inlined, so the module it lives in gets imported instead
placeHolder :: Int -> String
placeHolder i | mod 2 i == 0 = "2"
placeHolder i | mod 3 i == 0 = "2"
placeHolder i | mod 5 i == 0 = "2"
placeHolder i | mod 7 i == 0 = "2"
placeHolder _ = "yipee"

Main.purs

module Main where

import Prelude

import Bar (placeHolder)
import Effect (Effect)
import Effect.Console (log)

data FooBar = Foo  | Bar

main :: Effect Unit
main = do
  -- use placeHolder to ensure it is actually in the generated JavaScript
  log $ placeHolder 3

Here Bar is both the name of a data constructor of the FooBar type and the name of a module. The generated JavaScript using purescript-backend-optimizer is:

import * as Bar from "../Bar/index.js";
import * as Effect$dConsole from "../Effect.Console/index.js";
const $FooBar = tag => tag;
const Foo = /* #__PURE__ */ $FooBar("Foo");
const Bar = /* #__PURE__ */ $FooBar("Bar");
const main = /* #__PURE__ */ Effect$dConsole.log(/* #__PURE__ */ Bar.placeHolder(3));
export {$FooBar, Bar, Foo, main};

Running this gives an error

✘ [ERROR] The symbol "Bar" has already been declared

    output-es/Main/index.js:5:6:
      5 │ const Bar = /* #__PURE__ */ $FooBar("Bar");
        ╵       ~~~

  The symbol "Bar" was originally declared here:

    output-es/Main/index.js:1:12:
      1 │ import * as Bar from "../Bar/index.js";
        ╵             ~~~

The default PureScript backend renames the imported module to "Bar_1" when generating the JavaScript, removing the name clash; doing the same seems like the easiest way to fix this issue.

JavaScript generated by the default backend:

import * as Bar_1 from "../Bar/index.js";
import * as Effect_Console from "../Effect.Console/index.js";
var Foo = /* #__PURE__ */ (function () {
    function Foo() {

    };
    Foo.value = new Foo();
    return Foo;
})();
var Bar = /* #__PURE__ */ (function () {
    function Bar() {

    };
    Bar.value = new Bar();
    return Bar;
})();
var main = /* #__PURE__ */ Effect_Console.log(/* #__PURE__ */ Bar_1.placeHolder(3));
export {
    Foo,
    Bar,
    main
};

purs-backend-es v1.4.2 was used.

Bundler outputs sorting function that hangs in an infinite loop

It looks like something about the List.sortBy code-gen'd function hangs in an infinite loop. I've copied the relevant code below. I edited the list by hand to make the repro more minimal - I'm pretty sure it's what the codegen would have output, but I'm not sure.

My preliminary analysis indicates that sequedesceascen doesn't get called with enough arguments, so mergeAll doesn't work.

The compiler's JS backend doesn't have this issue and seems to call descending and ascending with the correct number of arguments.

Let me know if I can help solve this!

const data = {"tag":"Cons","_1":{"version":{"tag":"Version","_1":0},"name":{"tag":"Just","_1":"intro 1"},"marker4Time":3.140395482159665,"marker4AudioURL":{"tag":"Nothing"},"marker3Time":2.350910584410028,"marker3AudioURL":{"tag":"Nothing"},"marker2Time":1.649146230854796,"marker2AudioURL":{"tag":"Nothing"},"marker1Time":0.947381,"marker1AudioURL":{"tag":"Nothing"},"column":7},"_2":{"tag":"Nil"}}

var ordNumberImpl = function (lt) {
    return function (eq) {
      return function (gt) {
        return function (x) {
          return function (y) {
            return x < y ? lt : x === y ? eq : gt;
          };
        };
      };
    };
  };
const $Ordering = tag => ({tag});
const LT = /* #__PURE__ */ $Ordering("LT");
const GT = /* #__PURE__ */ $Ordering("GT");
const EQ = /* #__PURE__ */ $Ordering("EQ");
const $List = (tag, _1, _2) => ({tag, _1, _2});
const Nil = /* #__PURE__ */ $List("Nil");
const Cons = value0 => value1 => $List("Cons", value0, value1);
const ordNumber = {compare: /* #__PURE__ */ ordNumberImpl(LT)(EQ)(GT), Eq0: () => eqNumber};
const sortingFunction = x => y => ordNumber.compare(x.marker1Time)(y.marker1Time);
const sortBy = cmp => {
    const merge = v => v1 => {
      if (v.tag === "Cons") {
        if (v1.tag === "Cons") {
          if (cmp(v._1)(v1._1).tag === "GT") { return $List("Cons", v1._1, merge(v)(v1._2)); }
          return $List("Cons", v._1, merge(v._2)(v1));
        }
        if (v1.tag === "Nil") { return v; }
        fail();
      }
      if (v.tag === "Nil") { return v1; }
      if (v1.tag === "Nil") { return v; }
      fail();
    };
    const mergePairs = v => {
      if (v.tag === "Cons") {
        if (v._2.tag === "Cons") { return $List("Cons", merge(v._1)(v._2._1), mergePairs(v._2._2)); }
        return v;
      }
      return v;
    };
    const mergeAll = mergeAll$a0$copy => {
      let mergeAll$a0 = mergeAll$a0$copy, mergeAll$c = true, mergeAll$r;
      while (mergeAll$c) {
        const v = mergeAll$a0;
        if (v.tag === "Cons") {
          if (v._2.tag === "Nil") {
            mergeAll$c = false;
            mergeAll$r = v._1;
            continue;
          }
          mergeAll$a0 = mergePairs(v);
          continue;
        }
        mergeAll$a0 = mergePairs(v);
        continue;
      };
      return mergeAll$r;
    };
    const sequedesceascen = sequedesceascen$b$copy => sequedesceascen$a0$copy => sequedesceascen$a1$copy => sequedesceascen$a2$copy => sequedesceascen$a3$copy => {
      let sequedesceascen$b = sequedesceascen$b$copy;
      let sequedesceascen$a0 = sequedesceascen$a0$copy;
      let sequedesceascen$a1 = sequedesceascen$a1$copy;
      let sequedesceascen$a2 = sequedesceascen$a2$copy;
      let sequedesceascen$a3 = sequedesceascen$a3$copy;
      let sequedesceascen$c = true;
      let sequedesceascen$r;
      while (sequedesceascen$c) {
        if (sequedesceascen$b === 0) {
          const v = sequedesceascen$a0;
          if (v.tag === "Cons") {
            if (v._2.tag === "Cons") {
              if (cmp(v._1)(v._2._1).tag === "GT") {
                sequedesceascen$b = 1;
                sequedesceascen$a0 = v._2._1;
                sequedesceascen$a1 = $List("Cons", v._1, Nil);
                sequedesceascen$a2 = v._2._2;
                continue;
              }
              sequedesceascen$b = 2;
              sequedesceascen$a0 = v._2._1;
              sequedesceascen$a1 = v1 => $List("Cons", v._1, v1);
              sequedesceascen$a2 = v._2._2;
              continue;
            }
            sequedesceascen$c = false;
            sequedesceascen$r = $List("Cons", v, Nil);
            continue;
          }
          sequedesceascen$c = false;
          sequedesceascen$r = $List("Cons", v, Nil);
          continue;
        }
        if (sequedesceascen$b === 1) {
          const a = sequedesceascen$a0, as = sequedesceascen$a1, v = sequedesceascen$a2;
          if (v.tag === "Cons") {
            if (cmp(a)(v._1).tag === "GT") {
              sequedesceascen$b = 1;
              sequedesceascen$a0 = v._1;
              sequedesceascen$a1 = $List("Cons", a, as);
              sequedesceascen$a2 = v._2;
              continue;
            }
            sequedesceascen$c = false;
            sequedesceascen$r = $List("Cons", $List("Cons", a, as), sequences(v));
            continue;
          }
          sequedesceascen$c = false;
          sequedesceascen$r = $List("Cons", $List("Cons", a, as), sequences(v));
          continue;
        }
        if (sequedesceascen$b === 2) {
          const a = sequedesceascen$a0, as = sequedesceascen$a1, v = sequedesceascen$a2;
          if (v.tag === "Cons") {
            if (
              (() => {
                const $8 = cmp(a)(v._1);
                return $8.tag === "LT" || !($8.tag === "GT");
              })()
            ) {
              sequedesceascen$b = 2;
              sequedesceascen$a0 = v._1;
              sequedesceascen$a1 = ys => as($List("Cons", a, ys));
              sequedesceascen$a2 = v._2;
              continue;
            }
            sequedesceascen$c = false;
            sequedesceascen$r = $List("Cons", as($List("Cons", a, Nil)), sequences(v));
            continue;
          }
          sequedesceascen$c = false;
          sequedesceascen$r = $List("Cons", as($List("Cons", a, Nil)), sequences(v));
          continue;
        }
      };
      return sequedesceascen$r;
    };
    const sequences = /* #__PURE__ */ sequedesceascen(0);
    const descending = /* #__PURE__ */ sequedesceascen(1);
    const ascending = /* #__PURE__ */ sequedesceascen(2);
    return x => mergeAll(sequences(x));
  };
  sortBy(sortingFunction)(data);

Issue in codegen for effect loops results in unintended fallthrough

Given the input:

test :: Effect (Maybe (Array String)) -> Effect Unit
test eff = do
  res <- eff
  case res of
    Nothing ->
      pure unit
    Just as ->
      foreachE as \a ->
        Console.log a

This generates:

const test = eff => () => {
  const res = eff();
  if (res.tag === "Nothing") { return; }
  if (res.tag === "Just") {
    for (const a of res._1) {
      Effect$dConsole.log(a)();
    }
  }
  $runtime.fail();
};

Which has an unintended fallthrough to the call to fail. Without some of the syntactic optimizations, this generates:

const test = eff => () => {
  const res = eff();
  if (res.tag === "Nothing") { return; }
  if (res.tag === "Just") {
    return (() => {
      for (const a of res._1) {
        Effect$dConsole.log(a)();
      }
    })();
  }
  $runtime.fail();
};

Which correctly returns, but is unnecessary. There is a rewrite which inlines this block within the if "true" branch, which is incorrect. Instead we should inline the loop block, but then tack on a return after.

Add more inlining directives from across core.

We currently ship default directives for Prelude which encourage a lot of the expected optimizations. We should assess if there are more things in core that would benefit users by being in the default set.

Currently I only want to consider core packages since those packages are unlikely to ship directive exports themselves.

Inline method calls via a directive

Hi,

First of all, thank you for this great work on purescript-backend-optimizer.

I thought about this proposal but I'm not very familiar with purescript-backend-es so it might be a bad idea.

My problem is that purescript-backend-es can inline PureScript functions but cannot inline FFI functions (written in JS). But very often (in my experience), an FFI function in PureScript is just a call to a method in JS. For example:

foreign import someFuncImpl :: EffectFn4 ObjType Val1Type Val2Type Val3Type RetType

export function someFuncImpl(obj, val1, val2, val3) {
  return obj.someMethod(val1, val2, val3);
}

Would it be possible to inline this function by adding a (new) directive like

-- @inlineMethod Example.someFuncImpl "someMethod" arity=3

Unpack TCO loops

It would be nice to unpack TCO loops, such that TCO arguments that are always known constructors don't need allocations. This would give us something comparable to call-pattern-specialization in the TCO pass. We essentially already do this for mutually recursive TCO bindings, as a special case.

It doesn't strictly need to be a TCO pass, as it would obviously be nice to have this even when the backend doesn't need TCO. It just seems like a straightforward extension for now.

Additionally, by doing it as part of codegen, we can avoid an explosion of specialized bindings (one for every specialization of constructors).
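
As a hypothetical illustration (the function below is made up), the accumulator here is always a known Tuple constructor, so an unpacked TCO loop could carry its two fields as separate locals instead of allocating a Tuple on every iteration:

import Prelude
import Data.Tuple (Tuple(..))

sumAndCount :: Int -> Tuple Int Int
sumAndCount = go (Tuple 0 0)
  where
  go acc n
    | n <= 0 = acc
    | otherwise = case acc of
        Tuple s c -> go (Tuple (s + n) (c + 1)) (n - 1)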

createProcess: posix_spawnp: does not exist

In purescript-parsing I created a spago-backend-es.dhall like so

./spago-dev.dhall // { backend = "purs-backend-es build" }

and ran

spago -x spago-backend-es.dhall test

and it errored out with:

...
[274 of 276] Building Type.Function
[275 of 276] Building Type.Row
[276 of 276] Building Type.Prelude
spago: purs-backend-es build: createProcess: posix_spawnp: does not exist (No such file or directory)

I'm investigating. Versions:

$ purs-backend-es --version
v1.1.0
$ purs --version
0.15.4
$ spago --version
0.20.9
$ node --version
v18.2.0

TCO can fail to trigger on Boolean yielding branches

Given a TCO function like:

go a = if f a then go (a + 1) else false

The optimizer will simplify this to the equivalent of

go a = f a && go (a + 1)

Which TCO doesn't understand. We should either not simplify this case or teach TCO to understand this. If we choose to not simplify this case, we should move this simplification rule to the JS backend after TCO. It's difficult to simplify this conditionally on the recursive call, because bottom-up we don't have the context to know that go is recursive unless we thread the environment through to build, which is not ideal.

@MonoidMusician @f-f Do you know if erl and scheme will eliminate the boolean and as a tail call? If not, then I'll probably move this rule to JS.

Case expressions with partial record binders incomplete

In case expressions with partial record binders, not all cases are implemented.

This code:

test1 :: { a :: Int, b :: Int } -> Int
test1 = case _ of
  { a } | a > 0 -> a
  { b } | b > 0 -> b
  _ -> 0

leads to this implementation:

const test1 = v => {
  if (v.a > 0) { return v.a; }
  return 0;
};

The second case is omitted.

Correct would be, of course:

const test1 = v => {
  if (v.a > 0) { return v.a; }
  if (v.b > 0) { return v.b; }
  return 0;
};

More examples:

test2 :: { a :: Int, b :: Int } -> Int
test2 = case _ of
  { a } | a > 0 -> a
  { a: _, b } | b > 0 -> b
  _ -> 0

test3 :: { a :: Int, b :: Int } -> Int
test3 = case _ of
  { a, b: _ } | a > 0 -> a
  { b } | b > 0 -> b
  _ -> 0

test4 :: { a :: Int, b :: Int } -> Int
test4 = case _ of
  { a, b: _ } | a > 0 -> a
  { a: _, b } | b > 0 -> b
  _ -> 0

test5 :: { a :: Int, b :: Int } -> Int
test5 = case _ of
  { a, b }
    | a > 0 -> a
    | b > 0 -> b
  _ -> 0

All 5 tests should lead to the same implementation, but only test4 and test5 get compiled correctly.

Build with source maps?

When running spago -x prod.dhall build --purs-args '-g sourcemaps' the error below is thrown:

[error] Can't pass `--codegen` option to build when using a backend
[error] Hint: No need to pass `--codegen corefn` explicitly when using the `backend` option.
[error] Remove the argument to solve the error

Is it possible to generate source maps when using this backend?

prod.dhall contains

./spago.dhall // { backend = "pnpm purs-backend-es build" }

Configurable inlining heuristics

It would be useful to support a configurable level of heuristics, such that a user could tune down inlining (prioritizing bundle size). Architecturally, this means that we would need to support customization of our various heuristic guards, which are currently hard coded.

Add inline signatures for parameterized dictionaries

Currently, it's difficult to anticipate how inlining annotations should work when your instance dictionary takes arguments:

instance myFooBind :: Monoid m => Bind (MyFoo r) where
  bind ...

There's no way to attach an inlining annotation to the bind method of this instance. With the compiler's CSE pass, and trivial inlining of the bind dispatch, this will get hoisted to a top-level:

const bind1 = (() => MyFoo.myFooBind(Data$dUnit.unit).bind)();

Prompting some to want to annotate bind1, which is a compiler generated binding. This is not ideal.

Right now, we currently only support annotations referenced by top-level name, or top-level name and accessor. We could also support top-level name, application spine, and accessor, which would let us reference this pattern, and thus bind1 could receive the expected inline annotation without having to reference the compiler generated name for that binding.
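
Under that proposal, the annotation could presumably reuse the (..) application-spine syntax documented earlier in this README, for example (illustrative only):

MyFoo.myFooBind(..).bind arity=1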

Propagate constructor refinements in branches

Given code like:

foo expr = case expr of
  Baz _ -> 1
  Qux _ -> 2

bar expr = case expr of
  Baz _ -> 1 + foo expr
  Qux _ -> 2 + foo expr

Where foo is inline always, one would hope that the branches would fuse together, yielding an optimized:

bar expr = case expr of
  Baz _ -> 2
  Qux _ -> 4

But this doesn't currently happen, since expr is opaque. To make this work we would need to propagate refinement information in each branch on the opaque term, saying that in the Baz branch, any subsequent OpIsTag operation on expr can be statically compared against Baz.

One way to do this would be to change SemConditional. Currently it is:

data SemConditional a = SemConditional (Lazy a) (Lazy a)

Which means that the branch is completely closed wrt evaluation, and so can't admit any new refinement information. We could change that so the branch is a function instead, taking some refinement:

data SemConditional a = SemConditional (Lazy a) (Refinement -> a)

I'm not sure if this information should just be tracked in a Map in the Env, or if the locals could potentially be updated in such a way that derefing the binding in that branch can yield a term that fits the refinement.

`unsafeThaw` inlining breaks Array.length checks with new SemRef handling.

Noticed in codegen of [email protected].

There's a backend specific inlining rule for unsafeThaw which causes it to inline into a call to pure.

data_array_st_unsafeThaw = Tuple (qualified "Data.Array.ST" "unsafeThaw") unsafeSTCoerce

This interacts poorly with the new SemRef tracking we added, because the optimizer now considers something like:

example = ST.run do
  arr <- unsafeThaw [1, 2, 3]
  ...

To be a normal immutable array binding for the purposes of analysis. arr here is known to have a length of 3. This breaks Data.Array.nubBy, which uses the equivalent of Array.length <$> unsafeFreeze arr in a loop.
https://github.com/purescript/purescript-arrays/blob/72861214e541f3e072041d193f32ce66af9a456b/src/Data/Array.purs#L1121

The optimizer will treat that call to Array.length as a known constant even though it's changing.

It's arguable that this is within the realm of unsafety. This function is using a lot of unsafeness, after all. But I think since unsafeThaw is a foreign implementation, this is on us. If unsafeThaw had been implemented in pure PureScript as unsafeCoerce then I would consider this a bug in arrays.

This doesn't break nubBy on 7.2.1 due to other issues, specifically, last no longer inlines.

Add tracer to optimizer to see how original expression is transformed to optimized expression

General Problem

Currently, we don't have a way of investigating the exact steps the optimizer took to convert some original expression to its optimized form. As a result, it's harder for us to know how to improve the heuristics (#10) or how one change propagates through to some other change. Similarly, when one is trying to optimize a specific expression and it's not producing what is expected, it's hard to know why. Ideally, there would be an option in this project that would enable such tracing.

Design Questions

  1. In adding a tracer feature, should this feature be exposed by adding it to an existing command (e.g. build/bundle)? Or should it be its own command (e.g. purs-backend-es trace)? I assume the latter for a few reasons:
    • I imagine there will be a lot of different configuration values defined via CLI args. A dev who calls purs-backend-es build likely doesn't care about that as the dev is just trying to produce an optimized build. Moreover, the flags/args may spam the docs for the build command, which would otherwise remain simple if these two are kept separate.
    • A trace command doesn't need to produce any build output. If anything, the trace is the output of the command.

Potential User Workflows

There's a few workflows that this feature could support.

  1. For a single top-level identifier, allow a user to see the original expression, the optimized expression, and the steps taken to optimize it. In other words, is this identifier being optimized as much as it could be? Or are there optimizations not firing here that should be firing?
  2. For all or some select set of identifiers (host identifier) using a specific top-level identifier (target identifier), allow a user to see the original expression, the optimized expression, and the steps taken to optimize it for each one of the host identifiers. In other words, where does the target identifier get used in host identifiers, does the target identifier get inlined, why or why not, and how well does the target identifier inline?
  3. For all type class instances of a given type class, see how well they optimize (1 above) or how well they inline (2 above), as this is often where inlining would be useful.
  4. For all data constructors of a given type, same as 3 above.
  5. For a specific let/where binding within a top-level identifier, see how well it optimizes and/or how well it inlines into its usage site within the top-level identifier.

Implementing tracing using 'best practices'

Tracing at its heart is a form of logging. To prevent spamming this codebase with a bunch of log statements that force us to pass in additional context so as to properly format/display the exact String there, I propose we use Contravariant Logging (explained in "co-log: Composable Contravariant Combinatorial Comonadic Configurable Convenient Logging") via the library purescript-logging to achieve this goal. I read through the post yesterday and wrote myself a cheat sheet of the various functions and my current understanding of their usages.

The general idea of Contravariant Logging has similarities to the "Imperative Shell, Functional Core" idea. In the "Functional Core", one passes to a callback function the values that make sense within that particular context. In other words, one does not pass the context into functions just for logging. Rather, functions pass in the values to the callback, and the callback adds context over time. As one moves from the "Functional Core" towards the "Imperative Shell", additional context is added to the callback's input function via cmap. divide logs a large amount of information by splitting it into smaller pieces and logging each separately. One can decide what to log and what to ignore via choose/cfilter in contexts where it makes sense to do so. Once in the "Imperative Shell", one defines the actual callback that logs the now-context-rich value in whatever way they want.

When I traced the control flow from purs-backend-es to where optimize is called, it took the following steps. Along the way, I note where we can add information to the context of the tracer via cmap:

  • purs-backend-es
  • purescript-backend-optimizer
    • buildModules folds through each module via go (where binding), using the accumulated value in the next fold. However, it doesn't currently return any data from the accumulated value.
      • in buildModules's go binding, we could actually run the callback in the line above options.onCodegenModule as we're in a monadic context and can use effects like Console.log.
      • in buildModules's go binding, we could add the module's name to the context via cmap: Record.insert "moduleName" moduleName >$< logger.
    • go calls the pure function, Optimizer.Convert.toBackendModule, on every module.
    • toBackendModule via moduleBindings (where binding) calls Optimizer.Convert.toBackendTopLevelBindingGroups, which is a preprocessing step to determine whether a binding group is recursive or not.
    • toBackendTopLevelBindingGroups calls Optimizer.Convert.toBackendTopLevelBindingGroup, which converts either a recursive or non-recursive top-level binding group.
      • in toBackendTopLevelBindingGroup, we could add to the logger's context the following information:
        • what other identifiers are included in the group: Record.insert "recursiveGroup" (NonEmptyArray.fromArray bindingsInGroup) >$< logger
        • whether the group is recursive or not: either via Record.insert "recursive" isRecursive >$< logger or by deriving that information from the previous one
    • toBackendTopLevelBindingGroup calls Semantics.optimize, where a single top-level identifier is optimized.
    • Finally, optimize starts a "quote-eval" loop that iteratively optimizes the expression until the analysis indicates that no more rewriting is possible.
    • in optimize, we could log our main values:
      • the original expression before the loop
      • the final expression after the loop
      • a list of steps taken here to go from the original to the optimized expression by logging:
        • the expression at the loop step's start (i.e. before)
        • the expression at the loop step's end (i.e. after)
        • the loop step count

Or put differently, I think the "application-level" type we can log would be:

type OptimizationStep =
  { before :: Expr -- expression before this loop step
  , after :: Expr -- expression after this loop step
  , index :: Int -- which loop step this is
  , rewrite :: Boolean -- whether we will continue the loop
  }

type LogType = 
  { filePath :: FilePath
  , moduleName :: ModuleName
  , identifier :: Ident
  , recursiveGroup :: Maybe (NonEmptyArray Ident)
  , originalExpr :: Expr
  , optimizedExpr :: Expr
  , steps :: List OptimizationStep
  -- these latter two values may require an `analysisOf` call
  -- to determine what other identifiers this one depends on
  , usedLocalIdentifiers :: Array Ident -- identifiers defined within the module
  , usedExternalIdentifiers :: Array (Qualified Ident) -- identifiers defined in an imported module
  }

We could then take the above type and easily configure the following:

  • via cfilter, determine which log events we actually output to the console (e.g. a specific identifier, all identifiers that use a specific identifier, all identifiers from a given module, all identifiers whose associated file path matches some source glob). I imagine these would be determined via CLI flags/args.
  • via different printers, how we print the information and format it before printing to the console. For example,
    • pretty print things via dodo-printer with ansi colors for clear readability
    • rather than printing an optimization step's before expression and after expression, we could show one expression that contains a diff by using purescript-debug.

Minimal example using `Node.FS.Async.stat` fails at runtime

I have a minimal example in a repo here: https://github.com/ptrfrncsmrph/purs-backend-es-bug

The following

main = do
  stat "." case _ of
    Right (Stats stats) -> do
      logShow $ runFn0 stats.isDirectory
    _ -> mempty

fails with

node:internal/fs/utils:416
  return this._checkModeProperty(S_IFDIR);
              ^

TypeError: Cannot read properties of undefined (reading '_checkModeProperty')
    at StatsBase.isDirectory (node:internal/fs/utils:416:15)

I don't know if this would be considered a bug with node-fs?

Runtime error in `Node.FS.Sync.stat` (and likely more in that module)

I was trying to use purescript-benchotron to compare some performance before & after optimization, and I ended up hitting a runtime error. Here's a minimal reproduction:

module Main where

import Prelude

import Effect (Effect)
import Effect.Console (log)
import Node.FS.Stats (isFile)
import Node.FS.Sync (stat)

main :: Effect Unit
main = do
  stats <- stat "spago.dhall"
  if isFile stats then
    log "File."
  else
    log "Not a file."

The error is:

  return s[m]();
             ^

TypeError: s[m] is not a function

Inspecting the generated code, I see

// output-es/Node.FS.Stats/foreign.js
function statsMethod(m, s) {
  return s[m]();
}

which looks fine - however, I also see this suspect line:

  const a$p = (v) => statSync("spago.dhall")();

Indeed node-fs does some unsafeCoerce to get things into effects: https://github.com/purescript-node/purescript-node-fs/blob/v8.1.0/src/Node/FS/Internal.purs#L9-L10

mkEffect :: forall a. (Unit -> a) -> Effect a
mkEffect = unsafeCoerce

I suspect that node-fs should be updated to not do that, but it made me wonder if something in the purs-backend-es printer was off. What do you think?

Data accessors don't add a module dependency

Code like this:

module Snapshot.Import.Open where

import Snapshot.Import.Impl

fst :: Product -> Int
fst (Product x _) = x

where Snapshot.Import.Impl defines a Product data type, does not add an import to the BackendModule imports.

@natefaubion said:

The issue is that data constructor accessors don’t incur a dependency on the module. It’s a bit of a gray area, but it makes sense to incur the dependency.

Optimize redundant if-else default branching

I'm not sure if this is just an artifact of the case optimization algorithm, or something that appears more generally.

(Screenshot of the generated code omitted.)

The case optimization algorithm always produces a complete decision tree, and since we no longer have backtracking semantics, I suspect that it may not be enough to have the case optimization itself merge these branches as it likely introduces a lot of intermediate bindings that rely on the inliner to clean up.

a - (b - c) != a - b - c

test :: Number -> Number -> Number -> Number
test a b c = a - (b - c)

is compiled to

const test = a => b => c => a - b - c;

instead of

const test = a => b => c => a - (b - c);

Use a proper AST for ES codegen

Currently it's just building Docs directly from the backend AST. Predictably, this results in a bunch of ad-hoc constructions (like ESStatement) during codegen. It would be better just to go ahead and use a proper AST.

spawn esbuild ENOENT

> pnpm exec purs-backend-es bundle-module --no-build

node:events:497
    throw er; // Unhandled 'error' event
    ^

Error: spawn esbuild ENOENT
  at ChildProcess._handle.onexit (node:internal/child_process:286:19)
  at onErrorNT (node:internal/child_process:484:16)
  at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
  at ChildProcess._handle.onexit (node:internal/child_process:292:12)
  at onErrorNT (node:internal/child_process:484:16)
  at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn esbuild',
path: 'esbuild',
spawnargs: [
  '--platform=browser',
  '--format=esm',
  '--outfile=index.js',
  '--bundle',
  'C:\\Users\\John\\Documents\\purescript-2\\output-es\\Main\\index.js'
]
}
> pnpm ls

dependencies:
esbuild 0.20.2
purescript 0.15.15
purs-backend-es 1.4.2
spago 0.21.0

The same happens with npm install and npm install --global.
pnpm exec spago build works and the output-es folder exists.

`lift1` bug from `purescript-formatters`

As minimal a reproduction as I could come up with:

import Prelude

import Control.Monad.Reader as R
import Control.Monad.State as S
import Control.Monad.Trans.Class (lift)
import Data.Either (Either)
import Parsing as P

-- This can be any `MonadTrans` instance (probably... `State` and `Except` here also reproduce at least).
-- The argument is required for reproduction.
exactLength :: forall a. a -> R.Reader Unit Unit
exactLength _ = lift (pure unit)

unformatCommandParser :: P.ParserT String (S.State Unit) Unit
unformatCommandParser = do
  _ ← pure exactLength
  lift (pure unit)

boom :: Either P.ParseError Unit
boom = S.evalState (P.runParserT "" unformatCommandParser) unit

Error:

TypeError: lift1(...) is not a function
    at file:///W:/.../Test.Main/index.js:13:122
    at go (file:///W:/.../Parsing/index.js:160:20)
    at file:///W:/.../Control.Monad.State.Trans/index.js:84:62
    at file:///W:/.../Control.Monad.Rec.Class/index.js:145:20
    at file:///W:/.../Control.Monad.State.Trans/index.js:88:12
    at file:///W:/.../Control.Monad.State.Trans/index.js:19:119
    at file:///W:/.../Test.Main/index.js:14:172
    at file:///W:/.../Test.Main/index.js:14:193
    at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
    at async Promise.all (index 0)

Generated source:

import * as $runtime from "../runtime.js";
import * as Control$dMonad$dRec$dClass from "../Control.Monad.Rec.Class/index.js";
import * as Control$dMonad$dState$dTrans from "../Control.Monad.State.Trans/index.js";
import * as Data$dIdentity from "../Data.Identity/index.js";
import * as Data$dUnit from "../Data.Unit/index.js";
import * as Parsing from "../Parsing/index.js";
const lift1 = /* #__PURE__ */ (() => {
  const map = Control$dMonad$dState$dTrans.bindStateT(Data$dIdentity.monadIdentity).Apply0().Functor0().map;
  return m => (state1, v, lift$p, v1, done) => lift$p(map(a => v2 => done(state1, a))(m));
})();
const pure1 = /* #__PURE__ */ (() => Control$dMonad$dState$dTrans.applicativeStateT(Data$dIdentity.monadIdentity).pure)();
const exactLength = v => v$1 => Data$dUnit.unit;
const unformatCommandParser = (state1, more, lift1, $$throw, done) => more(v1 => more(v2 => lift1(pure1(Data$dUnit.unit))(state1, more, lift1, $$throw, done)));
const boom = /* #__PURE__ */ (() => Parsing.runParserT(Control$dMonad$dState$dTrans.monadRecStateT(Control$dMonad$dRec$dClass.monadRecIdentity))("")(unformatCommandParser)(Data$dUnit.unit)._1)();
export {boom, exactLength, lift1, pure1, unformatCommandParser};

It may be something to do with parsing, since if you change the outer transformer of unformatCommandParser to StateT, ReaderT, ExceptT it doesn't error. There are no coercions or parsing-specific FFI in the relevant code in parsing though, so I'm blaming the optimizer now. 😉

Single quotation mark produced in codegen output

I took this for a spin today and there's a small issue - for some reason, a double-quote is being emitted in the output in the middle of otherwise-fine codegen:


Here it is copied and pasted from output-es:

        return Rito$dMesh.mesh$p({mesh: makeBasic.threeDI.mesh})(Rito$dGeometries$dBox.box((() => {
          const $16 = ConvertableOptions.convertRecordOptionsCons(ConvertableOptions.convertRecordOptionsNil)(Rito$dGeometries$dBox.convertOptionBoxOptions"b1)()()()({
            reflectSymbol: () => "box"
          });

(the spurious quotation mark is after convertOptionBoxOptions and before b1).

In output, this is:

                    })(Rito_Geometries_Box.box(Rito_Geometries_Box.initialBoxRecord(ConvertableOptions.convertOptionsWithDefaultsRecord(ConvertableOptions.convertOptionsRecord()(ConvertableOptions.convertRecordOptionsCons(ConvertableOptions.convertRecordOptionsNil)(Rito_Geometries_Box["convertOptionBoxOptions$34b1"])()()()({
                        reflectSymbol: function () {
                            return "box";
                        }

I'm pretty sure if I named these instances manually this'd go away. Hopefully this report is helpful!

Add inline annotation for data types

One of the more powerful optimizations is case-of-case, which internally translates to the shouldDistributeBranches heuristic and the RewriteDistBranchesLet rewrite constraint. The problem is that it can result in exponential code explosion, so the heuristic is very conservative, with a hard cap on continuation size. The heuristic is mainly driven by the ResultTerm analysis, which can verify that all branches yield a known literal term. If as part of the analysis we noted which data type it resulted in (the constructor's ProperName field), we could support inlining annotations that force this optimization to apply whenever possible. Thus we could keep the heuristic conservative, but add annotations for things like Generics, so that the Generics intermediate representation more reliably fuses away.
