Giter Club home page Giter Club logo

proposal-record-tuple's Introduction

JavaScript Records & Tuples Proposal

Authors:

  • Robin Ricard (Bloomberg)
  • Rick Button (Bloomberg)
  • Nicolò Ribaudo (Babel)

Champions:

  • Robin Ricard (Bloomberg)
  • Rick Button (Bloomberg)

Advisors

  • Philipp Dunkel (Bloomberg)
  • Dan Ehrenberg (Bloomberg)
  • Maxwell Heiber

Stage: 2

Content

  1. Overview
  2. Examples
  3. Syntax
  4. Equality
  5. The object model
  6. Record and Tuple standard library support
  7. Rationale

Overview

This proposal introduces two new deeply immutable data structures to JavaScript:

  • Record, a deeply immutable Object-like structure #{ x: 1, y: 2 }
  • Tuple, a deeply immutable Array-like structure #[1, 2, 3, 4]

Records and Tuples can only contain primitives and other Records and Tuples. You could think of Records and Tuples as "compound primitives". By being thoroughly based on primitives, not objects, Records and Tuples are deeply immutable.

Records and Tuples support comfortable idioms for construction, manipulation and use, similar to working with objects and Arrays. They are compared deeply by their contents, rather than by their identity.

JavaScript engines may perform certain optimizations on construction, manipulation and comparison of Records and Tuples, analogous to the way Strings are often implemented in JS engines. (It should be understood that these optimizations are not guaranteed.)

Records and Tuples aim to be usable and understood with external typesystem supersets such as TypeScript or Flow.

Prior work on immutable data structures in JavaScript

Today, userland libraries implement similar concepts, such as Immutable.js. Also a previous proposal has been attempted but abandoned because of the complexity of the proposal and lack of sufficient use cases.

This new proposal is still inspired by this previous proposal but introduces some significant changes: Record and Tuples are now deeply immutable. This property is fundamentally based on the observation that, in large projects, the risk of mixing immutable and mutable data structures grows as the amount of data being stored and passed around grows as well so you'll be more likely handling large record & tuple structures. This can introduce hard-to-find bugs.

As a built-in, deeply immutable data structure, this proposal also offers a few usability advantages compared to userland libraries:

  • Records and Tuples are easily introspectable in a debugger, while library provided immutable types are often hard to inspect as you have to inspect through data structure details.
  • Because they're accessed through typical object and array idioms, no additional branching is needed in order to write a generic library that consumes both immutable and JS objects; with user libraries, method calls may be needed just in the immutable case.
  • We avoid cases where developers may expensively convert between regular JS objects and immutable structures, by making it easier to just always use the immutable ones.

Immer is a notable approach to immutable data structures, and prescribes a pattern for manipulation through producers and reducers. It is not providing immutable data types however, as it generates frozen objects. This same pattern can be adapted to the structures defined in this proposal in addition to frozen objects.

Deep equality as defined in user libraries can vary significantly, in part due to possible references to mutable objects. By drawing a hard line about only deeply containing primitives, Records and Tuples, and recursing through the entire structure, this proposal defines simple, unified semantics for comparisons.

Examples

Record

const proposal = #{
  id: 1234,
  title: "Record & Tuple proposal",
  contents: `...`,
  // tuples are primitive types so you can put them in records:
  keywords: #["ecma", "tc39", "proposal", "record", "tuple"],
};

// Accessing keys like you would with objects!
console.log(proposal.title); // Record & Tuple proposal
console.log(proposal.keywords[1]); // tc39

// Spread like objects!
const proposal2 = #{
  ...proposal,
  title: "Stage 2: Record & Tuple",
};
console.log(proposal2.title); // Stage 2: Record & Tuple
console.log(proposal2.keywords[1]); // tc39

// Object functions work on Records:
console.log(Object.keys(proposal)); // ["contents", "id", "keywords", "title"]

Open in playground

Functions can handle Records and Objects in generally the same way:

const ship1 = #{ x: 1, y: 2 };
// ship2 is an ordinary object:
const ship2 = { x: -1, y: 3 };

function move(start, deltaX, deltaY) {
  // we always return a record after moving
  return #{
    x: start.x + deltaX,
    y: start.y + deltaY,
  };
}

const ship1Moved = move(ship1, 1, 0);
// passing an ordinary object to move() still works:
const ship2Moved = move(ship2, 3, -1);

console.log(ship1Moved === ship2Moved); // true
// ship1 and ship2 have the same coordinates after moving

Open in playground

See more examples here.

Tuple

const measures = #[42, 12, 67, "measure error: foo happened"];

// Accessing indices like you would with arrays!
console.log(measures[0]); // 42
console.log(measures[3]); // measure error: foo happened

// Slice and spread like arrays!
const correctedMeasures = #[
  ...measures.slice(0, measures.length - 1),
  -1
];
console.log(correctedMeasures[0]); // 42
console.log(correctedMeasures[3]); // -1

// or use the .with() shorthand for the same result:
const correctedMeasures2 = measures.with(3, -1);
console.log(correctedMeasures2[0]); // 42
console.log(correctedMeasures2[3]); // -1

// Tuples support methods similar to Arrays
console.log(correctedMeasures2.map(x => x + 1)); // #[43, 13, 68, 0]

Open in playground

Similarly than with records, we can treat tuples as array-like:

const ship1 = #[1, 2];
// ship2 is an array:
const ship2 = [-1, 3];

function move(start, deltaX, deltaY) {
  // we always return a tuple after moving
  return #[
    start[0] + deltaX,
    start[1] + deltaY,
  ];
}

const ship1Moved = move(ship1, 1, 0);
// passing an array to move() still works:
const ship2Moved = move(ship2, 3, -1);

console.log(ship1Moved === ship2Moved); // true
// ship1 and ship2 have the same coordinates after moving

Open in playground

See more examples here.

Forbidden cases

As stated before Record & Tuple are deeply immutable: attempting to insert an object in them will result in a TypeError:

const instance = new MyClass();
const constContainer = #{
    instance: instance
};
// TypeError: Record literals may only contain primitives, Records and Tuples

const tuple = #[1, 2, 3];

tuple.map(x => new MyClass(x));
// TypeError: Callback to Tuple.prototype.map may only return primitives, Records or Tuples

// The following should work:
Array.from(tuple).map(x => new MyClass(x))

Syntax

This defines the new pieces of syntax being added to the language with this proposal.

We define a record or tuple expression by using the # modifier in front of otherwise normal object or array expressions.

Examples

#{}
#{ a: 1, b: 2 }
#{ a: 1, b: #[2, 3, #{ c: 4 }] }
#[]
#[1, 2]
#[1, 2, #{ a: 3 }]

Syntax errors

Holes are prevented in syntax, unlike Arrays, which allow holes. See issue #84 for more discussion.

const x = #[,]; // SyntaxError, holes are disallowed by syntax

Using the __proto__ identifier as a property is prevented in syntax. See issue #46 for more discussion.

const x = #{ __proto__: foo }; // SyntaxError, __proto__ identifier prevented by syntax

const y = #{ ["__proto__"]: foo }; // valid, creates a record with a "__proto__" property.

Concise methods are disallowed in Record syntax.

#{ method() { } }  // SyntaxError

Runtime errors

Records may only have String keys, not Symbol keys, due to the issues described in #15. Creating a Record with a Symbol key is a TypeError.

const record = #{ [Symbol()]: #{} };
// TypeError: Record may only have string as keys

Records and Tuples may only contain primitives and other Records and Tuples. Attempting to create a Record or Tuple that contains an Object (null is not an object) or a Function throws a TypeError.

const obj = {};
const record = #{ prop: obj }; // TypeError: Record may only contain primitive values

Equality

Equality of Records and Tuples works like that of other JS primitive types like Boolean and String values, comparing by contents, not identity:

assert(#{ a: 1 } === #{ a: 1 });
assert(#[1, 2] === #[1, 2]);

This is distinct from how equality works for JS objects: comparison of objects will observe that each object is distinct:

assert({ a: 1 } !== { a: 1 });
assert(Object(#{ a: 1 }) !== Object(#{ a: 1 }));
assert(Object(#[1, 2]) !== Object(#[1, 2]));

Insertion order of record keys does not affect equality of records, because there's no way to observe the original ordering of the keys, as they're implicitly sorted:

assert(#{ a: 1, b: 2 } === #{ b: 2, a: 1 });

Object.keys(#{ a: 1, b: 2 })  // ["a", "b"]
Object.keys(#{ b: 2, a: 1 })  // ["a", "b"]

If their structure and contents are deeply identical, then Record and Tuple values considered equal according to all of the equality operations: Object.is, ==, ===, and the internal SameValueZero algorithm (used for comparing keys of Maps and Sets). They differ in terms of how -0 is treated:

  • Object.is treats -0 and 0 as unequal
  • ==, === and SameValueZero treat -0 with 0 as equal

Note that == and === are more direct about other kinds of values nested in Records and Tuples--returning true if and only if the contents are identical (with the exception of 0/-0). This directness has implications for NaN as well as comparisons across types. See examples below.

See further discussion in #65.

assert(#{ a:  1 } === #{ a: 1 });
assert(#[1] === #[1]);

assert(#{ a: -0 } === #{ a: +0 });
assert(#[-0] === #[+0]);
assert(#{ a: NaN } === #{ a: NaN });
assert(#[NaN] === #[NaN]);

assert(#{ a: -0 } == #{ a: +0 });
assert(#[-0] == #[+0]);
assert(#{ a: NaN } == #{ a: NaN });
assert(#[NaN] == #[NaN]);
assert(#[1] != #["1"]);

assert(!Object.is(#{ a: -0 }, #{ a: +0 }));
assert(!Object.is(#[-0], #[+0]));
assert(Object.is(#{ a: NaN }, #{ a: NaN }));
assert(Object.is(#[NaN], #[NaN]));

// Map keys are compared with the SameValueZero algorithm
assert(new Map().set(#{ a: 1 }, true).get(#{ a: 1 }));
assert(new Map().set(#[1], true).get(#[1]));
assert(new Map().set(#[-0], true).get(#[0]));

The object model of Record and Tuple

In general, you can treat Records like objects. For example, the Object namespace and the in operator work with Records.

const keysArr = Object.keys(#{ a: 1, b: 2 }); // returns the array ["a", "b"]
assert(keysArr[0] === "a");
assert(keysArr[1] === "b");
assert(keysArr !== #["a", "b"]);
assert("a" in #{ a: 1, b: 2 });

Advanced internal details: Record and Tuple wrapper objects

JS developers will typically not have to think about Record and Tuple wrapper objects, but they're a key part of how Records and Tuples work "under the hood" in the JavaScript specification.

Accessing of a Record or Tuple via . or [] follows the typical GetValue semantics, which implicitly converts to an instance of the corresponding wrapper type. You can also do the conversion explicitly through Object():

  • Object(record) creates a Record wrapper object
  • Object(tuple) creates a Tuple wrapper object

(One could imagine that new Record or new Tuple could create these wrappers, like new Number and new String do, but Records and Tuples follow the newer convention set by Symbol and BigInt, making these cases throw, as it's not the path we want to encourage programmers to take.)

Record and Tuple wrapper objects have all of their own properties with the attributes writable: false, enumerable: true, configurable: false. The wrapper object is not extensible. All put together, they behave as frozen objects. This is different from existing wrapper objects in JavaScript, but is necessary to give the kinds of errors you'd expect from ordinary manipulations on Records and Tuples.

An instance of Record has the same keys and values as the underlying record value. The __proto__ of each of these Record wrapper objects is null (discussion: #71).

An instance of Tuple has keys that are integers corresponding to each index in the underlying tuple value. The value for each of these keys is the corresponding value in the original tuple. In addition, there is a non-enumerable length key. Overall, these properties match those of the String wrapper object. That is, Object.getOwnPropertyDescriptors(Object(#["a", "b"])) and Object.getOwnPropertyDescriptors(Object("ab")) each return an object that looks like this:

{
  "0": {
    "value": "a",
    "writable": false,
    "enumerable": true,
    "configurable": false
  },
  "1": {
    "value": "b",
    "writable": false,
    "enumerable": true,
    "configurable": false
  },
  "length": {
    "value": 2,
    "writable": false,
    "enumerable": false,
    "configurable": false
  }
}

The __proto__ of Tuple wrapper objects is Tuple.prototype. Note that, if you're working across different JavaScript global objects ("Realms"), the Tuple.prototype is selected based on the current Realm when the Object conversion is performed, similarly to how the .prototype of other primitives behaves -- it's not attached to the Tuple value itself. Tuple.prototype has various methods on it, analogous to Arrays.

For integrity, out-of-bounds numerical indexing on Tuples returns undefined, rather than forwarding up through the prototype chain, as with TypedArrays. Lookup of non-numerical property keys forwards up to Tuple.prototype, which is important to find their Array-like methods.

Record and Tuple standard library support

Tuple values have functionality broadly analogous to Array. Similarly, Record values are supported by different Object static methods.

assert.deepEqual(Object.keys(#{ a: 1, b: 2 }), ["a", "b"]);
assert(#[1, 2, 3].map(x => x * 2), #[2, 4, 6]);

See the appendix to learn more about the Record & Tuple namespaces.

Converting from Objects and Arrays

You can convert structures using Record(), Tuple() (with the spread operator), Record.fromEntries() or Tuple.from():

const record = Record({ a: 1, b: 2, c: 3 });
const record2 = Record.fromEntries([["a", 1], #["b", 2], { 0: 'c', 1: 3 }]); // note that any iterable of entries will work
const tuple = Tuple(...[1, 2, 3]);
const tuple2 = Tuple.from([1, 2, 3]); // note that an iterable will also work

assert(record === #{ a: 1, b: 2, c: 3 });
assert(tuple === #[1, 2, 3]);
Record({ a: {} }); // TypeError: Can't convert Object with a non-const value to Record
Tuple.from([{}, {} , {}]); // TypeError: Can't convert Iterable with a non-const value to Tuple

Note that Record(), Tuple(), Record.fromEntries() and Tuple.from() expect collections consisting of Records, Tuples or other primitives (such as Numbers, Strings, etc). Nested object references would cause a TypeError. It's up to the caller to convert inner structures in whatever way is appropriate for the application.

Note: The current draft proposal does not contain recursive conversion routines, only shallow ones. See discussion in #122

Iteration protocol

Like Arrays, Tuples are iterable.

const tuple = #[1, 2];

// output is:
// 1
// 2
for (const o of tuple) { console.log(o); }

Similarly to Objects, Records are only iterable in conjunction with APIs like Object.entries.

const record = #{ a: 1, b: 2 };

// TypeError: record is not iterable
for (const o of record) { console.log(o); }

// Object.entries can be used to iterate over Records, just like for Objects
// output is:
// a
// b
for (const [key, value] of Object.entries(record)) { console.log(key) }

JSON.stringify

  • The behavior of JSON.stringify(record) is equivalent to calling JSON.stringify on the object resulting from recursively converting the record to an object that contains no records or tuples.
  • The behavior of JSON.stringify(tuple) is equivalent to calling JSON.stringify on the array resulting from recursively converting the tuple to an array that contains no records or tuples.
JSON.stringify(#{ a: #[1, 2, 3] }); // '{"a":[1,2,3]}'
JSON.stringify(#[true, #{ a: #[1, 2, 3] }]); // '[true,{"a":[1,2,3]}]'

JSON.parseImmutable

Please see https://github.com/tc39/proposal-json-parseimmutable

Tuple.prototype

Tuple supports instance methods similar to Array with a few changes:

  • The mechanics of Tuple and Array methods are a bit different; Array methods generally depend on being able to incrementally modify the Array, and are built for subclassing, neither of which would apply for Tuples.
  • Operations which mutate the Array are not supported. For example, there is no Tuple.prototype.push method.
  • Tuples include the methods introduced by the Change Array by copy proposal, such as Tuple.prototype.withAt.

The appendix contains a full description of Tuple's prototype.

typeof

typeof identifies Records and Tuples as distinct types:

assert(typeof #{ a: 1 } === "record");
assert(typeof #[1, 2]   === "tuple");

Usage in {Map|Set|WeakMap|WeakSet}

It is possible to use a Record or Tuple as a key in a Map, and as a value in a Set. When using a Record or Tuple here, they are compared by value.

It is not possible to use a Record or Tuple as a key in a WeakMap or as a value in a WeakSet, because Records and Tuples are not Objects, and their lifetime is not observable.

Examples

Map

const record1 = #{ a: 1, b: 2 };
const record2 = #{ a: 1, b: 2 };

const map = new Map();
map.set(record1, true);
assert(map.get(record2));

Set

const record1 = #{ a: 1, b: 2 };
const record2 = #{ a: 1, b: 2 };

const set = new Set();
set.add(record1);
set.add(record2);
assert(set.size === 1);

WeakMap and WeakSet

const record = #{ a: 1, b: 2 };
const weakMap = new WeakMap();

// TypeError: Can't use a Record as the key in a WeakMap
weakMap.set(record, true);

WeakSet

const record = #{ a: 1, b: 2 };
const weakSet = new WeakSet();

// TypeError: Can't add a Record to a WeakSet
weakSet.add(record);

Rationale

Why introduce new primitive types? Why not just use objects in an immutable data structure library?

One core benefit of the Records and Tuples proposal is that they are compared by their contents, not their identity. At the same time, === in JavaScript on objects has very clear, consistent semantics: to compare the objects by identity. Making Records and Tuples primitives enables comparison based on their values.

At a high level, the object/primitive distinction helps form a hard line between the deeply immutable, context-free, identity-free world and the world of mutable objects above it. This category split makes the design and mental model clearer.

An alternative to implementing Record and Tuple as primitives would be to use operator overloading to achieve a similar result, by implementing an overloaded abstract equality (==) operator that deeply compares objects. While this is possible, it doesn't satisfy the full use case, because operator overloading doesn't provide an override for the === operator. We want the strict equality (===) operator to be a reliable check of "identity" for objects and "observable value" (modulo -0/+0/NaN) for primitive types.

Another option is to perform what is called interning: we track globally Record or Tuple objects and if we attempt to create a new one that happens to be identical to an existing Record object, we now reference this existing Record instead of creating a new one. This is essentially what the polyfill does. We're now equating value and identity. This approach creates problems once we extend that behavior across multiple JavaScript contexts and wouldn't give deep immutability by nature and it is particularly slow which would make using Record & Tuple a performance-negative choice.

Will developers be familiar with this new concept?

Record & Tuple is built to interoperate with objects and arrays well: you can read them exactly the same way as you would do with objects and arrays. The main change lies in the deep immutability and the comparison by value instead of identity.

Developers used to manipulating objects in an immutable manner (such as transforming pieces of Redux state) will be able to continue to do the same manipulations they used to do on objects and arrays, this time, with more guarantees.

We are going to do empirical research through interviews and surveys to figure out if this is working out as we think it does.

Why are Record & Tuple not based on .get()/.set() methods like Immutable.js?

If we want to keep access to Record & Tuple similar to Objects and Arrays as described in the previous section, we can't rely on methods to perform that access. Doing so would require us to branch code when trying to create a "generic" function able to take Objects/Arrays/Records/Tuples.

Here is an example function that has support for Immutable.js Records and ordinary objects:

const ProfileRecord = Immutable.Record({
    name: "Anonymous",
    githubHandle: null,
});

const profileObject = {
    name: "Rick Button",
    githubHandle: "rickbutton",
};
const profileRecord = ProfileRecord({
    name: "Robin Ricard",
    githubHandle: "rricard",
});

function getGithubUrl(profile) {
    if (Immutable.Record.isRecord(profile)) {
        return `https://github.com/${
            profile.get("githubHandle")
        }`;
    }
    return `https://github.com/${
        profile.githubHandle
    }`;
}

console.log(getGithubUrl(profileObject)) // https://github.com/rickbutton
console.log(getGithubUrl(profileRecord)) // https://github.com/rricard

This is error-prone as both branches could easily get out of sync over time...

Here is how we would write that function taking Records from this proposal and ordinary objects:

const profileObject = {
  name: "Rick Button",
  githubHandle: "rickbutton",
};
const profileRecord = #{
  name: "Robin Ricard",
  githubHandle: "rricard",
};

function getGithubUrl(profile) {
  return `https://github.com/${
    profile.githubHandle
  }`;
}

console.log(getGithubUrl(profileObject)) // https://github.com/rickbutton
console.log(getGithubUrl(profileRecord)) // https://github.com/rricard

This function support both Objects and Records in a single code-path as well as not forcing the consumer to choose which data structures to use.

Why do we need to support both at the same time anyway? This is primarily to avoid an ecosystem split. Let's say we're using Immutable.js to do our state management but we need to feed our state to a few external libraries that don't support it:

state.jobResult = Immutable.fromJS(
    ExternalLib.processJob(
        state.jobDescription.toJS()
    )
);

Both toJS() and fromJS() can end up being very expensive operations depending on the size of the substructures. An ecosystem split means conversions that, in turn, means possible performance issues.

Why introduce new syntax? Why not just introduce the Record and Tuple globals?

The proposed syntax significantly improves the ergonomics of using Record and Tuple in code. For example:

// with the proposed syntax
const record = #{
  a: #{
    foo: "string",
  },
  b: #{
    bar: 123,
  },
  c: #{
    baz: #{
      hello: #[
        1,
        2,
        3,
      ],
    },
  },
};

// with only the Record/Tuple globals
const record = Record({
  a: Record({
    foo: "string",
  }),
  b: Record({
    bar: 123,
  }),
  c: Record({
    baz: Record({
      hello: Tuple(
        1,
        2,
        3,
      ),
    }),
  }),
});

The proposed syntax is intended to be simpler and easier to understand, because it is intentionally similar to syntax for object and array literals. This takes advantage of the user's existing familiarity with objects and arrays. Additionally, the second example introduces additional temporary object literals, which adds to complexity of the expression.

Why specifically the #{}/#[] syntax? What about an existing or new keyword?

Using a keyword as a prefix to the standard object/array literal syntax presents issues around backwards compatibility. Additionally, re-using existing keywords can introduce ambiguity.

ECMAScript defines a set of reserved keywords that can be used for future extensions to the language. Defining a new keyword that is not already reserved is theoretically possible, but requires significant effort to validate that the new keyword will not likely break backwards compatibility.

Using a reserved keyword makes this process easier, but it is not a perfect solution because there are no reserved keywords that match the "intent" of the feature, other than const. The const keyword is also tricky, because it describes a similar concept (variable reference immutability) while this proposal intends to add new immutable data structures. While immutability is the common thread between these two features, there has been significant community feedback that indicates that using const in both contexts is undesirable.

Instead of using a keyword, {| |} and [||] have been suggested as possible alternatives. Currently, the champion group is leaning towards #[]/#{}, but discussion is ongoing in #10.

Why deep immutability?

The definition of Record & Tuple as compound primitives forces everything in Record & Tuple to not be objects. This comes with some drawbacks (referencing objects becomes harder but is still possible) but also more guarantees to avoid common programming mistakes.

const object = {
   a: {
       foo: "bar",
   },
};
Object.freeze(object);
func(object);

// func is able to mutate object’s keys even if object is frozen

In the above example, we try to create a guarantee of immutability with Object.freeze. Unfortunately, since we did not freeze the object deeply, nothing tells us that object.a has been untouched. With Record & Tuple that constraint is by nature and there is no doubt that the structure is untouched:

const record = #{
   a: #{
       foo: "bar",
   },
};
func(record);
// runtime guarantees that record is entirely unchanged
assert(record.a.foo === "bar");

Finally, deep immutability suppresses the need for a common pattern which consists of deep-cloning objects to keep guarantees:

const clonedObject = JSON.parse(JSON.stringify(object));
func(clonedObject);
// now func can have side effects on clonedObject, object is untouched
// but at what cost?
assert(object.a.foo === "bar");

FAQ

How can I make a Record or Tuple which is based on an existing one, but with one part changed or added?

In general, the spread operator works well for this:

// Add a Record field
let rec = #{ a: 1, x: 5 }
#{ ...rec, b: 2 }  // #{ a: 1, b: 2, x: 5 }

// Change a Record field
#{ ...rec, x: 6 }  // #{ a: 1, x: 6 }

// Append to a Tuple
let tup = #[1, 2, 3];
#[...tup, 4]  // #[1, 2, 3, 4]

// Prepend to a Tuple
#[0, ...tup]  // #[0, 1, 2, 3]

// Prepend and append to a Tuple
#[0, ...tup, 4]  // #[0, 1, 2, 3, 4]

And if you're changing something in a Tuple, the Tuple.prototype.with method works:

// Change a Tuple index
let tup = #[1, 2, 3];
tup.with(1, 500)  // #[1, 500, 3]

Some manipulations of "deep paths" can be a bit awkward. For that, the Deep Path Properties for Records proposal adds additional shorthand syntax to Record literals.

We are developing the deep path properties proposal as a separate follow-on proposal because we don't see it as core to using Records, which work well independently. It's the kind of syntactic addition which would work well to prototype over time in transpilers, and where we have many decision points which don't have to do with Records and Tuples (e.g., how it works with objects).

How does this relate to the Readonly Collections proposal?

We've talked with the Readonly Collections champions, and both groups agree that these are complements:

  • Readonly collections are shallowly immutable and may point to objects; they may be mutated during construction, and read-only views of mutating objects are supported.
  • Records and Tuples are deeply immutable and consist only of primitives.

Neither one is a subset of the other in terms of functionality. At best, they are parallel, just like each proposal is parallel to other collection types in the language.

So, the two champion groups have resolved to ensure that the proposals are in parallel with respect to each other. For example, this proposal adds a new Tuple.prototype.withReversed method. The idea would be to check, during the design process, if this signature would also make sense for read-only Arrays (if those exist): we extracted these new methods to the Change Array by copy proposal, so that we can discuss an API which builds a consistent, shared mental model.

In the current proposal drafts, there aren't any overlapping types for the same kind of data, but both proposals could grow in these directions in the future, and we're trying to think these things through ahead of time. Who knows, some day TC39 could decide to add primitive RecordMap and RecordSet types, as the deeply immutable versions of Set and Map! And these would be in parallel with Readonly Collections types.

Could we have classes whose instances are Records?

TC39 has been long discussing "value types", which would be some kind of class declaration for a primitive type, for several years, on and off. An earlier version of this proposal even made an attempt. This proposal tries to start off simple and minimal, providing just the core structures. The hope is that it could provide the data model for a future proposal for classes.

This proposal is loosely related to a broader set of proposals, including operator overloading and extended numeric literals: These all conspire to provide a way for user-defined types to do the same as BigInt. However, the idea is to add these features if we determine they're independently motivated.

If we had user-defined primitive/value types, then it could make sense to use them in built-in features, such as CSS Typed OM or the Temporal Proposal. However, this is far in the future, if it ever happens; for now, it works well to use objects for these sorts of features.

What's the relationship between this proposal's Record & Tuple and TypeScript's Record & Tuple?

Although both kinds of Records relate to Objects, and both kinds of Tuples relate to Arrays, that's about where the similarity ends.

Records in TypeScript are a generic utility type to represent an object taking a key type matching with a value type. They still represent objects.

Likewise, Tuples in TypeScript are a notation to express types in an array of a limited size (starting with TypeScript 4.0 they have a variadic form). Tuples in TypeScript are a way to express arrays with heterogeneous types. ECMAScript tuples can correspond to TS arrays or TS tuples easily as they can either contain an indefinite number of values of the same type or contain a limited number of values with different types.

TS Records or Tuples are orthogonal features to ECMAScript Records and Tuples and both could be expressed at the same time:

const record: Readonly<Record<string, number>> = #{
  foo: 1,
  bar: 2,
};
const tuple: readonly [number, string] = #[1, "foo"];

What are the performance expectations of these data structures?

This proposal does not make any performance guarantees and does not require specific optimizations in implementations. Based on feedback from implementers, it is expected that they will implement common operations via "linear time" algorithms. However, this proposal does not prevent some classical optimizations for purely functional data structures, including but not limited to:

  • Optimizations for making deep equality checks fast:
    • For returning true quickly, intern ("hash-cons") some data structures
    • For returning false quickly, maintain a hash of the tree of the contents of some structures
  • Optimizations for manipulating data structures
    • In some cases, reuse existing data structures (e.g., when manipulated with object spread), similar to ropes or typical implementations of functional data structures
    • In other cases, as determined by the engine, use a flat representation like existing JavaScript object implementations

These optimizations are analogous to the way that modern JavaScript engines handle string concatenation, with various different internal types of strings. The validity of these optimizations rests on the unobservability of the identity of records and tuples. It's not expected that all engines will act identically with respect to these optimizations, but rather, they will each make decisions about which particular heuristics to use. Before Stage 4 of this proposal, we plan to publish a guide for best practices for cross-engine optimizable use of Records and Tuples, based on the implementation experience that we will have at that point.

Glossary

Record

A new, deeply immutable, compound primitive type data structure, proposed in this document, that is analogous to Object. #{ a: 1, b: 2 }

Tuple

A new, deeply immutable, compound primitive type data structure, proposed in this document, that is analogous to Array. #[1, 2, 3, 4]

Compound primitive types

Values which act like other JavaScript primitives, but are composed of other constituent values. This document proposes the first two compound primitive types: Record and Tuple.

Simple primitive types

String, Number, Boolean, undefined, null, Symbol and BigInt

Primitive types

Things which are either compound or simple primitive types. All primitives in JavaScript share certain properties:

  • They are deeply immutable
  • Comparison is by value, not by identity
  • They are not objects in the object model--object operations lead to exceptions or implicit wrapper creation.

Immutable Data Structure

A data structure that doesn't accept operations that change it internally, but instead has operations that return a new value that is the result of applying that operation to it.

In this proposal Record and Tuple are deeply immutable data structures.

Strict Equality

The operator === is defined with the Strict Equality Comparison algorithm. Strict Equality refers to this particular notion of equality.

Structural Sharing

Structural sharing is a technique used to limit the memory footprint of immutable data structures. In a nutshell, when applying an operation to derive a new version of an immutable structure, structural sharing will attempt to keep most of the internal structure intact and used by both the old and derived versions of that structure. This greatly limits the amount to copy to derive the new structure.

proposal-record-tuple's People

Contributors

acutmore avatar arai-a avatar ascoders avatar bakkot avatar bathos avatar catamorphism avatar dead-claudia avatar dependabot[bot] avatar elektronik2k5 avatar googol avatar icecream17 avatar kidonng avatar lilyruth avatar ljharb avatar lostfictions avatar mheiber avatar michaelficarra avatar midspike avatar mkeoliya avatar nicolo-ribaudo avatar nikeee avatar nkitku avatar rickbutton avatar robpalme avatar rricard avatar styfle avatar tehshrike avatar tiffon avatar tikotzky avatar unadlib avatar

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

proposal-record-tuple's Issues

Equality semantics for `-0` and `NaN`

What should each of the following evaluate to?

#[+0] == #[-0];

#[+0] === #[-0];

Object.is(#[+0], #[-0]);

#[NaN] == #[NaN];

#[NaN] === #[NaN];

Object.is(#[NaN], #[NaN]);

(For context, this is non-obvious because +0 === -0 is true, Object.is(+0, -0) is false, NaN === NaN is false, and Object.is(NaN, NaN) is true.)

Personally I lean towards the -0 cases all being false and the NaN cases all being true, so that the unusual equality semantics of -0 and NaN do not propagate to the new kinds of objects being introduced by this proposal.

Create immutable values from object reference

How can a developer achieve that?
If i fetch data from an endpoint, read data from a file, or use a third party lib that returns mutable data, I would want to create an immutable value from the data.
Also since this const obj = {b: 1}; immutableObj with .a = obj; throws an error, how can I change obj to an immutable value, to merge it with immutableObj?

Holes in Tuples

I'm wondering if Holes, in the sense of #[ , , , ] would be allowed in Tuples or forbidden, and if a Hole is allowed, does loading from a hole always evaluate to undefined, or does it result in loading elements from higher up in the prototype?

Consider looking at Java's parallel effort

There has been an ongoing effort to introduce value types to Java since at least 2012. Obviously they have very different constraints, but I expect it's worth reading about their effort anyway.

I believe the current proposal is here, and some links to history here, although note that these focus more on implementation than on how it would actually be used by programmers. There's a bunch of other docs spread out in random places too, like this one from 2014 which covers a lot of details as they were at the time.

This isn't really an actionable issue; I just wanted to point it out as a potential source of inspiration (at least for questions to ask about the design). Feel free to close this once you've read it.

Performance expectations (implementers and JS developers)

When I've talked with JavaScript developers about this feature, they mention expectations of cheap sharing with workers, cheap structural sharing (e.g., with #[...a, ...b] not copying everything), and cheap equality. These are key parts of the motivation for the proposal. Something to investigate early is, can these be implemented realistically in JS engines? We should document the state of our understanding in the README, since it's so core to the proposal's use in reality, even if it doesn't affect specification text.

Regularize the terminology for records, tuples and primitives

  • [ x ] My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • [ x ] My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • [ ? ] My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

Right now, the proposal uses words like "const", "immutable" and "value types", without really defining them. These are sort of historical artifacts of previous drafts based around other words. One important thing is that deeply frozen objects don't have this property, whatever we call it.

I want to suggest that we try to minimize how much new jargon we coin, and instead talk about "records", "tuples" and "primitives". E.g., "Records can only contain records, tuples and primitives". "Records and tuples do not have identity, and instead have equality defined by deep comparison of their contents" etc. Let's avoid using other words, as they are likely to add confusion.

If we do want to introduce a term, let's choose one of the three (probably "value types"), and define it explicitly before using it.

Should Record wrappers have a null prototype?

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

I have concerns about value types using prototypes. Object.prototype is soo widely used that mutating and adding properties to it is virtually impossible without Symbols I feel like the same would become true for Record quickly. Likewise, adding methods to Array is very fraught with compatibility problems and have similar concerns for Tuple. Do we have reasons that we want property delegation to occur on the value itself vs something like using static methods instead? Is the usage of mutable prototypes intended to be for consistency or some other reason?

Could prototypes be added later if desired? It seems mutable prototypes goes against some of the intentions of immutable structures as properties on the prototype could be added or removed at runtime thereby affecting things like record.isSafe from const record = {| foo: 1 |} by the prototype delegation.

Naming

Hi guys,

Nice feature, I can't wait to be able to work with native immutable data structures in the future. I'm just not sure about the terminology:

I was mostly unfamiliar with the proper meaning of records and tuples before now, so I did a little research. What I found is that they represent simpler and more generic ideas than what this proposal introduced. Also, some programming languages use these terminologies but differently than how this proposal intends to implement them.

I'd propose just to stick with "immutable objects" and "immutable arrays".

Imagine your average JS dev. Already familiar with objects and arrays, probably already familiar with immutability thanks to React. It is easier for them to grasp the meaning of an "immutable array" than a "touple", even if they are the same thing.

I don't really see a drawback of simply prefixing already known features with "immutable" to name these features. It's boring. It's a bit longer. On the other hand, we get a simple, easy-to-understand naming which then can be applied to other data structures in the future (for example immutable maps or immutable dates)

Set-builder inspired notation for updates `#{ x | property = n }`

Some discussion has been brought up about whether with is appropriate. So here's a (sub)proposal inspired by Mint and Set builder notation.

Note this issue has been edited to keep things relevant.

Summary

The following syntax is proposed as a variation on x with .prop = n:

  • #{ x | prop = n, sub.prop = m }

The pipe could alternatively be replaced with "with", which in this case would be further contextualised by the #{ } brackets.

Inspiration

Mint uses, or rather, will use a set-builder inspired notation for record updates. This set builder notation is quite well known throughout the world of computer science, and while not all JavaScript developers have compsci backgrounds, there is a lot of inertia there.

This is a proposed example of a record update in Mint:

user =
  {
    name = "Stuart",
    address = {
      location = "Freedonia"
    }
  }

{ user |
  name = "Bob",  
  address.location = "Super Silly Fun Land"
}

(as taken from the Mint documentation, 18th Aug 2019)

Proposed use in JavaScript

Here, I would like to consider the above adapted for this JavaScript proposal, as follows:

let user = #{
  name: 'Stuart',
  address: {
    location: 'Sylvania',
  }
};

let newerUser = #{ user | name = 'Rob', address.location = 'Rather Serious Land' };

Thus: #{ RECORD | ASSIGNMENT_1, ..., ASSIGNMENT_N }.

In the ASSIGNMENTS section of the notation, direct properties of RECORD are now treated as variables which can be updated with any assignment operator (e.g. =, ++, +=, etc). In addition, you can update nested properties too.

This requires the reuse of one reserved symbol, |, normally the bitwise-or orperator, and introduces a new context for the interpreter/compiler.

The use of | here appears unambiguous, as I'm not sure how a bitwise-or could even be used in this context; to my knowledge, you'd always have a SyntaxError attempting to write something like this.

However, the use of with is also unambiguous here too (then again, with is fairly unambiguous in the original proposal, but at least in this case you don't need to do with .name = 'Bob' where a leading . is introduced). This should perhaps be separated out into a separate issue, perhaps at a later stage.

Comparisons

Compared to the with syntax already proposed:

let newerUser = user with .name = 'Bob', .address.location = 'Rather Serious Land';

Compared to possible use of + and - for updates (based on an idea shared by @ljharb in #41, see here):

let newerUser = user + #{ name: 'Bob', address: { location: 'Rather Serious Land' } };

Note that here, any additional properties of address are not overwritten, only location is updated.

Pros and cons

Pros

  • Resembles set-builder notation more

Cons

  • javascript record with is perhaps easier to search for than javascript record long-sticky-thing
  • | is still ambiguous in search engine results, just like with
  • still perhaps should be split off into another proposal that could include { ... | ..... } too.
  • Doesn't look very JS-y.

"Record" name clashes with spec-internal "Record Specification Type"

Whenever we refer to "Record" inside of a spec document, we start referring to 6.2.1 The List and Record Specification Types which is a spec-internal notion.

This is going to be a big issue that will hamper our ability to write accurate spec text.

This yields multiple questions:

  • (asking spec editors) Could we be able to not match "Record" in spec text with the spec-internal Record Specification Type?
  • (asking spec editors) Is it possible to change the spec-internal Record term to something similar?
  • Do we want to consider other terms instead of Record for this proposal?

As far as alternate names goes for this proposal or for renaming the spec-internal Record Specification Type, the only one we found so far is: Struct

Finally as champions, we have a preference with avoiding to rename the proposal's Record to something else.

This issue needs to be closed before stage 3

Polyfill or demo implementation

Is there currently no demo implementation of the proposal? I've looked through this repo without finding it. It would be useful to at least have the Record and Tuple functions if not the syntax. My particular interest is making it easier to compare to other value type implementations.

Related: #69

Interactions with Symbols

Assuming we want const values to allow efficient cloning across process boundaries, we need to define serialization. How will Symbols be handled in this case?

Frozen prototypes and polyfilling ConstArray methods

Array.prototype sometimes gets new methods over time, so ConstArray.prototype may get additional methods. Polyfills fill the gap by adding these methods to existing prototypes. For this reason, I'm leaning towards @const classes not actually having frozen prototypes.

Tips for possible implementors

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

Just wanted to drop in so it wouldn't get lost down the road: I recently discovered a research paper from 2015 + associated thesis + associated library, still actively maintained for a way of implementing immutable hash tables faster and much more memory-efficiently than HAMTs. This particular version targets the JVM, but based on its design, I would expect it to be easily adapted to native. Note that although it does boast massive perf wins with iteration, this would not likely directly translate to a similarly fast for ... of unless there's some iteration order guarantee in it I'm missing that would make sense to expose to the user.

Nothing actionable here, just wanted to make it known and provide a place for people to be pointed to.

Integration with type systems

This proposal sounds like it'll be pretty great for type systems like TypeScript, since it's analogous to ordinary objects. Developers may also want to declare an argument type @const, which I think should also work well. We should bring type system maintainers in the loop early and see how this works out, to verify these assumptions.

Should `with` even be a part of this proposal?

Continuing a thread from #1 – is there a compelling reason to include with in this proposal?

I don't believe this proposal needs the with syntax to be viable. I also doubt that the proposed with syntax needs to be specific to immutable data structures – someObject with .b=5 would be handy as sugar for Object.assign({}, someObject, { b: 5 }).

Context:

Example indicates object strict equality?

Does the const somehow change the way === works?

assert(updatedData === const [
    { ticker: "AAPL", lastPrice: 195.891 },
    { ticker: "SPY", lastPrice: 286.61 },
]);

Are const object/arrays iterable?

#25 clarified enumeration order, but did not say whether const things are iterable.

Can I do:

let constObj = #{ a:1, b:2 };
for (let i of constObj) { console.log(i) }

let constArr = #[ 1, 2 ];
for (let i of constArr) { console.log(i) }

JSON.stringify examples have incorrect output

  • My issue is not bikeshedding. (we can bikeshed at a later proposal stage)
  • My issue is not a request to add lots of stuff to the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too in-the-weeds.

A really minor issue:
JSON.stringify(#{ a: #[1, 2, 3] }); should result in '{"a":[1,2,3]}' (and not "{a: [1, 2, 3]}", as in the current example).
Same goes for the second example in https://github.com/rricard/proposal-const-value-types#jsonstringify

Happy to make a PR to fix it :)

Expand the FAQ. What's your question that you'd like to see answered?

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

I bet people have some more questions that they'd like answers to. A couple questions that come to mind for me:

  • Why is there no way to do a quick, reference-based equality check?
  • Why can records and tuples only contain records, tuples or other primitives?
  • Isn't the # character taken by private fields and methods?

If you have additional questions, please add them to this thread so we can make sure that they are well-documented.

Tuples across Realms

Here it mentions that "the Tuple prototype is an object that contains the same methods as Array with a few changes". This could lead to some problems when dealing with tuples across realms. Namely, it's possible to have two tuples that are "identical" but their prototypes are not (in the same way that Object.getPrototypeOf(arrayFromRealmA) !== Object.getPrototypeOf(arrayFromRealmB)). This can be of particular importance if we expect these prototypes to really be "just objects", where one could modify them, leading to a situation where the prototypes are in fact quite different across different realms beyond merely not being "value identical". You can imagine someone removing .pop for example, and arguably this is an interesting question even in the non-multi-realm case.

So, a couple options:

  1. One could argue that perhaps that prototypes of tuples should be Records themselves, ensuring that they can't be modified and avoiding a whole class of weird errors normal objects have, while allowing tuples from different realms to be trivially comparable. Of course, the main problem with this is that Records under the current spec do not allow storing functions, since functions aren't value types. It would be interesting to allow function value types (functions that do not reference anything outside their scope, aka supercombinators, and that effectively act as frozen -- the built-ins like pop would have native implementations so they would not have to be strangely written).

  2. Maybe tuples shouldn't have prototype methods at all, and we can simply provide "utility" functions to use with them, like pop(tuple). With the "::" syntax, we could also have almost the same usage: tuple::pop()

  3. The tuple prototype is some sort of super magic object that is both frozen and identical across realms (preferably not this).

with syntax

I was picturing that the syntax would be obj with .prop = value, and I see the syntax here as obj with prop = value. To me, seeing prop bare like that makes me think of a lexically scoped variable. Are you sure about omitting the .?

ConstMap and ConstSet

It may be useful to have deeply immutable Map and Set types. Should these be included in this proposal? Or a follow-on?

Destructuring supported?

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

Not sure about this, but is destructuring supported? E.g. does this work:

const {x, y} = #{x: 4, y: 5}

I'm guessing it would, but it's definitely grey area, and I believe should be mentioned in the proposal.

But it gets worse:

const {x, y, ...z} = #{x: 4, y: 5, a: 6, b: 7}

I would probably want z to be #{a: 6, b: 7} and not {a: 6, b: 7}, so theoretically, we should have this syntax:

const #{x, y, ...z} = #{x: 4, y: 5, a: 6, b: 7}

Sidenote:
If the last is true, it puts a damper on the const alternative to #, because this would look like this:

const const {x, y, ...z} = #{x: 4, y: 5, a: 6, b: 7}

Computed keys

Great proposal! 👊

It would be useful to see a Record constructed with keys from a Symbol, a variable and an expression. E.g. something akin to computed property names.

(It would also be useful to call it out as not being supported if it isn't supported.)

What is a const object?

So we're going to dump the name const object and favor a more different naming (for the proposal and the results of a typeof)

For now we have multiple ideas:

  • Immutable objects/arrays - typeof immObjOrArr === "immutable"
  • Records/Tuples - typeof recordOrTuple=== "record"
  • Your idea here!

@ljharb, I am very interested in what you think!

`Object.is` semantics?

Edit: clarity.

Two questions on this subject:

  1. Should Object.is carry pointer equality semantics or === semantics?
  2. If it carries pointer equality semantics, how much of its identity should be left to the implementation (for optimization purposes)?

I'm leaning towards this:

  1. Pointer equality.
  2. (a => Object.is(a, a))(o) must return true for every const value o, Object.is(a, b) must return false if a !== b or if a and b are both const values from different realms, and it's implementation-dependent in all other cases.

Const objects and const arrays are primitives

Part of the idea I was thinking about for const objects and arrays is that they would be JavaScript "primitives", in the sense that TC39 has long been thinking about "value types". This has a few implications:

  • They are not functions and cannot be called. So the constructor of a const class is an object. (It may be deeply frozen, though.)
  • They are constructed with functions, and not with new. new always returns an object. If a constructor returns a primitive, the this value is used for the return value instead.
    • An implication of this is that @const classes will use callable constructors; maybe this could be done with a construct like @const class Foo { @call constructor() { } }
  • Const objects and arrays cannot be used as keys in WeakMap, since those only have object keys
  • Although @const [] === @const [], each time an object wrapper is made of them, it's unique: Object(@const []) !== Object(@const []).

Document aspects of motivation and clarify semantics

This proposal makes a lot of decisions. Could you include in the README notes about why they were made? For example:

  • Why are objects deeply immutable, and not allowed to point to mutable things?
  • How does this relate to the decorators proposal?
  • Why are the features included here important?
  • What possible follow-on proposals could be made, and how would they build off of this proposal?

Object.defineProperty() on a Record or Tuple (or wrapper)

Can we spell out how Records and Tuples would interact with Object.defineProperty() and similar APIs? Are they to be considered as having extensions prevented and with all fields being non-writable and non-configurable, or is there a more explicit behaviour intended?

Collect developer feedback about the ergonomics of `#{ }`/`#[ ]` (vs alternatives `@[ ]`/`@{ }` or `{| }`/`[| ]`)

Notice by @rricard: This issue is now only open to discuss the following alternatives: #{ }/#[ ], @{ }/@[ ], {| |}/[| |] or {| }/[| ]


Original issue text:

Most people I've talked to are pretty positive about @const in two ways:

  • Since it applies deeply on the data structure, it's not too wordy to have these extra letters at the beginning. And a terser syntax (like #{ } / #[ ]) might be too cryptic and confusing.
  • The name @const makes sense, since it's just like declaring a variable const, but it goes deeply through the data structure.

It'd be good to continue collecting feedback to understand if these intuitions are widely shared.

Interaction with Structured Clone algorithm

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

The structured clone algorithm defines how JavaScript values can be shared with workers. It currently supports all JavaScript value types except for symbols. It could be worth determining if the intention would be to allow this type to be handled through the algorithm on the web as a use case for sharing immutable data with workers.

How Immutable Records Improve uniqBy Proposal

I have opened a proposal on the TC39 Discourse for adding Array.prototype.uniqBy to the language.

Implementing uniqBy would be made better through the use of Immutable Records. Since Immutable Records have built-in value equality they could be used in the following way to support uniqBy on multiple values:

[ {a: 1, b: 1},  {a: 1, b: 2} ].uniqBy(x => #[ x.a, x.b ]); // [ {a: 1, b: 1},  {a: 1, b: 2} ]

I think these two proposals have good synergy and could be used to help one another become part of the language.

Other links:
Array.prototype.uniqBy Github document

Consider JSON.parse

I concur with the proposal that records and tuples serialize to JSON via JSON.stringify as object and array. It strikes me that there may be value in providing support for going in the other direction, to somehow parameterize JSON.parse to yield a structure consisting of records and tuples. Obviously, this could be implemented in user land with the existing JSON.parse and some tree crawling and type conversion logic, but supporting it directly would be a good deal more efficient. Since many use cases for parsing JSON (actually, I'd speculate a majority of them) neither need nor want the parsed result to be mutable this could be a win.

User-defined `@const` classes, and Temporal

I was imagining that we might have @const class Point { x; y; } declarations, which would declare a class that can then be "instantiated" as @const(Point) { x: 1, y: 2 }. I can understand if these are left for a future proposal, though.

If user-defined @const classes are left for later, I'd suggest that Temporal also shouldn't have special support for being considered const, since it's working towards a goal of being faithfully implementable in JavaScript. I'd suggest leaving this all for the future, and not blocking Temporal on this proposal.

Order dependence for equality

From the readme:

if they have the same values stored and inserted in the same order, they will be considered strictly equal

This is surprising to me. Contrast python dicts or Purescript records.

It's also just a weird path-dependence in values: I would not expect it to matter if I construct something by doing (const { a: 0 }) with .b = 1 or (const { b: 1 }) with .a = 0

Have you considered using lenses for updates?

For those who aren't familiar with lenses:

I know this sounds a bit like I'm getting ready to suggest some obscure functional programming concept, but I'm not.

Lenses are simple {get(o): v, set(o, v): o} pairs. They are really nice for operating on nested data without having to deal with all the boilerplate of the surrounding context. They are easy to write and easy to compose.

// Gets and sets `b` inside of `a`
function nest(a, b) {
    return {
        get: o => b.get(a.get(o)),
        set: (o, v) => a.set(o, b.set(a.get(o), v)),
    }
}

// Updates a value inside a view
function modify(o, lens, func) {
    return lens.set(o, func(lens.get(o)))
}

// Updates a value inside a view path
// Called as `modifyIn(o, func, ...lenses)`
function modifyIn(o, func, lens, ...rest) {
    if (rest.length === 0) return modify(o, lens, func)
    return modifyIn(o, (v) => modify(v, lens, func), ...rest)
}

That modify operation is really where all the power in lenses lie, not really the get or set.

Edit: Forgot to double-check some variable names.
Edit 2: Fix yet another bug. Also, make it clearer which edits apply to this section.


Edit: clarity, alter precedence of function call vs lens
Edit 2: Update to align with the current proposal

Lenses are pretty powerful and concise, and they provide easy, simple sugar over just direct, raw updates. But it kind of requires revising the model a bit:

  • Getting properties could be something like object.@lens
  • Setting properties could be something like object with .@foo.@bar.@baz = value, ...
  • Updating properties could be something like object with .@foo.@bar.@baz prev => next, ...
  • Sugar would exist for properties, using .prop instead of .@lens
  • Sugar would exist for indices, using [key] instead of .@lens
  • In each of these, you can do stuff like .@foo(1, 2, 3).@bar("four", "five", "six") and so on. The functions would be applied eagerly before actually evaluating updates. However, member expressions like @(foo.bar) need parenthesized.
  • Yes, you can merge this all, like in object with [email protected][baz] prev => next, ....

Of course, I'm not beholden to the syntax here, and I'd like to see something a little more concise.

For a concrete example, consider this:

const portfolio = #[
    { type: "bond", balance: 129.46 },
    { type: "bond", balance: 123.54 },
];

const updatedData = portfolio
    with [0].balance = addInterest(portfolio[0].balance, 1.92),
         [1].balance = addInterest(portfolio[1].balance, 1.25);

assert(updatedData === #[
    { type: "bond", balance: ... },
    { type: "bond", balance: ... },
]);

With my suggestion, this might look a little closer to this:

const portfolio = #[
    { type: "bond", balance: 129.46 },
    { type: "bond", balance: 123.54 },
];

const updatedData = portfolio
    with [0].balance prev => addInterest(prev, 1.92),
         [1].balance prev => addInterest(prev, 1.25);

assert(updatedData === #[
    { type: "bond", balance: 129.46 * (1 + 1.92/100) },
    { type: "bond", balance: 123.54 * (1 + 1.25/100) },
]);

const incrementWages = data =>
    data with .byCity = data.byCity.map(
        ([city, list]) => #[city, list.map(
            item => item with hourlyWage = item.hourlyWage + 1
        )]
    )

If you wanted to push or pop in the middle, you could do this:

record with .list l => l.pop()

Lenses would have to have get and set, but if you make the set to always just an update function, you could also make it a little more useful and powerful. You could then change it to this:

  • Get: object.@lenslens.get(object)
  • Update: object with .@lens by funclens.update(object, func)
  • Sugar: object with .@lens = valueobject with .@lens by () => value
  • Sugar: object with .@lens += valueobject with .@lens by prev => prev + value
  • Chaining: object with .@lens.@lens, object with .@lens.@lens = value, etc.
  • Lens: {get(object), update(object, ...args)}
  • .key and [index] can be used in place of a lens as sugar for using the lens @updateKey("key") and @updateKey(index)
    • Static: object with .key by update, object with .key = value
    • Dynamic: object with [index] by update, object with [index] = value
    • updateKey(key) here is an internal lens that returns the rough equivalent of {get: x => x[key], update: v => ({...v, [key]: value})}. This is not exposed to user code, and simple property access could be altered to be specified in terms of this.
  • In the future, this could be extended to work with mutable properties via x.@lens = value and x.@lens(update) as procedural operations.

And of course, it'd work similarly:

const portfolio = #[
    { type: "bond", balance: 129.46 },
    { type: "bond", balance: 123.54 },
];

const updatedData = portfolio
    with [0].balance(prev => addInterest(prev, 1.92)),
         [1].balance(prev => addInterest(prev, 1.25));

assert(updatedData === #[
    { type: "bond", balance: 129.46 * (1 + 1.92/100) },
    { type: "bond", balance: 123.54 * (1 + 1.25/100) },
]);

const incrementWages = data =>
    data with .byCity = data.byCity.map(
        ([city, list]) => #[city, list.map(
            item => item with .hourlyWage += 1
        )]
    )

However, the real power of doing it that way is in this:

// No, really. This is actually *that* concise and boilerplate-free.
const each = {update: (list, func) => #[...[...list].map(func)]}
const incrementWages = data =>
    data with .byCity.@each[1].@each.hourlyWage += 1

Here's what a transpiler might desugar that to:

const each = {update: (list, func) => list.map(func)}
const incrementWages = data =>
    #{...data, byCity: each.update(data.byCity,
        pair => #[pair[0], each.update(pair[1],
            item => #{...item, hourlyWage: item.hourlyWage + 1}
        ), ...pair.slice(2)]
    )}

Transpiler strategy

What should be the strategy for implementing this proposal in transpilers? I think it'd make sense to include a high-level design doc in this repository, but maybe outside the README. Perfect value semantics will not be possible; what compromises do we recommend?

Expand the glossary: What terms would you like to see defined?

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

This proposal uses a number of fancy TC39 words, and introduces more terms of its own. I think we should define more of them in the glossary at the bottom. For example:

  • Value type
  • Primitive
  • Record
  • Tuple
  • Observable

What terms in the README are confusing to you? Please post them here so we can include a definition.

Const objects should *not* be valid weak map keys

This can essentially allow people to assign mutable state to an immutable object, and thus it should be disallowed for the same reasons it's not possible to use strings, symbols, numbers, booleans, undefined, or null. It's also consistent with the idea that (const {a: 1}) === (const {a: 1}), despite them being not necessarily physically identical.

Motivation to use with as a keyword and not as a method

After reading the proposal text I didn't find a single example where with operator would look different then with method:

const record2 = record1 with .b = 5;
// ->
const record2 = record1.with({b: 5});
const tuple2 = tuple1 with [0] = 2;
// ->
const tuple2 = tuple1.with({0: 2});
// or
const tuple2 = tuple1.with([2]);
assert((record with [k] = 5) === #{ a: 1, b: 5, c: 3});
assert((tuple with [i] = 2) === #[2, 2, 3]);
// ->
assert(record.with({[k]: 5}) === #{ a: 1, b: 5, c: 3});
assert(tuple.with({[i]: 2}) === #[2, 2, 3]);
const updatedData = marketData
    with [0].lastPrice = 195.891,
         [1].lastPrice = 286.61;
// ->
const updatedData = marketData.with([
    {lastPrice: 195.891},
    {lastPrice: 286.61},
]);
assert((#{} with .a = 1, .b = 2) === #{ a: 1, b: 2 });
assert((#[ {} ] with [0].a = 1) === #[ { a: 1 } ]);
assert((x = 0, #[ {} ] with [x].a = 1) === #[ { a: 1 } ]);
// ->
assert(#{}.with({a: 1, b: 2}) === #{ a: 1, b: 2 });
assert(#[ {} ].with([{a: 1}]) === #[ { a: 1 } ]);
assert((x = 0, #[ {} ].with({[x]: {a: 1}})) === #[ { a: 1 } ]);
record with .a = 1
record with .a = 1, .b = 2
tuple with [0] = 1
record with .a.b = 1
record with ["a"]["b"] = 1
// ->
record.with({a: 1})
record.with({a: 1, b: 2})
tuple.with([1])
record.with({a: {b: 1}})
record.with({["a"]: {["b"]: 1}})
assert(record === record2 with .a = 1);
// ->
assert(record === record2.with({a: 1});
let record2 = record with .c = 3;
record2 = record2 with .a = 3, .b = 3;
// ->
let record2 = record.with({c: 3});
record2 = record2.with({a: 3, b: 3});
const newState = state with .settings.theme = "dark";
// ->
const newState = state.with({settings: {theme: "dark"}});

It's arguable if with should accept #{}/#[] or {}/[], but consider this open question just as a syntax difference

Typo: asset -> assert

There are a couple of example lines in the readme that misspell "assert" as "asset"

Should we expose a way to perform a "identity/reference equality" check?

  • My issue is not bikeshedding. (we can bikeshed at
    a later proposal stage
    )
  • My issue is not a request to add lots of stuff to
    the semantics. (we can add things in follow-on proposals)
  • My issue is appropriate for a stage 0 proposal and not too
    in-the-weeds.

We've gotten at least one piece of feedback that it would be nice to be able to perform a physical equality check on records and tuples, specifically because it is more common in functional languages to have both a physical and structural equality comparison operator.

I wasn't sure exactly why this would be useful, so I asked around, and got some feedback from some OCaml people. In OCaml, there is both a physical equality and structural equality operator. The physical equality operator was identified as most useful if the domain of values being compared is hash cons-ed, (effectively interned via a hash table). If the allocations of the values are entirely controlled, then all structurally unique values will only ever exist once, which means that physical equality guarantees structural equality. This would be useful as a way to provide a consistently fast comparison, as it would likely just result in a pointer check.

While this feature seems useful in the above mentioned scenario, I don't think that this proposal would benefit from it's inclusion. There is precedent (citation needed) for implementors rejecting features that expose significant implementation details, and exposing physical equality of value types would fall into that category pretty plainly. If the ecosystem started relying on the way that a specific implementation represented values, it would cause significant problems when/if the implementation wanted to change.

I'm opening this issue to catalog the feedback we've received, and the feedback on the feedback I've solicited.

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.