Comments (11)
Puts on level-mem + subleveldown versus memory-level + sublevels, with json encoding. Win.
Puts on level-mem versus memory-level, which uses strings internally. Double win.
from abstract-level.
iterator.next() on level-mem versus memory-level, using json and utf8 valueEncodings. No difference (because the main cost is setImmediate).
iterator.next() on level-mem versus iterator.nextv(1000) on memory-level. Not a fair benchmark, but the new nextv() API is an obvious win.
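To see why batching wins even before any real database work happens, here is a hypothetical mock, not abstract-level's real internals. The real next() and nextv() are asynchronous (each call costs at least one event-loop roundtrip, e.g. a setImmediate); this sketch is synchronous so we can simply count how many calls a consumer needs to drain the iterator.

```javascript
// Mock iterator over an in-memory array of [key, value] entries.
class MockIterator {
  constructor (entries) {
    this.entries = entries
    this.pos = 0
    this.calls = 0 // roundtrips the consumer needed
  }

  // One entry per call, like iterator.next()
  next () {
    this.calls++
    if (this.pos >= this.entries.length) return undefined
    return this.entries[this.pos++]
  }

  // Up to `size` entries per call, like iterator.nextv(size)
  nextv (size) {
    this.calls++
    const batch = this.entries.slice(this.pos, this.pos + size)
    this.pos += batch.length
    return batch
  }
}

const data = Array.from({ length: 5000 }, (_, i) => [`key${i}`, `value${i}`])

const a = new MockIterator(data)
while (a.next() !== undefined) {}
console.log('next() calls:', a.calls) // 5001: one per entry, plus one for the end

const b = new MockIterator(data)
while (b.nextv(1000).length > 0) {}
console.log('nextv(1000) calls:', b.calls) // 6: one per batch, plus one for the end
```

With async iterators, each of those calls is a deferred callback, so reducing 5001 roundtrips to 6 is where the win comes from.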
iterator.next() on level versus iterator.next() on classic-level. Slower. I reckon that's because I changed the structure of the cache (in short: [entry, entry, ..] instead of [key, value, key, value, ..]) which should make nextv() faster. That'll be difficult to compare fairly.
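A minimal sketch of the two cache layouts described above (illustrative, not the actual classic-level code): with the old interleaved layout, next() consumes two slots per entry, while with the new paired layout, nextv(size) is a single slice.

```javascript
// Old layout: one flat array, keys and values interleaved
const flat = ['a', 1, 'b', 2, 'c', 3]

// New layout: one [key, value] pair per element
const paired = [['a', 1], ['b', 2], ['c', 3]]

// next() against the flat layout: two slots per entry
function nextFlat (cache, pos) {
  return { entry: [cache[pos], cache[pos + 1]], pos: pos + 2 }
}

// nextv(size) against the paired layout: one slice per batch
function nextvPaired (cache, pos, size) {
  return { entries: cache.slice(pos, pos + size), pos: pos + size }
}

console.log(nextFlat(flat, 2).entry) // ['b', 2]
console.log(nextvPaired(paired, 0, 2).entries) // [['a', 1], ['b', 2]]
```

The paired layout adds a small per-entry allocation on write (which plausibly explains the slower next()), but hands whole batches back without a reassembly loop.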
Batch puts on level-mem versus memory-level. Win.
Gets on level-mem versus memory-level. Win. However, memory-level is slower when using a binary valueEncoding. That warrants a closer look.
> However, memory-level is slower when using a binary valueEncoding. That warrants a closer look.
It's not due to binary. It happens on any encoding when this code path is triggered: abstract-level/abstract-level.js, lines 299 to 301 in d711af3.
V8 has a performance issue with the spread operator when properties are not present on the source object. The following "fixes" it:

```js
options.keyEncoding = keyFormat
options.valueEncoding = valueFormat
```

Instead of:

```js
options = { ...options, keyEncoding: keyFormat, valueEncoding: valueFormat }
```

As does using Object.assign() instead of spread. Could switch to Object.assign(), but I do still generally prefer the spread operator for being idiomatic (not being vulnerable to prototype pollution could be another argument, but I don't see how that would matter here).
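For reference, all three variants produce an equivalent options object; per the comment above, V8 merely optimizes them differently when the properties are absent from the source. The names below are illustrative stand-ins, not abstract-level's actual code:

```javascript
const keyFormat = 'utf8'
const valueFormat = 'buffer'
const input = { reverse: true } // no keyEncoding/valueEncoding present

// 1. Direct assignment onto a copy
const a = { ...input }
a.keyEncoding = keyFormat
a.valueEncoding = valueFormat

// 2. Spread with explicit overrides (the slow path, per the comment above)
const b = { ...input, keyEncoding: keyFormat, valueEncoding: valueFormat }

// 3. Object.assign (reported to avoid the V8 issue)
const c = Object.assign({}, input, {
  keyEncoding: keyFormat,
  valueEncoding: valueFormat
})

// Same properties, same order, same values
console.log(JSON.stringify(a) === JSON.stringify(b)) // true
console.log(JSON.stringify(b) === JSON.stringify(c)) // true
```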
The same get() performance regression exists on classic-level. Using Object.assign() would fix it.
Quick-and-dirty benchmark of streams, comparing nextv() to next(). Ref Level/community#70 and Level/read-stream#2. Unrelated to abstract-level, but it's a win.
```
classic-level | using nextv() | took 1775 ms, 563380 ops/sec
classic-level | using nextv() | took 1577 ms, 634115 ops/sec
classic-level | using nextv() | took 1549 ms, 645578 ops/sec
classic-level | using nextv() | took 1480 ms, 675676 ops/sec
classic-level | using nextv() | took 1572 ms, 636132 ops/sec
avg 1591 ms

level | using next() | took 1766 ms, 566251 ops/sec
level | using next() | took 1776 ms, 563063 ops/sec
level | using next() | took 1737 ms, 575705 ops/sec
level | using next() | took 1711 ms, 584454 ops/sec
level | using next() | took 1729 ms, 578369 ops/sec
avg 1744 ms
```
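For anyone reproducing this: the ops/sec figures above follow directly from the elapsed time. A tiny sketch of that arithmetic (assuming one million reads per run, which is consistent with the logged numbers; the actual benchmark script is not shown here):

```javascript
// Derive ops/sec the same way the log lines above report it
function opsPerSec (ops, ms) {
  return Math.round(ops / (ms / 1000))
}

// First classic-level run: 1,000,000 reads in 1775 ms
console.log(opsPerSec(1_000_000, 1775)) // 563380

// First level run: 1,000,000 reads in 1766 ms
console.log(opsPerSec(1_000_000, 1766)) // 566251
```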
Did a better benchmark of streams. This one takes some explaining. In the graph legend below:

- new-nextv is using a level-read-stream on a classic-level iterator using nextv(size)
- old-next is level().createReadStream(), i.e. level-iterator-stream on a leveldown iterator using next()
- new-next is using a level-read-stream on a classic-level iterator using next() (a temporary code path for fair benchmarking)
- new-nextv-tweaked uses userland options to make sure that the byte-hwm is more than the expected byte size of a nextv(size) array, because otherwise classic-level would hit the byte-hwm first and return partially filled arrays
- old-next-tweaked uses userland options to increase both byte-hwm and stream-hwm (manually creating a level-iterator-stream so that there's a way to specify both), in such a way that the byte-hwm is effectively ignored and we can compare the effect of merely increasing stream-hwm.

Here "byte-hwm" is the highWaterMark on the C++ side, measured in bytes, and "stream-hwm" is the highWaterMark of object-mode streams, measured in number of entries.
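The interaction of the two limits can be sketched as follows. This is illustrative JavaScript, not classic-level's actual C++ code: a batch stops growing when it hits either the entry-count limit (stream-hwm) or the byte limit (byte-hwm), whichever comes first.

```javascript
// Fill one batch from an array of [key, value] string entries,
// honoring both a count limit and a byte limit.
function fillBatch (entries, streamHwm, byteHwm) {
  const batch = []
  let bytes = 0
  for (const [key, value] of entries) {
    const size = key.length + value.length // stand-in for actual byte size
    // Always admit at least one entry, then stop at whichever limit hits first
    if (batch.length >= streamHwm || (bytes > 0 && bytes + size > byteHwm)) break
    batch.push([key, value])
    bytes += size
  }
  return batch
}

// 100 entries of ~100-byte values, as in the benchmark described above
const entries = Array.from({ length: 100 }, (_, i) => [`k${i}`, 'x'.repeat(100)])

// Generous byte-hwm: the entry-count limit wins
console.log(fillBatch(entries, 16, 1e6).length) // 16

// Tight byte-hwm: batches come back partially filled
console.log(fillBatch(entries, 16, 300).length) // 2
```

This is exactly the "partially filled arrays" effect that new-nextv-tweaked avoids by raising the byte-hwm above the expected byte size of a batch.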
That's about half of the explainer needed... In hindsight I wish I hadn't done the abstract-level and nextv() work in parallel. So please allow me to just skip to conclusions (and later document how a user should tweak their options):

- nextv(size) is faster than next() (compare new-nextv to old-next)
- Though the refactorings needed to implement nextv(size) make next() slightly slower (compare old-next to new-next)
- Both the old next() and the new nextv(size) can be tweaked through options, but nextv(size) can't be beaten.
- Bottom line: streams became faster. I won't give any numbers, because there are too many factors.
TLDR: we're good. Most importantly, the performance characteristics of streams and iterators did not change, in the sense that an app using smaller or larger values (I used 100 bytes) would not be hurt by upgrading to abstract-level or classic-level. That's because leveldown internally already had two highWaterMark mechanisms; classic-level merely "hoists" one of them up to streams. So if an app has extremely large values, we will not prefetch more items than before. If an app has small values, we will not prefetch less than before. If an app is not using streams, iterators still prefetch (as you can see later when I finally push all code).
> and later document how a user should tweak their options

Done in Level/classic-level#1