
level

Universal abstract-level database for Node.js and browsers. This is a convenience package that exports classic-level in Node.js and browser-level in browsers, making it an ideal entry point to start creating lexicographically sorted key-value databases.

📌 Which module should I use? What is abstract-level? Head over to the FAQ.



Usage

If you are upgrading: please see UPGRADING.md.

const { Level } = require('level')

// Create a database
const db = new Level('example', { valueEncoding: 'json' })

// Add an entry with key 'a' and value 1
await db.put('a', 1)

// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])

// Get value of key 'a': 1
const value = await db.get('a')

// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
  console.log(value) // 2
}

All asynchronous methods also support callbacks.

Callback example
db.put('a', { x: 123 }, function (err) {
  if (err) throw err

  db.get('a', function (err, value) {
    console.log(value) // { x: 123 }
  })
})

TypeScript type declarations are included and cover the methods that are common between classic-level and browser-level. Usage from TypeScript requires generic type parameters.

TypeScript example
// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to Level<string, string>.
const db = new Level<string, any>('./db', { valueEncoding: 'json' })

// All relevant methods then use those types
await db.put('a', { x: 123 })

// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })

// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })

// It works the same for sublevels
const abc = db.sublevel('abc')
const xyz = db.sublevel<string, any>('xyz', { valueEncoding: 'json' })

Install

With npm do:

npm install level

For use in browsers, this package is best used with browserify, webpack, rollup or similar bundlers. For a quick start, visit browserify-starter or webpack-starter.

Supported Platforms

At the time of writing, level works in Node.js 12+ and Electron 5+ on Linux, Mac OS, Windows and FreeBSD, including any future Node.js and Electron release thanks to Node-API, including ARM platforms like Raspberry Pi and Android, as well as in Chrome, Firefox, Edge, Safari, iOS Safari and Chrome for Android. For details, see Supported Platforms of classic-level and Browser Support of browser-level.

Binary keys and values are supported across the board.

API

The API of level follows that of abstract-level. The documentation below covers it all except for Encodings, Events and Errors which are exclusively documented in abstract-level. For options and additional methods specific to classic-level and browser-level, please see their respective READMEs.

An abstract-level and thus level database is at its core a key-value database. A key-value pair is referred to as an entry here and typically returned as an array, comparable to Object.entries().

db = new Level(location[, options])

Create a new database or open an existing database. The location argument must be a directory path (relative or absolute) where LevelDB will store its files, or in browsers, the name of the IDBDatabase to be opened.

The optional options object may contain:

  • keyEncoding (string or object, default 'utf8'): encoding to use for keys
  • valueEncoding (string or object, default 'utf8'): encoding to use for values.

See Encodings for a full description of these options. Other options (except passive) are forwarded to db.open() which is automatically called in a next tick after the constructor returns. Any read & write operations are queued internally until the database has finished opening. If opening fails, those queued operations will yield errors.
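
For instance, a write issued immediately after construction is queued until opening completes. A minimal sketch (the 'example' location and entry are illustrative):

const { Level } = require('level')

const db = new Level('example')

// Safe to call before the database has finished opening:
// the operation is queued internally
await db.put('greeting', 'hello')

console.log(db.status) // 'open'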

db.status

Read-only getter that returns a string reflecting the current state of the database:

  • 'opening' - waiting for the database to be opened
  • 'open' - successfully opened the database
  • 'closing' - waiting for the database to be closed
  • 'closed' - successfully closed the database.

db.open([callback])

Open the database. The callback function will be called with no arguments when successfully opened, or with a single error argument if opening failed. If no callback is provided, a promise is returned. Options passed to open() take precedence over options passed to the database constructor. The createIfMissing and errorIfExists options are not supported by browser-level.

The optional options object may contain:

  • createIfMissing (boolean, default: true): If true, create an empty database if one doesn't already exist. If false and the database doesn't exist, opening will fail.
  • errorIfExists (boolean, default: false): If true and the database already exists, opening will fail.
  • passive (boolean, default: false): Wait for, but do not initiate, opening of the database.

It's generally not necessary to call open() because it's automatically called by the database constructor. It may however be useful to capture an error from failure to open, that would otherwise not surface until another method like db.get() is called. It's also possible to reopen the database after it has been closed with close(). Once open() has then been called, any read & write operations will again be queued internally until opening has finished.

The open() and close() methods are idempotent. If the database is already open, the callback will be called in a next tick. If opening is already in progress, the callback will be called when that has finished. If closing is in progress, the database will be reopened once closing has finished. Likewise, if close() is called after open(), the database will be closed once opening has finished and the prior open() call will receive an error.
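
To surface an opening error early rather than on a later read, one might await open() explicitly. A sketch; the error handling shown is illustrative:

try {
  await db.open()
} catch (err) {
  console.error('failed to open:', err)
}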

db.close([callback])

Close the database. The callback function will be called with no arguments if closing succeeded or with a single error argument if closing failed. If no callback is provided, a promise is returned.

A database may have associated resources like file handles and locks. When the database is no longer needed (for the remainder of a program) it's recommended to call db.close() to free up resources.

After db.close() has been called, no further read & write operations are allowed unless and until db.open() is called again. For example, db.get(key) will yield an error with code LEVEL_DATABASE_NOT_OPEN. Any unclosed iterators or chained batches will be closed by db.close() and can then no longer be used even when db.open() is called again.
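
A minimal sketch of that behavior (the LEVEL_DATABASE_NOT_OPEN code is documented by abstract-level):

await db.close()

try {
  await db.get('a')
} catch (err) {
  console.log(err.code) // 'LEVEL_DATABASE_NOT_OPEN'
}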

db.supports

A manifest describing the features supported by this database. Might be used like so:

if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}

db.get(key[, options][, callback])

Get a value from the database by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to decode the value.

The callback function will be called with an error if the operation failed. If the key was not found, the error will have code LEVEL_NOT_FOUND. If successful the first argument will be null and the second argument will be the value. If no callback is provided, a promise is returned.
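
For example, distinguishing a missing key from other failures might look like this (a sketch; the key is made up):

let value

try {
  value = await db.get('does-not-exist')
} catch (err) {
  if (err.code === 'LEVEL_NOT_FOUND') {
    value = undefined // treat as absent
  } else {
    throw err
  }
}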

db.getMany(keys[, options][, callback])

Get multiple values from the database by an array of keys. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the keys.
  • valueEncoding: custom value encoding for this operation, used to decode values.

The callback function will be called with an error if the operation failed. If successful the first argument will be null and the second argument will be an array of values with the same order as keys. If a key was not found, the relevant value will be undefined. If no callback is provided, a promise is returned.
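
For example, a sketch assuming the entries written in the Usage section above:

const values = await db.getMany(['a', 'b', 'c'])

console.log(values) // [1, 2, undefined] ('c' was never written)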

db.put(key, value[, options][, callback])

Add a new entry or overwrite an existing entry. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
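
For example, overriding the value encoding for a single write (a sketch, assuming a database created with valueEncoding: 'json'):

// Store this one value as plain utf8 rather than json
await db.put('raw', 'hello', { valueEncoding: 'utf8' })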

db.del(key[, options][, callback])

Delete an entry by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
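
For example (a sketch; note that deleting a key that does not exist is not an error):

await db.del('a')

// No LEVEL_NOT_FOUND here; del succeeds for missing keys too
await db.del('does-not-exist')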

db.batch(operations[, options][, callback])

Perform multiple put and/or del operations in bulk. The operations argument must be an array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.

Each operation must be an object with at least a type property set to either 'put' or 'del'. If the type is 'put', the operation must have key and value properties. It may optionally have keyEncoding and / or valueEncoding properties to encode keys or values with a custom encoding for just that operation. If the type is 'del', the operation must have a key property and may optionally have a keyEncoding property.

An operation of either type may also have a sublevel property, to prefix the key of the operation with the prefix of that sublevel. This allows atomically committing data to multiple sublevels. Keys and values will be encoded by the sublevel, to the same effect as a sublevel.batch(..) call. In the following example, the first value will be encoded with 'json' rather than the default encoding of db:

const people = db.sublevel('people', { valueEncoding: 'json' })
const nameIndex = db.sublevel('names')

await db.batch([{
  type: 'put',
  sublevel: people,
  key: '123',
  value: {
    name: 'Alice'
  }
}, {
  type: 'put',
  sublevel: nameIndex,
  key: 'Alice',
  value: '123'
}])

The optional options object may contain:

  • keyEncoding: custom key encoding for this batch, used to encode keys.
  • valueEncoding: custom value encoding for this batch, used to encode values.

Encoding properties on individual operations take precedence. In the following example, the first value will be encoded with the 'utf8' encoding and the second with 'json'.

await db.batch([
  { type: 'put', key: 'a', value: 'foo' },
  { type: 'put', key: 'b', value: 123, valueEncoding: 'json' }
], { valueEncoding: 'utf8' })

The callback function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.

chainedBatch = db.batch()

Create a chained batch, when batch() is called with zero arguments. A chained batch can be used to build and eventually commit an atomic batch of operations. Depending on how it's used, it is possible to obtain greater performance with this form of batch(). On browser-level however, it is just sugar.

await db.batch()
  .del('bob')
  .put('alice', 361)
  .put('kim', 220)
  .write()

iterator = db.iterator([options])

Create an iterator. The optional options object may contain the following range options to control the range of entries to be iterated:

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • reverse (boolean, default: false): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
  • limit (number, default: Infinity): limit the number of entries yielded. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be returned instead of the lowest keys.

The gte and lte range options take precedence over gt and lt respectively. If no range options are provided, the iterator will visit all entries of the database, starting at the lowest key and ending at the highest key (unless reverse is true). In addition to range options, the options object may contain:

  • keys (boolean, default: true): whether to return the key of each entry. If set to false, the iterator will yield keys that are undefined. Prefer to use db.keys() instead.
  • values (boolean, default: true): whether to return the value of each entry. If set to false, the iterator will yield values that are undefined. Prefer to use db.values() instead.
  • keyEncoding: custom key encoding for this iterator, used to encode range options, to encode seek() targets and to decode keys.
  • valueEncoding: custom value encoding for this iterator, used to decode values.
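
Putting the range options together, a sketch (keys and bounds are illustrative):

// The 10 entries with the highest keys that are >= 'a' and < 'x'
for await (const [key, value] of db.iterator({ gte: 'a', lt: 'x', reverse: true, limit: 10 })) {
  console.log(key, value)
}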

📌 To instead consume data using streams, see level-read-stream and level-web-stream.

keyIterator = db.keys([options])

Create a key iterator, having the same interface as db.iterator() except that it yields keys instead of entries. If only keys are needed, using db.keys() may increase performance because values won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.keys() does not take keys, values and valueEncoding options.

// Iterate lazily
for await (const key of db.keys({ gt: 'a' })) {
  console.log(key)
}

// Get all at once. Setting a limit is recommended.
const keys = await db.keys({ gt: 'a', limit: 10 }).all()

valueIterator = db.values([options])

Create a value iterator, having the same interface as db.iterator() except that it yields values instead of entries. If only values are needed, using db.values() may increase performance because keys won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.values() does not take keys and values options. Note that it does take a keyEncoding option, relevant for the encoding of range options.

// Iterate lazily
for await (const value of db.values({ gt: 'a' })) {
  console.log(value)
}

// Get all at once. Setting a limit is recommended.
const values = await db.values({ gt: 'a', limit: 10 }).all()

db.clear([options][, callback])

Delete all entries or a range. Not guaranteed to be atomic. Accepts the following options (with the same rules as on iterators):

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to delete the last N entries.
  • limit (number, default: Infinity): limit the number of entries to be deleted. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be deleted instead of the lowest keys.
  • keyEncoding: custom key encoding for this operation, used to encode range options.

The gte and lte range options take precedence over gt and lt respectively. If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
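
For example, deleting a range of entries that share a prefix. A sketch, assuming utf8 keys; '"' is the character that sorts directly after '!':

// Delete all entries with keys starting with 'tmp!'
await db.clear({ gte: 'tmp!', lt: 'tmp"' })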

sublevel = db.sublevel(name[, options])

Create a sublevel that has the same interface as db (except for additional methods specific to classic-level or browser-level) and prefixes the keys of operations before passing them on to db. The name argument is required and must be a string.

const example = db.sublevel('example')

await example.put('hello', 'world')
await db.put('a', '1')

// Prints ['hello', 'world']
for await (const [key, value] of example.iterator()) {
  console.log([key, value])
}

Sublevels effectively separate a database into sections. Think SQL tables, but evented, ranged and real-time! Each sublevel is an AbstractLevel instance with its own keyspace, events and encodings. For example, it's possible to have one sublevel with 'buffer' keys and another with 'utf8' keys. The same goes for values. Like so:

db.sublevel('one', { valueEncoding: 'json' })
db.sublevel('two', { keyEncoding: 'buffer' })

An own keyspace means that sublevel.iterator() only includes entries of that sublevel, sublevel.clear() will only delete entries of that sublevel, and so forth. Range options get prefixed too.

Fully qualified keys (as seen from the parent database) take the form of prefix + key where prefix is separator + name + separator. If name is empty, the effective prefix is two separators. Sublevels can be nested: if db is itself a sublevel then the effective prefix is a combined prefix, e.g. '!one!!two!'. Note that a parent database will see its own keys as well as keys of any nested sublevels:

// Prints ['!example!hello', 'world'] and ['a', '1']
for await (const [key, value] of db.iterator()) {
  console.log([key, value])
}

📌 The key structure is equal to that of subleveldown which offered sublevels before they were built into abstract-level. This means that an abstract-level sublevel can read sublevels previously created with (and populated by) subleveldown.

Internally, sublevels operate on keys that are either a string, Buffer or Uint8Array, depending on parent database and choice of encoding. Which is to say: binary keys are fully supported. The name must however always be a string and can only contain ASCII characters.

The optional options object may contain:

  • separator (string, default: '!'): Character for separating sublevel names from user keys and each other. Must sort before characters used in name. An error will be thrown if that's not the case.
  • keyEncoding (string or object, default 'utf8'): encoding to use for keys
  • valueEncoding (string or object, default 'utf8'): encoding to use for values.

The keyEncoding and valueEncoding options are forwarded to the AbstractLevel constructor and work the same, as if a new, separate database was created. They default to 'utf8' regardless of the encodings configured on db. Other options are forwarded too but abstract-level (and therefore level) has no relevant options at the time of writing. For example, setting the createIfMissing option will have no effect. Why is that?

Like regular databases, sublevels open themselves but they do not affect the state of the parent database. This means a sublevel can be individually closed and (re)opened. If the sublevel is created while the parent database is opening, it will wait for that to finish. If the parent database is closed, then opening the sublevel will fail and subsequent operations on the sublevel will yield errors with code LEVEL_DATABASE_NOT_OPEN.
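
A minimal sketch of that independence (names are illustrative):

const sub = db.sublevel('example')

// Close and reopen just the sublevel; db stays open throughout
await sub.close()
await sub.open()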

chainedBatch

chainedBatch.put(key, value[, options])

Queue a put operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY or LEVEL_INVALID_VALUE error if key or value is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.
  • sublevel (sublevel instance): act as though the put operation is performed on the given sublevel, to similar effect as sublevel.batch().put(key, value). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key and value will be encoded by the sublevel (using the default encodings of the sublevel unless keyEncoding and / or valueEncoding are provided).
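
For example, the sublevel batch from the db.batch(operations) section above could be written in chained form like so (a sketch, reusing the people and nameIndex sublevels):

await db.batch()
  .put('123', { name: 'Alice' }, { sublevel: people })
  .put('Alice', '123', { sublevel: nameIndex })
  .write()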

chainedBatch.del(key[, options])

Queue a del operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY error if key is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • sublevel (sublevel instance): act as though the del operation is performed on the given sublevel, to similar effect as sublevel.batch().del(key). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key will be encoded by the sublevel (using the default key encoding of the sublevel unless keyEncoding is provided).

chainedBatch.clear()

Clear all queued operations on this batch.

chainedBatch.write([options][, callback])

Commit the queued operations for this batch. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.

There are no options (that are common between classic-level and browser-level). Note that write() does not take encoding options. Those can only be set on put() and del().

The callback function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.

After write() or close() has been called, no further operations are allowed.

chainedBatch.close([callback])

Free up underlying resources. This should be done even if the chained batch has zero queued operations. Automatically called by write() so normally not necessary to call, unless the intent is to discard a chained batch without committing it. The callback function will be called with no arguments. If no callback is provided, a promise is returned. Closing the batch is an idempotent operation, such that calling close() more than once is allowed and makes no difference.
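
For example, discarding a batch instead of committing it (a sketch):

const batch = db.batch().put('a', 'temp')

// Changed our minds: free resources without writing anything
await batch.close()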

chainedBatch.length

The number of queued operations on the current batch.

chainedBatch.db

A reference to the database that created this chained batch.

iterator

An iterator allows one to lazily read a range of entries stored in the database. The entries will be sorted by keys in lexicographic order (in other words: byte order) which in short means key 'a' comes before 'b' and key '10' comes before '2'.

A classic-level iterator reads from a snapshot of the database, created at the time db.iterator() was called. This means the iterator will not see the data of simultaneous write operations. A browser-level iterator does not offer such guarantees, as is indicated by db.supports.snapshots. That property will be true in Node.js and false in browsers.
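
A sketch of what that guarantee means on classic-level (in browsers, the write may be visible to the iterator):

const iterator = db.iterator()

// Written after the iterator (and thus its snapshot) was created,
// so a classic-level iterator will not yield this entry
await db.put('z', '26')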

Iterators can be consumed with for await...of and iterator.all(), or by manually calling iterator.next() or nextv() in succession. In the latter case, iterator.close() must always be called. In contrast, finishing, throwing, breaking or returning from a for await...of loop automatically calls iterator.close(), as does iterator.all().

An iterator reaches its natural end in the following situations:

  • The end of the database has been reached
  • The end of the range has been reached
  • The last iterator.seek() was out of range.

An iterator keeps track of calls that are in progress. It doesn't allow concurrent next(), nextv() or all() calls (including a combination thereof) and will throw an error with code LEVEL_ITERATOR_BUSY if that happens:

// Not awaited and no callback provided
iterator.next()

try {
  // Which means next() is still in progress here
  iterator.all()
} catch (err) {
  console.log(err.code) // 'LEVEL_ITERATOR_BUSY'
}

for await...of iterator

Yields entries, which are arrays containing a key and value. The type of key and value depends on the options passed to db.iterator().

try {
  for await (const [key, value] of db.iterator()) {
    console.log(key)
  }
} catch (err) {
  console.error(err)
}

iterator.next([callback])

Advance to the next entry and yield that entry. If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null, a key and a value. The type of key and value depends on the options passed to db.iterator(). If the iterator has reached its natural end, both key and value will be undefined.

If no callback is provided, a promise is returned for either an entry array (containing a key and value) or undefined if the iterator reached its natural end.

Note: iterator.close() must always be called once there's no intention to call next() or nextv() again. Even if such calls yielded an error and even if the iterator reached its natural end. Not closing the iterator will result in memory leaks and may also affect performance of other operations if many iterators are unclosed and each is holding a snapshot of the database.
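
A manual read loop might look like this (a sketch; the try/finally guarantees the close() call that the note above requires):

const iterator = db.iterator()

try {
  let entry

  while ((entry = await iterator.next()) !== undefined) {
    const [key, value] = entry
    console.log(key, value)
  }
} finally {
  await iterator.close()
}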

iterator.nextv(size[, options][, callback])

Advance repeatedly and get at most size entries in a single call. Can be faster than repeated next() calls. The size argument must be an integer and has a soft minimum of 1. There are no options at the moment.

If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null and an array of entries, where each entry is an array containing a key and value. The natural end of the iterator will be signaled by yielding an empty array. If no callback is provided, a promise is returned.

const iterator = db.iterator()

while (true) {
  const entries = await iterator.nextv(100)

  if (entries.length === 0) {
    break
  }

  for (const [key, value] of entries) {
    // ..
  }
}

await iterator.close()

iterator.all([options][, callback])

Advance repeatedly and get all (remaining) entries as an array, automatically closing the iterator. Assumes that those entries fit in memory. If that's not the case, instead use next(), nextv() or for await...of. There are no options at the moment. If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null and an array of entries, where each entry is an array containing a key and value. If no callback is provided, a promise is returned.

const entries = await db.iterator({ limit: 100 }).all()

for (const [key, value] of entries) {
  // ..
}

iterator.seek(target[, options])

Seek to the key closest to target. Subsequent calls to iterator.next(), nextv() or all() (including implicit calls in a for await...of loop) will yield entries with keys equal to or larger than target, or equal to or smaller than target if the reverse option passed to db.iterator() was true.

The optional options object may contain:

  • keyEncoding: custom key encoding, used to encode the target. By default the keyEncoding option of the iterator is used or (if that wasn't set) the keyEncoding of the database.

If range options like gt were passed to db.iterator() and target does not fall within that range, the iterator will reach its natural end.
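
For example (a sketch; the for await...of loop will close the iterator afterwards):

const iterator = db.iterator()

// Skip ahead: subsequent reads yield keys >= 'n'
iterator.seek('n')

for await (const [key, value] of iterator) {
  console.log(key)
}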

iterator.close([callback])

Free up underlying resources. The callback function will be called with no arguments. If no callback is provided, a promise is returned. Closing the iterator is an idempotent operation, such that calling close() more than once is allowed and makes no difference.

If a next(), nextv() or all() call is in progress, closing will wait for that to finish. After close() has been called, further calls to next(), nextv() or all() will yield an error with code LEVEL_ITERATOR_NOT_OPEN.

iterator.db

A reference to the database that created this iterator.

iterator.count

Read-only getter that indicates how many keys have been yielded so far (by any method) excluding calls that errored or yielded undefined.

iterator.limit

Read-only getter that reflects the limit that was set in options. Greater than or equal to zero. Equals Infinity if no limit, which allows for easy math:

const hasMore = iterator.count < iterator.limit
const remaining = iterator.limit - iterator.count

keyIterator

A key iterator has the same interface as iterator except that its methods yield keys instead of entries. For the keyIterator.next(callback) method, this means that the callback will receive two arguments (an error and key) instead of three. Usage is otherwise the same.

valueIterator

A value iterator has the same interface as iterator except that its methods yield values instead of entries. For the valueIterator.next(callback) method, this means that the callback will receive two arguments (an error and value) instead of three. Usage is otherwise the same.

sublevel

A sublevel is an instance of the AbstractSublevel class, which extends AbstractLevel and thus has the same API as documented above. Sublevels have a few additional properties.

sublevel.prefix

Prefix of the sublevel. A read-only string property.

const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.prefix) // '!example!'
console.log(nested.prefix) // '!example!!nested!'

sublevel.db

Parent database. A read-only property.

const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.db === db) // true
console.log(nested.db === db) // true

Contributing

Level/level is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Donate

Support us with a monthly donation on Open Collective and help us continue our work.

License

MIT


Issues

Snapshot API

What about this API for snapshots:

db#snapshot()

Create a new snapshot.

snapshot#get(key[, opts])

Read a single value from a snapshot.

snapshot#create{Read,Key,Value}Stream

Read multiple values from a snapshot.

snapshot#dispose()

Delete the snapshot.

Maybe also:

snapshot#{del,createWriteStream,...}

Just forwarded to db#*

Usage

For a consistent read stream this would be:

var snapshot = db.snapshot();
snapshot.createReadStream()
  .pipe(...)
  .on('close', snapshot.dispose.bind(snapshot));

Standard status

  • level-fstream
  • lazy-open
  • level-ttl
  • abstract-leveldown
  • awesome
  • codec
  • concat-iterator
  • deferred-leveldown
  • electron-demo
  • encoding-down
  • errors
  • iterator-stream
  • level
  • level-browserify
  • leveldown
  • leveldown-hyper
  • level-hyper
  • level-js
  • level-rocksdb
  • level-test
  • levelup
  • level-ws
  • mem
  • memdown
  • packager
  • rocksdb
  • subleveldown
  • leveldown-mobile
  • level-mobile
  • level-basho
  • level-lmdb

Implement db#clear across the board

Background: Level/abstract-leveldown#24 (comment).

Phase 1: get it working

Phase 2: optimize

On the side

Create SauceLabs OSS accounts

As I wrote here:

It turns out that you can have only 1 subaccount, and it isn't public. Not all that useful.
So we'll need separate Sauce Labs accounts for each repo. We can however reuse our gmail address, with aliases. E.g. for the memdown account we'll use leveldb.org+memdown@, for abstract-leveldown we use leveldb.org+abstract.leveldown@, etc.

Might be easier to do this in one go. For:

  • memdown (previously used the main gmail address)
  • level-test (Level/level-test#62)
    • Create account
      • Username: level-test
      • Email: leveldb.org+level.test@
    • Activate OSS
    • Setup Travis and airtap
  • abstract-leveldown (Level/abstract-leveldown#123)
    • Create account
      • Username: abstract-leveldown
      • Email: leveldb.org+abstract.leveldown@
    • Activate OSS
    • Setup Travis and airtap
  • level-packager
    • Create account
      • Username: level-packager
      • Email: leveldb.org+level.packager@
    • Activate OSS
    • Setup Travis and airtap
  • levelup
    • Create account
      • Username: levelup
      • Email: leveldb.org+levelup@
    • Activate OSS
    • Setup Travis and airtap
  • level-js (Level/level-js#44)
    • Create account
      • Username: level-js
      • Email: leveldb.org+level.js@
    • Activate OSS
    • Setup Travis and airtap

And at a later time, maybe also:

  • level-browserify
  • level-mem
  • encoding-down
  • deferred-leveldown
  • level-codec
  • level-errors
  • level-iterator-stream
  • level-ws

Beef up benchmark suite

I'd like to drop the SQLite and Leveled benchmarks, as we know we're faster than SQLite and also Leveled, as soon as #95 is done.

Then, add benchmarks for all levelup features so we can test streams, encodings and more.

Are you cool with this?

badges?

Which badges do you want to use for level? (It's fine to say that you don't want any at all 😀) Anything in particular that you like more or is a must have? Which badges suck and are useless to you?

I personally like badges a lot, but it's also a fine line between when they are just too many and don't give any "valuable" information.

Add dependency-check to all repositories

  • abstract-leveldown
  • awesome
  • codec
  • community
  • concat-iterator
  • database
  • deferred-leveldown
  • electron-demo
  • encoding-down
  • errors
  • iterator-stream
  • lazy-open
  • level
  • level-browserify
  • leveldb.org
  • leveldown
  • leveldown-hyper
  • level-hyper
  • level-js
  • level-rocksdb
  • level-test
  • level-ttl
  • levelup
  • level-ws
  • mem
  • memdown
  • packager
  • rocksdb
  • subleveldown

Implement read-snapshots

options.snapshot = NULL; on an iterator will give a consistent snapshot view of the data while reading. db->GetSnapshot(); will create a snapshot and return a version that can be set against options.snapshot. The drawback of the latter is that you have to explicitly db->ReleaseSnapshot() to clean up versions you've created.

At least the NULL case would be handy to start with.

Remark warnings on no social profile cont.

lms@x260 ~/src/level/leveldown-hyper (prepare-2.0.0)
$ npm run remark 

> [email protected] remark /home/lms/src/level/leveldown-hyper
> remark README.md CONTRIBUTORS.md CHANGELOG.md UPGRADING.md -o

CHANGELOG.md
  1:1  info     skipping: no contributors heading found                                require-heading  remark-git-contributors

CONTRIBUTORS.md
  1:1  warning  no social profile for [email protected]                                 social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                       social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                  social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                       social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                 social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                             social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                 social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                 social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                             social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                               social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                         social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                   social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                 social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                             social           remark-git-contributors
  1:1  warning  no social profile for [email protected]  social           remark-git-contributors
  1:1  warning  no social profile for [email protected]                                  social           remark-git-contributors

README.md
  1:1  info     skipping: no contributors heading found                                require-heading  remark-git-contributors

UPGRADING.md
  1:1  info     skipping: no contributors heading found                                require-heading  remark-git-contributors

20 messages (⚠ 17 warnings)

Migrate to lerna

With a monorepo it would be easier to maintain dependencies and fix related issues.

Action required: Greenkeeper could not be activated ๐Ÿšจ

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because we are using your CI build statuses to figure out when to notify you about breaking changes.

Since we did not receive a CI status on the greenkeeper/initial branch, we assume that you still need to configure it.

If you have already set up a CI for this repository, you might need to check your configuration. Make sure it will run on all new branches. If you don't want it to run on every branch, you can whitelist branches starting with greenkeeper/.

We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

Once you have installed CI on this repository, you'll need to re-trigger Greenkeeper's initial Pull Request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper integration's white list on GitHub. You'll find this list on your repo or organization's settings page, under Installed GitHub Apps.

GitHub topics?

Does anyone make use of this? Should we make an effort to have the same set of topics on every repo?

Write tutorials for real world use cases

Beginners would feel much more welcome, and we would have more shareable material, if we had some well-written tutorials for real-world use cases, showing the LevelDB way and showcasing plugins.

Initial ideas:

  • the redis twitter tutorial ported to leveldb
  • creating indexes
  • map reduce (should be for map-reduce beginners)
  • simple persistent realtime data with level-scuttlebutt
  • philosophical rant on databases that are monoliths
  • accessing a db from multiple processes
  • writing a levelUp plugin

More website-/app-y ideas would be great though

plugin injection points

So, we have had quite a bit of discussion, and tried various approaches to implementing plugins in levelup

https://github.com/rvagg/node-levelup/issues/search?q=plugins

quick summation of what has happened so far: I started experimenting with a crude monkey-patching based approach, but ran into trouble with handling ranges - each plugin had to manage which ranges it affected, which was tricky. I later refactored this to create a subsection of the database, with level-sublevel. This is a great improvement, because it allows you to extend a range within leveldb as if it's a whole db.

@rvagg has also experimented with exposing various integration points into levelup, https://github.com/rvagg/node-levelup/issues/92

personally, I am highly in favor of combining these two, and even merging sublevel into levelup, or at least, adding integration points to levelup so that level-sublevel does not have to monkey patch it.

the question is: what is the list of integration points that we need?

  • prehooks (intercept a mutation [batch, put, del])
  • posthooks (intercept a mutation callback)
  • encoding/key-encoding
  • setup (register special jobs that run first, after the database opens)
  • asynchronously delay mutations. **

** maybe. The ability to get the current values for keys before performing a mutation.
this would be useful for validation, and merging concurrent updates.

I have a plugin for this, but it hasn't been updated to work with level-sublevel yet. This differs from level-hooks, which only provides a sync API.

A setup integration point will be useful for saving metadata about the database in the database, and maybe stuff like summaries about the current overall state - whether a schema change migration is complete, etc.

Any other suggestions?

replication

@gedw99: "I am building a 3D cad modelling system and tons of json data I need to
store on the servers in many data centers.
I run offline using indexdb and so need to also sync.

Originally I used pouchdb and couxhdb.

But I want to replace all of it with level dB."

  • what's the merge strategy?
  • will it be master-only?
  • how is your topology?

LevelDB & Node.js 'real world' use cases

I've read the docs on LevelDB and some topics on the LevelDB Google Group & StackOverflow, I understand for what it was built.

What I want to know is what are some of your use cases and on what scenario do you believe LevelDB & Node.js is a good fit.

I am not very experienced with DBs, but I would like to learn, and there's not much info on LevelDB & Node.js.

Thank you :)

State of documentation

The state of README.md (style updated with level badge etc), CHANGELOG.md and UPGRADING.md.

EDIT (@ralphtheninja): Updated with LICENSE.md/CONTRIBUTORS.md/README.md which essentially means "There is a LICENSE.md file and a CONTRIBUTORS.md file, where LICENSE.md links to CONTRIBUTORS.md and README.md links to LICENSE.md according to new and simplified format in https://github.com/Level/level-js/tree/ce8d77c89f38e444b6951e890aa3b5e72a221aaf#license"

EDIT (@vweevers): Added tasks to remove contributors from package.json and to remove copyright headers from code.

Below is a summary of all repositories. Some of these are not actively maintained and some might be more or less irrelevant for other reasons, e.g. maybe we don't need UPGRADING.md for electron-demo etc.

Please comment and/or edit this post if you think that something should be archived or need special care.

  • abstract-leveldown
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • awesome
    • README.md (needs to be re-generated after level-js and level-browserify)
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • codec
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • concat-iterator
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • community
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • deferred-leveldown
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • electron-demo
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • encoding-down
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • errors
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • iterator-stream
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • leveldb.org
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • leveldown
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • leveldown-hyper
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-hyper
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-js
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-test
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-rocksdb
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-ttl
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • levelup
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-ws
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • mem
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • memdown
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • packager
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • rocksdb
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • subleveldown
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • lazy-open (archived)
    • README.md
    • CHANGELOG.md
    • ~~UPGRADING.md~~
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-lmdb (to be archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-mobile (to be archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • leveldown-mobile (to be archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-fstream (to be archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-basho (archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • level-browserify (archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • typings (archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code
  • database (archived)
    • README.md
    • CHANGELOG.md
    • UPGRADING.md
    • LICENSE.md/CONTRIBUTORS.md/README.md
    • Remove contributors from package.json
    • Remove copyright headers from code

Code coverage (nyc + coveralls)

  • abstract-leveldown
    • Enabled nyc and coveralls
    • Tests running at 100%
  • codec
    • Enabled nyc and coveralls
    • Tests running at 100%
  • concat-iterator
    • Enabled nyc and coveralls
    • Tests running at 100%
  • deferred-leveldown
    • Enabled nyc and coveralls
    • Tests running at 100%
  • electron-demo
    • Enabled nyc and coveralls
    • Tests running at 100%
  • encoding-down
    • Enabled nyc and coveralls
    • Tests running at 100%
  • errors
    • Enabled nyc and coveralls
    • Tests running at 100%
  • iterator-stream
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level
    • Enabled nyc and coveralls
    • Tests running at 100%
  • leveldown
    • Enabled nyc and coveralls
    • Tests running at 100%
  • leveldown-hyper
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-hyper
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-js
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-rocksdb
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-test
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-ttl
    • Enabled nyc and coveralls
    • Tests running at 100%
  • levelup
    • Enabled nyc and coveralls
    • Tests running at 100%
  • multileveldown
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-ws
    • Enabled nyc and coveralls
    • Tests running at 100%
  • mem
    • Enabled nyc and coveralls
    • Tests running at 100%
  • memdown
    • Enabled nyc and coveralls
    • Tests running at 100%
  • packager
    • Enabled nyc and coveralls
    • Tests running at 100%
  • rocksdb
    • Enabled nyc and coveralls
    • Tests running at 100%
  • subleveldown
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-browserify (archived)
    • Enabled nyc and coveralls
    • Tests running at 100%
  • lazy-open (archived)
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-lmdb (to be archived)
    • Enabled nyc and coveralls
    • Tests running at 100%
  • leveldown-mobile (to be archived)
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-mobile (to be archived)
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-fstream (to be archived)
    • Enabled nyc and coveralls
    • Tests running at 100%
  • level-basho (archived)
    • Enabled nyc and coveralls
    • Tests running at 100%

WebAssembly build of LevelDB

It would be nice to have a WebAssembly build of leveldown. There are many cases where a platform-independent build is preferable to top performance.

Example: wrapped Google's woff2 https://github.com/fontello/wawoff2. Roughly 2x slower than native, but no need to recompile, download and so on. Convenient.

My approach to plugins

Posting in a new issue instead of #68 or #80 because this is about a specific implementation that I'd like comment on.

The LevelUP pluggability branch contains an extension to 0.6.x that uses externr to expose some basic extension points (more are possible of course). I've put an example in that branch here that places prefixes on keys (and removes them on the way out), and also prevents reading of certain other keys, purely to provide an example.

The externr approach is quite different from the level-hooks approach so I don't imagine this to be non-controversial. My hope is that the two approaches can coexist and perhaps leverage off each other.

While externr is pretty fast for the 'noop' case where you don't have a plugin operating on a given extension point, my guess is that it'll undo the performance gains in #90 (which haven't made it into a release yet fyi).

The way LevelUP extensions work is with a "use" call or option. There is a global levelup.use(plugin) with which you can register plugins for any LevelUP instance created after that point. There is a "use" property on the options argument to levelup() when you're making a new instance; the property can point to a single plugin or an array of plugins. That instance will then use those plugins. Each instance will also expose a .use() method that can take single plugins or arrays of plugins, so you can add plugins after an instance is created.

Plugins are simply objects whose keys are the extension points it wishes to inject itself in to. The values are functions that do the work. See the example linked to above to see what I mean.

The LevelDOWN plugins branch contains additions that implement a basic plugin system for the native layer by way of providing a Plugin class that can be extended. So far it has only one extension point, an Init(database) method that passes a newly created LevelDOWN database instance (i.e., when you call leveldown(location)). Plugins can then do what they like on the database object, mostly just adding methods or replacing existing ones I suspect. But I imagine also being able to offer a mechanism to insert a particular LevelDB comparator if you need your db sorted in a particular way. Or even more advanced, a replacement LevelDB filter if you have something more efficient than the default bloom filter.

Currently the way you put a plugin in LevelDOWN is with a global leveldown._registerPlugin(location) call, where location is the path to a .node object file it can dlopen(). Once loaded, the plugin is placed into a list of plugins and when needed the list is iterated over and each plugin has the appropriate method invoked (currently just Init() on instance creation).

Extending LevelDOWN is quite tricky, I'm not aware of any other native build offering plugins. So there are a few challenges. Currently there's an npm issue that's preventing it from being a seamless thing.

My working example is a range delete which I decided could be implemented outside of LevelUP/LevelDOWN and offered as a plugin. @Raynos has level-delete-range but when you have to do individual deletes on callbacks from an iterator it's majorly inefficient; you really want to be doing bulk deletes in one go, native & async.

Enter Downer RangeDel. I've exposed a .use() function on the exports, so you have to explicitly run that to make it inject itself into the nearest leveldown it can find (hopefully there's just one; peerDependencies should help with that). When a LevelDOWN instance is created, it attaches a .rangeDel() method to the object. The plugin is able to reuse a lot of the existing LevelDOWN code for iterators, so the .rangeDel() method has an almost identical options signature to a .readStream() in LevelUP. Doing the actual delete is a simple 5-line job, but it's efficient and done in one go with no callback until it's completed.

Then, to make that available to LevelUP, I have Upper RangeDel. It has downer-rangedel as a dependency, so you only need to load it alongside levelup to get going. I've also exposed a .use() method there, so you have to explicitly invoke it too. It'll inject itself globally into LevelUP so it'll run in every LevelUP instance you create (but you could opt out of the global registration and do levelup({ use: require('upper-rangedel') }) for an individual instance).

The code for this should be a bit easier to understand because it's all JS. The plugin simply extends the "constructor" of each LevelUP instance and passes the call down to this._db.rangeDel(), where it expects that Downer RangeDel has put a method. I've also added some argument checking and mirrored the deferred-open functionality in the rest of LevelUP, so you could do something like levelup('foo.db').rangeDel() and it should work.

To show it in action, I have an example of Upper RangeDel at work. It uses "dev"-tagged levelup and leveldown releases in npm, as this is not available in current "latest" releases.

package.json

{
  "name": "example",
  "version": "0.0.0",
  "main": "index.js",
  "dependencies": {
    "levelup": "~0.7.0-b02",
    "upper-rangedel": "0.0.1"
  }
}

index.js

require('upper-rangedel').use()

var levelup = require('levelup')
  , db = levelup('/tmp/foo.db')
  , data = [
        { type: 'put', key: 'α', value: 'alpha' }
      , { type: 'put', key: 'β', value: 'beta' }
      , { type: 'put', key: 'γ', value: 'gamma' }
      , { type: 'put', key: 'δ', value: 'delta' }
      , { type: 'put', key: 'ε', value: 'epsilon' }
    ]
  , printdb = function (callback) {
      db.readStream()
        .on('data', console.log)
        .on('close', callback)
        .on('error', callback)
    }

db.batch(data, function (err) {
  if (err) throw err
  console.log('INITIAL DATABASE CONTENTS:')
  printdb(function (err) {
    if (err) throw err
    db.rangeDel({ start: 'β', limit: 3 }, function (err) {
      if (err) throw err
      console.log('\nDATABASE CONTENTS AFTER rangeDel({ start: \'β\', limit: 3 }):')
      printdb(function (err) {
        if (err) throw err
        console.log('\nDone')
      })
    })
  })
})

Output

INITIAL DATABASE CONTENTS:
{ key: 'α', value: 'alpha' }
{ key: 'β', value: 'beta' }
{ key: 'γ', value: 'gamma' }
{ key: 'δ', value: 'delta' }
{ key: 'ε', value: 'epsilon' }

DATABASE CONTENTS AFTER rangeDel({ start: 'β', limit: 3 }):
{ key: 'α', value: 'alpha' }
{ key: 'ε', value: 'epsilon' }

Done

You can run this now, but you'll have to do a second npm install after the first one finishes with a failure; this is something we'll need to overcome in npm or with some crazy hackery with gyp or npm pre/post-install scripts.

This obviously needs more polish and thought before it's production-ready, but I also want to make sure I have some kind of agreement on the approach before I push ahead. So I need your thoughts!

What to do with the 'leveldb' package in npm

The current owner of the leveldb package in npm has agreed to let us take it over; we just need to figure out what we want to put there: my8bird/node-leveldb#52. Note that the current leveldb package isn't very happy with newer releases of Node, and certainly not 0.11, which is a major pain for any native addon to support.

The only caveat is that we just bump the major version so the existing code can be left intact.

Some (non-exhaustive) options:

  • Publish the current level package as leveldb
  • Publish leveldown as leveldb
  • Create something new that wraps leveldown to make something that behaves closer to the LevelDB API (I'm not sure what this would look like; LevelDOWN probably is that thing already)

Thoughts?

One additional thought from me: as far as I know, what the current leveldb package does that we don't support is:

  • Explicit snapshots
  • Synchronous operations

While I'm not particularly in favour of adding the latter, I think we really should get to work on getting snapshotting in. It's the big missing thing, and while it's not something we have people begging for, it might open up some interesting new experiments!

Add and remove npm owners

I've avoided @mentioning owners who are not (currently) active here, except when they are the sole owners or if already mentioned in this thread or a previous edit. For packages that are (to be) archived, we're more aggressive with removing owners.

Use undefined instead of error for non-existing records

This is a request for discussion about the semantics of db.get() when a key doesn't exist. Currently, an error with a non-enumerable type key is provided for missing keys:

db.get('does-not-exist', function (err, value) {
  console.log('err=', err);
  console.log('err.type=', err.type);
})

produces this output:

err= {}
err.type= NotFoundError

The error is not easy to inspect at a glance (a separate issue), and if you want to check for non-existence you've got to remember that the err.type for non-existence is NotFoundError.

Would it make more sense to just use undefined for the value and not go through the err parameter? undefined is already an invalid value for storage, so it wouldn't conflict with anything, unless there are some edge cases for custom valueEncodings I'm not aware of.
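
For comparison, here is what the check looks like today versus under the proposal (the proposed behaviour is hypothetical, not implemented):

db.get('does-not-exist', function (err, value) {
  // current semantics: a NotFoundError is passed for missing keys
  if (err && err.type === 'NotFoundError') {
    console.log('missing (current)')
  }

  // proposed semantics: err would be null and value undefined
  if (!err && value === undefined) {
    console.log('missing (proposed)')
  }
})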

Address security concerns from community

In light of the recent event-stream incident, we (@ralphtheninja and I) want to take action to reduce the attack surface of packages maintained in Level.

Level has been and will remain an OPEN Open Source Project. While we recognize the risk of giving people owner rights, it has been vital to the open, transparent and dare I say loving nature of Level. We might add some policy, if it really benefits security. Keep in mind that too much policy can scare off contributors, put a burden on maintainers and provide a false sense of security, hiding real issues that are out of our control under a layer of bureaucracy that in addition impedes individual freedom.

Trust is essential in OSS and we want to be wary of knee-jerk reactions to incidents like event-stream.

That said, we are thinking about what we can do and are open to any suggestions. After an initial brainstorm we came up with 3 actionable items and wanted to move further discussion to GitHub for community input and transparency.

1. Reduce npm owners

  • Our npm packages have more owners than needed for continued maintenance. Go through the list of current owners and ask if they really want/need publish rights: #17.

2. Reduce GitHub organization owners

  • We have quite a few inactive members. We will ask them if they can be removed and list removed members as Collaborator Emeriti in a suitable place.

3. Archive unmaintained projects

Archival consists of:

  • Pinning dependencies (note: transitive dependencies are out of our control)
  • Releasing a final patch version
  • Deprecating the package
  • Removing extraneous npm owners
  • Archiving the GitHub repository

Candidates for archival:

Please edit the above list or leave a comment if you think one of these should not be archived.

Requirements for manifests

I felt the need to defragment various threads:

Prior art:

Requirements (a strawman sketch follows this list):

  • Manifests must be objects (established in Level/levelup#279)
  • Declare high-level features as booleans:
    • snapshot guarantees (use case: tests, consumers that require consistency)
    • exclusive access (use case: live streams, prehooks) (see Level/levelup#279)
    • binary keys (use case: tests, deciding on a network transport)
    • permanence (use case: packager tests)
    • seeking (use case: tests, multiget, skip scans)
  • Declare additional methods and properties (that are not part of the abstract API)
    • Examples:
      • approximateSize() (use case: defer/proxy/expose)
      • sublevels (use case: exposing them to clients)
    • Declare name, return type, sync/callback/promise
  • Nice to have: use manifests the other way around, to declare features that a plugin wants
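
As a strawman, a manifest meeting these requirements might look something like the sketch below; every property name here is illustrative, nothing is settled:

var manifest = {
  snapshots: true,        // snapshot guarantees
  exclusiveAccess: false, // exclusive access
  bufferKeys: true,       // binary keys
  permanence: true,
  seek: true,
  additionalMethods: {
    // declare name, return type and sync/callback/promise
    approximateSize: { returns: 'number', callback: true, promise: false }
  }
}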

Open questions:

  • Should manifests extend manifests from underlying downs? Note that underlying downs can be swapped at runtime.
  • In some cases, feature support depends on the runtime environment too. E.g. not all browsers support binary keys in level-js. Is that something we want to expose?

Ongoing and Future Work

This is a summary of what has been happening lately, what we're working on right now and what the future holds. If you want to see anything in particular done right now, or have cool ideas for the future, feel free to post suggestions and/or edit this message.

Ongoing

  • Run tests in Sauce Labs using airtap where applicable
  • Implement coverage across the board using coveralls.

Future

  • Add canary tests (Level/abstract-leveldown#184)
  • IDEA: Try to find good use cases where we can make use of Property Based Testing. The idea is to build a test system that helps us find more bugs (see cryptpad with notes from the PBT meetup)
  • Fix up the homepage running at leveljs.org. We have really been slacking off here; we could do some really cool stuff with tutorials and write blog posts on the project as a whole (this issue could be a post)
  • Bring back basho and lmdb as separate *downs (like rocksdb)? The easiest way is most likely to start from the current state of leveldown and pull in the deps/ again from scratch.
  • Write more benchmarks and run them continuously

Achieved

  • Merge level(up) functionality into abstract-leveldown (#58, abstract-level)
  • Refactor (the dependencies of) subleveldown
  • Implement manifests (#83)
  • Make sure we have a set of publishers for all modules (#17)
  • Increase transparency with a public backlog: https://github.com/orgs/Level/projects/2
  • Homogenize and polish documentation (#29)
  • Port native modules to N-API (leveldown-hyper not yet done)
  • Refactor the internals of leveldown and friends, fixing resource cleanup and segfaults along the way (leveldown-hyper not yet done)
  • Complete and maintain level/awesome showing what's there and the status of related projects when it comes to dependencies etc
  • Implement seek() in more abstract-leveldown implementations
  • Drop key types other than strings and buffers from memdown and level-js (Level/memdown#186 (comment))
  • Address security concerns from community (#43)

plugins v2 (sectioned db, with hooks)

Basically, instead of hooking new behaviour into the main database, you can create subsections:

var SubLevel = require('level-sublevel')
SubLevel(db)
var fooDb = db.sublevel('foo', '~')

fooDb is an object with the levelup api, except that when you do fooDb.put('bar', ...) the key is prefixed with ~foo~ so that it's separated from the other keys in the main db. This is great, because if you want to build a section of the database with special behaviour in it, you can create a subsection and extend its behaviour in any way you like -- but it will only affect that section! So you can monkey-patch it, whatever; you won't introduce bugs into other parts of the program.
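
To illustrate the prefixing described above:

fooDb.put('bar', 'baz', function (err) {
  if (err) throw err
  // in the main db this entry is stored under the key '~foo~bar',
  // so it can never collide with keys outside the 'foo' subsection
})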

Most of the plugins I built needed some sort of interception, where a value inserted into one range triggers an action that inserts something into a different section. To get reliably consistent data, this needs to be atomic.

So, you can set hooks on a subsection that trigger an insert into another subsection.

For example: when a key is inserted into the main db, index that write with a timestamp saved into another subsection.

var SubLevel = require('level-sublevel')
SubLevel(db)
var sub = db.sublevel('SEQ')

db.pre(function (ch, add) {
  add({
    key: '' + Date.now(),
    value: ch.key,
    type: 'put'
  }, sub) // NOTE: pass the destination db to add
          // and the value will end up in that subsection!
})

db.put('key', 'VALUE', function (err) {
  if (err) throw err

  // read all the records inserted by the hook!
  sub.createReadStream()
    .on('data', console.log)
})

db.pre(function hook (ch, add) {...}) registers a function to be called whenever a key is inserted into that section. ch is a change, like a row argument to db.batch(): { key: k, value: v, type: 'put' or 'del' }.

If the hook function calls add(ch), ch is added to the batch (a regular put/del is turned into a batch). But if a subsection is passed, as in add(ch, sub), then that put will be added to that subsection.

Compare subsection code for queue/trigger https://github.com/dominictarr/level-sublevel/blob/e2d27cc8e8356cde6ecf4d50c980c2ba93d87b95/examples/queue.js
with the old code -
https://github.com/dominictarr/level-queue/blob/master/index.js
and https://github.com/dominictarr/level-trigger/blob/18d0a1daa21aab1cbc1d0f7ff3690b91c1e0291d/index.js

The new version is only ~60 lines, down from about ~200. It's also possible to use multiple different queue/trigger libs within the same db. And there is no tricky code that refers to ranges or prefixes in the subsection-based code!

In summary,

  • create subsections, add any features to your subsection.
  • use pre(fun) to trigger atomic inserts into your subsection.

Ideas for a query language

This is an idea for a level module, but I am posting it here because it's too early in the morning/too late at night to implement, and this will notify exactly the right people.

level is pretty useful as a straight key-value store, but sooner or later you need to query something else, so you add an index. A lot of leveldb modules are about indexes. For example, level-search, which just indexes everything.

The trouble with all these indexes is managing them all: which index do you need for which query, and when do you add a new index? There is a design trade-off here.

It would be so much easier if you could just not think about indexes at all and just have filters: just forEach (scan) over all the data and return only what you want. Totally functional and elegant, just not performant.

But what if you could use that interface, and it would transparently use the right indexes when possible? (or even better, generate them when necessary)

Let's say we have the npm registry, and we want to retrieve all the modules written by @substack. Even though substack has written many modules, it is still less than 1% of the registry, so a full scan would be 99% wasted effort. Let's say that the average registry document is 5k; that is 95000*0.99*5000 = 470mb of unnecessary reads, out of 475mb total.

But let's say that we indexed all the maintainers. For simplicity, let's say that the average length of both module names and user names is 8 characters. Say we create an index like {username}:{modulename} and add an entry for every module: 95000*(8+8+1) (1 for the separator) = 1.6mb of index data. But now we just need to read through 1% of the index (since entries are ordered by username, we can jump straight to substack): 950*17 = 16150, that is only ~16k of unnecessary data.
So now we only read 16150 + 950*5000 ≈ 4.8mb.

Using this index we increase storage requirements by 1.6 mb but increase read perf by 100x.
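
A rough sketch of that index with levelup-era streams (the key layout follows the {username}:{modulename} scheme above; everything else is illustrative):

var levelup = require('levelup')
var db = levelup('/tmp/registry-index.db')

// one index entry per (maintainer, module) pair
db.batch([
  { type: 'put', key: 'substack:optimist', value: 'optimist' },
  { type: 'put', key: 'dominictarr:level-sublevel', value: 'level-sublevel' }
], function (err) {
  if (err) throw err

  // keys are sorted, so we can jump straight to substack's entries
  // instead of scanning the whole registry
  db.createReadStream({ start: 'substack:', end: 'substack:\xff' })
    .on('data', function (entry) {
      console.log(entry.value) // each module maintained by substack
    })
})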

But what about a more complex query? Suppose we want to find modules written by substack that depend (directly) on async. A simple way to do this would be to scan through the modules written by substack and check whether async is in the deps. Let's say there was a time when substack used async in 9 modules, 1% of the time. The scan costs us ~4.8mb, like before, but we discarded 99% of it.

Okay, so let's create an index for this: udm:{username}:{dep}:{module}, where module depends on dep. This index is more expensive because each entry needs another 9 characters, plus a tag to distinguish it from the other indexes: 3 + 3*(1+8) = 30 bytes per entry, so this index would need 95000*30 ≈ 2.85mb.
How much data would it save us? Now we could directly read out the 9 substack modules that use async: 30*9 + 9*5000 ≈ 45k, roughly 100 times less than the 4.8mb scan! But here is the problem: how do we know in advance that we want to perform this query? What if we want to know all the users that depend on optimist? We'd also need a dum:{dep}:{username}:{module} index, and so on. We want our database to be really, really fast,
so let's just index every pair of properties, twice!

Let's say there are 20 properties on average. How many combinations of 2 properties? 20!/(2!*(20-2)!), which cancels down to (20*19)/2 = 190; times 2 because we need 2 indexes, that gives us 380*30 = 11400 bytes, roughly twice the size of the document. If there were 40 properties it would be (40*39)/2 * 2 * 30 = 46800. Doubling the number of properties roughly quadruples the size of the index, which is now about 9 times the size of the original document.
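
The same back-of-envelope maths as code, with the assumptions (2 key orders per pair, 30 bytes per entry, 5k documents) spelled out:

// per-document index overhead when indexing every pair of properties twice
function pairIndexBytes (props, entryBytes) {
  var pairs = props * (props - 1) / 2 // C(props, 2)
  return pairs * 2 * entryBytes       // two key orders per pair
}

console.log(pairIndexBytes(20, 30)) // 11400, ~2x a 5k document
console.log(pairIndexBytes(40, 30)) // 46800, ~9x a 5k document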

Now, at some point it's better to do a scan than to have another index, because the indexes take up too much space and optimize queries that you never use. Clearly this is dependent on what sort of queries you do in practice. Possibly you could have a system that automagically creates indexes when you start to use a query more; maybe it could even know when you have an expensive query?

But is there a nice way to gracefully transition between those? Decouple the query interface from the indexes, so you can add and remove indexes without changing the queries at all, and measure the efficiency of the queries, so that you know when new indexes are needed.

module system

rvagg:

I don't feel strongly either way about this. I'm not a strict minimalist, but like Dominic I tend to use batch() programmatically, so a chaining API is less helpful (though I may use it).

What we really need is a proper plugin system for Node, so a project like LevelUP can load optional plugins that may be installed in the NODE_MODULES path(s) and do things like expose the main LevelUP.prototype to plugins that may want to augment it. Then we could say something in our README like: "If you npm install levelup-batch, LevelUP will make the chaining batch() API available." And levelup-batch could monkey-patch LevelUP.prototype.batch to provide chaining.

I need something like this for Ender too, so maybe I'll actually build a generic plugin helper system some day.

rvagg:

Re plugins: Grunt and DocPad do something similar already, but certainly not in a modular way that you can pull out and re-use. We need something that smaller projects like LevelUP can easily include. We should collaborate on it so we get the API right!

plugin extension points: merge level-hooks / level-sublevel

I'm not suggesting we do this right away, but I am suggesting this is something worth considering.

Also, I don't want to merge this, unless there is a consensus that this is an easy and effective way to build plugins.

I'd also intend to refactor the code style, etc, to match levelup.

I think that given these base features, and maybe one or two more, a wide array of plugins could be developed.

What about sublevels?

I think we need to invest some time in either level-sublevel or subleveldown, get one of them up to par with the latest levelup, abstract-leveldown and encoding-down, and maybe deprecate the other.

Thoughts @Level/owners?
