PalDB

PalDB is an embeddable write-once key-value store written in Java.

What is PalDB?

PalDB is an embeddable persistent key-value store with very fast read performance and compact store size. PalDB stores are single binary files written once and ready to be used in applications.

PalDB's JAR is only 110KB and has a single dependency (Snappy, which isn't mandatory). It's also very easy to use, with just a few configuration parameters.

Performance

Because PalDB is read-only and focuses only on data that can be held in memory, it is significantly less complex than other embeddable key-value stores, which allows a compact storage format and very high throughput. PalDB is specifically optimized for fast read performance and compact store sizes. Its performance is comparable to in-memory data structures such as Java collections (e.g. HashMap, HashSet) and other key-value stores (e.g. LevelDB, RocksDB).

A current benchmark on a 3.1GHz MacBook Pro with an index of 10M integer keys shows an average performance of ~1.6M reads/s while using 6X less memory than a traditional HashSet. That is 5X higher throughput than LevelDB (1.8) or RocksDB (4.0).

Results of a throughput benchmark between PalDB, LevelDB and RocksDB (higher is better):

[throughput benchmark chart]

Memory usage benchmark between PalDB and a Java HashSet (lower is better):

[memory usage benchmark chart]

What is it suitable for?

Side data can be defined as the extra read-only data needed by a process to do its job. For instance, a list of stopwords used by a natural language processing algorithm is side data. Machine learning models used in machine translation, content classification or spam detection are also side data. When this side data becomes large, it can rapidly become a bottleneck for the applications that depend on it. PalDB aims to fill this gap.

PalDB can replace in-memory data structures for storing this side data, with comparable query performance while using an order of magnitude less memory. It also greatly simplifies the code needed to work with this side data, as PalDB stores are single binary files manipulated through a very simple API (see below for examples).

Code samples

API documentation can be found here.

How to write a store

StoreWriter writer = PalDB.createWriter(new File("store.paldb"));
writer.put("foo", "bar");
writer.put(1213, new int[] {1, 2, 3});
writer.close();

How to read a store

StoreReader reader = PalDB.createReader(new File("store.paldb"));
String val1 = reader.get("foo");
int[] val2 = reader.get(1213);
reader.close();

How to iterate on a store

StoreReader reader = PalDB.createReader(new File("store.paldb"));
Iterable<Map.Entry<String, String>> iterable = reader.iterable();
for (Map.Entry<String, String> entry : iterable) {
  String key = entry.getKey();
  String value = entry.getValue();
}
reader.close();

For Scala examples, see here and here.

Use it

PalDB is available on Maven Central; just add the following dependency:

<dependency>
    <groupId>com.linkedin.paldb</groupId>
    <artifactId>paldb</artifactId>
    <version>1.2.0</version>
</dependency>

Scala SBT

libraryDependencies += "com.linkedin.paldb" % "paldb" % "1.2.0"

Frequently asked questions

Can you open a store for writing subsequent times?

No, the final binary file is created when StoreWriter.close() is called.

Are duplicate keys allowed?

No, duplicate keys aren't allowed and an exception will be thrown.

Do keys have an order when iterating?

No; like a hashtable, PalDB stores have no order.

Build

PalDB requires Java 6+ and Gradle. The target Java version is 6.

gradle build

Performance tests are run separately from the build:

gradle perfTest

Test

We use the TestNG framework for our unit tests. You can run them via the gradle clean test command.

Coverage

Coverage is run using JaCoCo. You can run a report via gradle jacocoTestReport. The report will be generated in paldb/build/reports/jacoco/test/html/.

Advanced configuration

Write parameters:

  • load.factor, index load factor (double) [default: 0.75]
  • compression.enabled, enable compression (boolean) [default: false]

Read parameters:

  • mmap.data.enabled, enable memory mapping for data (boolean) [default: true]
  • mmap.segment.size, memory map segment size (bytes) [default: 1GB]
  • cache.enabled, LRU cache enabled (boolean) [default: false]
  • cache.bytes, cache limit (bytes) [default: Xmx - 100MB]
  • cache.initial.capacity, cache initial capacity (int) [default: 1000]
  • cache.load.factor, cache load factor (double) [default: 0.75]

Configuration values are passed at init time. Example:

Configuration config = PalDB.newConfiguration();
config.set(Configuration.CACHE_ENABLED, "true");
StoreReader reader = PalDB.createReader(new File("store.paldb"), config);

A few tips on how configuration can affect performance:

  • Disabling memory mapping will significantly reduce performance as disk seeks will be performed instead.
  • Enabling the cache makes sense when the value size is large and there's a significant cost in deserialization. Otherwise, the cache adds an overhead. The cache is also useful when memory mapping is disabled.
  • Compression can be enabled when the store size is a concern and the values are large (e.g. a sparse matrix). By default, PalDB already uses a compact serialization. Snappy is used for compression.
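For example, compression can be turned on at write time through the same Configuration mechanism used for the cache above. This is a sketch assuming a Configuration.COMPRESSION_ENABLED constant mirroring CACHE_ENABLED; the underlying key is compression.enabled:

```java
// Assumes Configuration.COMPRESSION_ENABLED maps to "compression.enabled"
Configuration config = PalDB.newConfiguration();
config.set(Configuration.COMPRESSION_ENABLED, "true");
StoreWriter writer = PalDB.createWriter(new File("store.paldb"), config);
```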

Custom serializer

PalDB is primarily optimized for Java primitives and arrays but supports adding custom serializers so arbitrary Java classes can be supported.

Serializers can be defined by implementing the Serializer interface and its methods. Here's an example which supports the java.awt.Point class:

public class PointSerializer implements Serializer<Point> {

  @Override
  public Point read(DataInput input) {
    return new Point(input.readInt(), input.readInt());
  }

  @Override
  public void write(DataOutput output, Point point) {
    output.writeInt(point.x);
    output.writeInt(point.y);
  }

  @Override
  public int getWeight(Point instance) {
    return 8;
  }
}

The write method serializes the instance to the DataOutput. The read method deserializes from DataInput and creates new object instances. The getWeight method returns the estimated memory used by an instance in bytes. The latter is used by the cache to evaluate the amount of memory it's currently using.
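Because read and write must mirror each other byte for byte, it can be useful to sanity-check the pair outside of PalDB. The following standalone roundtrip uses only java.io and java.awt (no PalDB classes) and exercises the same logic as the PointSerializer above:

```java
import java.awt.Point;
import java.io.*;

// Standalone sanity check for the PointSerializer logic; uses no PalDB classes.
class PointRoundTrip {

  // Serialize a Point exactly as PointSerializer.write does.
  static byte[] write(Point p) {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bytes);
      out.writeInt(p.x);
      out.writeInt(p.y);
      out.close();
      return bytes.toByteArray();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  // Deserialize exactly as PointSerializer.read does.
  static Point read(byte[] data) {
    try {
      DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
      return new Point(in.readInt(), in.readInt());
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    Point original = new Point(3, 7);
    byte[] data = write(original);
    System.out.println(data.length);                 // 8, matching getWeight
    System.out.println(original.equals(read(data))); // true
  }
}
```

Note that two writeInt calls produce exactly 8 bytes, which is why getWeight returns 8.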

The Serializer implementation should be registered using the Configuration:

Configuration configuration = PalDB.newConfiguration();
configuration.registerSerializer(new PointSerializer());

Use cases

At LinkedIn, PalDB is used in analytics workflows and machine-learning applications.

Its usage is especially popular in Hadoop workflows because memory is scarce yet critical for speed. In this context, PalDB often enables map-side operations (e.g. joins) that wouldn't be possible with classic in-memory data structures (e.g. Java collections). For instance, a set of 35M member ids uses only ~290MB of memory with PalDB versus ~1.8GB with a traditional Java HashSet. Moreover, since PalDB stores are single binary files, they are easy to package and use with Hadoop's distributed cache mechanism.

Machine-learning applications often have complex binary model files created in the training phase and used in the scoring phase. These two phases always happen at different times and often in different environments. For instance, the training phase happens on Hadoop or Spark and the scoring phase in a real-time service. PalDB makes this process easier and more efficient by reducing the need for large CSV files loaded into memory.

Limitations

  • PalDB is ideal for replacing large in-memory data storage, but it still uses memory (off-heap, though much less) to do its job. Disabling memory mapping and relying on seeks is possible but is not what PalDB has been optimized for.
  • The size of the index is limited to 2GB. There is, however, no limit on the data size.
  • PalDB is not thread-safe at the moment, so synchronization should be handled externally in multi-threaded use.
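The last point can be handled with a single lock around lookups. The sketch below is hypothetical: SynchronizedLookup is not part of PalDB, and the Function delegate merely stands in for a method reference such as reader::get on a StoreReader:

```java
import java.util.function.Function;

// Hypothetical wrapper serializing access to a non-thread-safe lookup,
// e.g. new SynchronizedLookup<>(reader::get) for a PalDB StoreReader.
class SynchronizedLookup<K, V> {
  private final Function<K, V> delegate;
  private final Object lock = new Object();

  SynchronizedLookup(Function<K, V> delegate) {
    this.delegate = delegate;
  }

  V get(K key) {
    synchronized (lock) { // only one thread performs a lookup at a time
      return delegate.apply(key);
    }
  }
}
```

This serializes all reads, so it trades throughput for safety; per-thread readers over the same store file are another option.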

Contributions

Any helpful feedback is more than welcome. This includes feature requests, bug reports, pull requests, constructive feedback, etc.

Copyright & License

PalDB © 2015 LinkedIn Corp. Licensed under the terms of the Apache License, Version 2.0.


Issues

java.io.IOException: "An attempt was made to move the file pointer before the beginning of the file."

I am a developer from China. When I use PalDB, I ran into the following problem:

Caused by: java.io.IOException: 试图将文件指针移到文件开头之前。 (An attempt was made to move the file pointer before the beginning of the file.)
at java.io.RandomAccessFile.setLength(Native Method)
at com.linkedin.paldb.impl.StorageWriter.buildIndex(StorageWriter.java:282)
at com.linkedin.paldb.impl.StorageWriter.close(StorageWriter.java:183)
at com.linkedin.paldb.impl.WriterImpl.close(WriterImpl.java:100)

My development OS is Windows 10.

Add gradle wrapper config

The gradle wrapper allows running the build without having a pre-installed gradle. It'll also help running the build on CI environments.

About GC

If a large amount of data is held in memory, will it cause serious GC pauses?

NullPointerException

Hi,
when I use PalDB, I sometimes get a NullPointerException (about 1 in every 1,000,000 requests). Here is the full message:
java.lang.NullPointerException
java.lang.RuntimeException: java.lang.NullPointerException
at com.linkedin.paldb.impl.ReaderImpl.get(ReaderImpl.java:126)
at com.linkedin.paldb.impl.ReaderImpl.get(ReaderImpl.java:104)

StoreReaderWriter

Is there any plan to support read and write operations over the same database instance?

An exception using reader.iterable()

When I follow the code example to iterate on a store, an exception occurs.
The iteration code is below:

Configuration config = PalDB.newConfiguration();
config.registerSerializer(new PointSerializer());
StoreReader reader = PalDB.createReader(new File("store.paldb"), config);
Iterable<Map.Entry<String, String>> iterable = reader.iterable();
for (Map.Entry<String, String> entry : iterable) {
    String key = entry.getKey();
    reader.get(key);
}
  • First test case

I create a paldb like this:

Configuration config = PalDB.newConfiguration();
config.registerSerializer(new PointSerializer());
StoreWriter writer = PalDB.createWriter(new File("store.paldb"), config);
for(int i = 1; i <= 10; i++) {
    Point point = new Point(i, i);
    writer.put(point.toString(), point);
}
writer.close();

The following exception can occur:

Exception in thread "main" java.lang.IllegalArgumentException
    at java.nio.Buffer.position(Buffer.java:244)
    at com.linkedin.paldb.impl.StorageReader.getDataBuffer(StorageReader.java:365)
    at com.linkedin.paldb.impl.StorageReader.getMMapBytes(StorageReader.java:292)
    at com.linkedin.paldb.impl.StorageReader.access$6(StorageReader.java:289)
    at com.linkedin.paldb.impl.StorageReader$StorageIterator.next(StorageReader.java:444)
    at com.linkedin.paldb.impl.StorageReader$StorageIterator.next(StorageReader.java:1)
    at com.linkedin.paldb.impl.ReaderIterable$ReaderIterator.next(ReaderIterable.java:83)
    at com.linkedin.paldb.impl.ReaderIterable$ReaderIterator.next(ReaderIterable.java:1)
  • Second test case

I create a paldb like this:

Configuration config = PalDB.newConfiguration();
config.registerSerializer(new PointSerializer());
StoreWriter writer = PalDB.createWriter(new File("store.paldb"), config);
for(int i = 1; i <= 1000; i++) {
    Point point = new Point(i, i);
    writer.put(point.toString(), point);
}
writer.close();

The following exception can occur:

Exception in thread "main" java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
    at com.linkedin.paldb.impl.ReaderIterable$ReaderIterator.next(ReaderIterable.java:89)
    at com.linkedin.paldb.impl.ReaderIterable$ReaderIterator.next(ReaderIterable.java:1)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at com.linkedin.paldb.utils.DataInputOutput.readUnsignedByte(DataInputOutput.java:115)
    at com.linkedin.paldb.impl.StorageSerialization.deserialize(StorageSerialization.java:834)
    at com.linkedin.paldb.impl.ReaderIterable$ReaderIterator.next(ReaderIterable.java:86)

I debugged the code but don't know why this happens.
I read StorageReader and made the change below. It avoids the exception I found, but I don't
understand it exactly.

-        while (offset == 0) {
-          indexBuffer.get(currentSlotBuffer);
-          offset = LongPacker.unpackLong(currentSlotBuffer, currentKeyLength);
+        byte begin = 0;
+        while(0 == begin) {
+           byte[] temp = new byte[1];
+           indexBuffer.get(temp);
+           if(temp != null) {
+               begin = temp[0];
+           }
         }
+        byte[] temp = new byte[currentSlotBuffer.length - 1];
+        indexBuffer.get(temp);
+        currentSlotBuffer[0] = begin;
+        System.arraycopy(temp, 0, currentSlotBuffer, 1, currentSlotBuffer.length - 1);
+        offset = LongPacker.unpackLong(currentSlotBuffer, currentKeyLength);

Make StoreReader and StoreWriter AutoCloseable

Hi there,
I'm wondering, in the new release, could we upgrade the Java source compatibility to 1.7 and then make StoreReader/StoreWriter AutoCloseable? I can submit a PR for it.

Thanks,
Bowen

request for doc clarifications

Hi Mathieu,

Interesting project, thanks for open sourcing it! Could you please clarify the following (here or in the documentation)?

  • can you open a db for writing subsequent times?
  • what's the concurrency story? (e.g. concurrent writers, reads while writers are in progress, are readers lock/wait-free, etc)
  • when inserting duplicate keys, does it just throw an exception?
  • is it just a hash table, or does the data structure support sorted iteration?
  • what's your target java version?

Thanks,
Viktor

Improve throughput benchmarks

  1. Use JMH
  2. Create less garbage; don't invoke expensive Integer.toString() in the tight loop.
    Simple profiling shows that a significant portion of benchmark time is spent in Integer.toString(); the results are therefore "shifted" and could give a wrong impression of the actual performance difference between the benchmarked stores. Creating garbage also causes garbage collection to kick in unpredictably during benchmarking, which makes the results less representative.

Why is the benchmark incomplete?

I found it curious that you compare against two alternatives for performance but against Java HashSets for memory usage. It would be nice to compare both performance and memory against the same three alternatives; otherwise it looks like data is being selected conveniently instead of showing the real tradeoff.

IllegalArgumentException from writer.close() with 100,000,000 keys

I am building a PalDB store with 100,000,000 keys.
The exception is below, with all default configuration values:
java.lang.IllegalArgumentException: null
at java.nio.Buffer.position(Buffer.java:244) ~[na:1.8.0_171]
at com.linkedin.paldb.impl.StorageWriter.buildIndex(StorageWriter.java:311) ~[paldb-1.2.0.jar!/:na]
at com.linkedin.paldb.impl.StorageWriter.close(StorageWriter.java:185) ~[paldb-1.2.0.jar!/:na]
at com.linkedin.paldb.impl.WriterImpl.close(WriterImpl.java:96) ~[paldb-1.2.0.jar!/:na]

A store with 10,000,000 keys in the same program works fine.

slotBuffer is all ZEROs for a specific key

We recently ran into strange issue where a StoreReader.get(Object key) given the key returned a null, but when we do StoreReader.iterable() we could see the key and the value printer correctly.

This happened only for one particular key. Debugging in the paldb library and see that the slotBuffer is loaded with ZERO’s for that particular key and offset results in a ZERO.

I have attached screenshot of the debug point. I am unable to re-produce this with the same key and dataset.

