
rethinkdb's Introduction


What is RethinkDB?

  • Open-source database for building realtime web applications
  • NoSQL database that stores schemaless JSON documents
  • Distributed database that is easy to scale
  • High availability database with automatic failover and robust fault tolerance

RethinkDB is the first open-source scalable database built for realtime applications. It exposes a new database access model, in which the developer can tell the database to continuously push updated query results to applications without polling for changes. RethinkDB allows developers to build scalable realtime apps in a fraction of the time with less effort.
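
RethinkDB's push model is exposed through changefeeds. Here is a minimal sketch using the official JavaScript driver (the table name and connection details are hypothetical):

var r = require('rethinkdb');

r.connect({host: 'localhost', port: 28015}, function(err, conn) {
  if (err) throw err;
  // .changes() asks the server to push every subsequent update to this
  // query's result set; the client never polls.
  r.table('tv_shows').changes().run(conn, function(err, cursor) {
    if (err) throw err;
    cursor.each(function(err, change) {
      if (err) throw err;
      console.log(change); // {old_val: ..., new_val: ...}
    });
  });
});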

To learn more, check out rethinkdb.com.

Not sure what types of projects RethinkDB can help you build? Here are a few examples:

Quickstart

For a thirty-second RethinkDB quickstart, check out rethinkdb.com/docs/quickstart.

Or, get started right away with our ten-minute guide in these languages:

Besides our four official drivers, we also have many third-party drivers supported by the RethinkDB community. Here are a few of them:

Looking to explore what else RethinkDB offers or the specifics of ReQL? Check out our RethinkDB docs and ReQL API.

Building

First install some dependencies. For example, on Ubuntu or Debian:

# On older distros, install python instead of the python3 packages below.
sudo apt-get install build-essential protobuf-compiler \
    python3 python-is-python3 \
    libprotobuf-dev libcurl4-openssl-dev \
    libncurses5-dev libjemalloc-dev wget m4 g++ libssl-dev

Generally, you will need:

  • GCC or Clang
  • Protocol Buffers
  • jemalloc
  • Ncurses
  • Python 2 or Python 3
  • libcurl
  • libcrypto (OpenSSL)
  • libssl-dev

Then, to build:

./configure --allow-fetch
# or run ./configure --allow-fetch CXX=clang++

make -j4
# or run make -j4 DEBUG=1

sudo make install
# or run ./build/debug_clang/rethinkdb

See WINDOWS.md and mk/README.md for build instructions for Windows and FreeBSD.

Need help?

A great place to start is rethinkdb.com/community. There you can find out how to ask us questions, reach out to us, or report an issue. You'll also find all the places we frequent online and at which conferences or meetups you might be able to meet us next.

If you need help right now, you can also find us on Slack, Twitter, or IRC at #rethinkdb on Freenode.

Contributing

RethinkDB was built by a dedicated team, but it wouldn't have been possible without the support and contributions of hundreds of people from all over the world. We could use your help too! Check out our contributing guidelines to get started.

Donors

  • CNCF
  • DigitalOcean provides the infrastructure and servers needed to serve mission-critical sites like download.rethinkdb.com and update.rethinkdb.com
  • Atlassian provides an OSS license that lets us handle internal tickets, such as vulnerability issues
  • Netlify provides an OSS license, which made it possible to migrate rethinkdb.com
  • DNSimple provides DNS services for the RethinkDB project
  • ZeroTier sponsored the development of per-table configurable write aggregation, including the ability to set the write delay to infinite to create a memory-only table (PR #6392)

Licensing

RethinkDB is licensed by the Linux Foundation under the open-source Apache 2.0 license. Portions of the software are licensed by Google and others and used with permission or subject to their respective license agreements.

Where's the changelog?

We keep a list of changes and feature explanations here.

rethinkdb's People

Contributors

al3xandru, aleaxander, asakatida, atnnn, coffeemug, dalanmiller, danielmewes, deontologician, eliangidoni, encryptio, frank-trampe, gabor-boros, gchpaco, hungte, igorlukanin, jdoliner, jordanlewis, larkost, marshall007, mglukhovsky, mlucy, neumino, raitobezarius, rntz, srh, takluyver, timmaxw, tryneus, vexocide, wmrowan


rethinkdb's Issues

Add a command to do merge on streams (opposite of without)

We need a command opposite of without that allows merging an object into every object in a stream. Currently it can be done via map + merge, but we need to add a porcelain to make the experience more pleasant.

EDIT: not sure what to call this command, since with is a keyword. Suggestions welcome.
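
For reference, here's what the map + merge workaround looks like today (data explorer syntax; the table and merged fields are made up):

r.table('posts').map(function(post) {
  return post.merge({source: 'import'});
})

The proposed porcelain would collapse this into a single command on the stream.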

Should .pluck error behavior be like .without's?

According to IRC user woogley, pluck(x, y, z) will error on rows that lack any of the keys x, y, or z, whereas without(x, y, z) will not error if a row lacks such a key. In general it can be useful to trim down JSON documents to save on network traffic if you're only going to look at a subset of fields, even if some are missing. Is erroring on rows that lack those fields the behavior we want?
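
To make the asymmetry concrete (hypothetical table; assuming the behavior woogley describes):

r.table('users').pluck('name', 'age')  // errors on any row missing 'name' or 'age'
r.table('users').without('age')        // rows missing 'age' pass through untouched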

errno 92 - Protocol not available

Trying to get started again with rethinkdb_1.2.5-1-0ubuntu1~precise_amd64.deb

$ rethinkdb
info: Creating directory rethinkdb_data
info: Creating a default database for your convenience. (This is because you ran 'rethinkdb' without 'create', 'serve', or '--join', and the directory 'rethinkdb_data' did not already exist.)
error: Error in arch/io/network.cc at line 705:
error: Guarantee failed: [res == 0]  (errno 92 - Protocol not available) setsockopt(TCP_USER_TiMEOUT) failed
error: Backtrace:
error: Sun Nov 11 13:47:49 2012

       1: rethinkdb() [0x515d92]
       2: rethinkdb() [0x512f54]
       3: rethinkdb() [0x5f61a4]
       4: rethinkdb() [0x5f993f]
       5: rethinkdb() [0x5f9f58]
       6: rethinkdb() [0x4a5b9e]
       7: rethinkdb() [0x89244d]
       8: rethinkdb() [0x897331]
       9: rethinkdb() [0x883b17]
       10: rethinkdb() [0x88515e]
       11: rethinkdb() [0x60bb6b]
       12: rethinkdb() [0x60bbf8]
       13: rethinkdb() [0x606f6e]
error: Exiting.
Crashing while already crashed. Printing error message to stderr.
Segmentation fault from reading the address (nil).
[32251] worker: Couldn't read job function: end-of-file received
[32251] worker: Failed to accept job, quitting.
[32258] worker: Couldn't read job function: end-of-file received
[32258] worker: Failed to accept job, quitting.
Trace/breakpoint trap

Ubuntu 12.04 LTS (GNU/Linux 2.6.32-316-ec2 x86_64) on ext3

Data explorer tab completion has some regressions

  1. If I type a dot, I can then tab through the methods. But if I put a space after a method, then delete the space and hit tab, I can't tab through anymore. (It's a bit difficult to describe; talk to me and I'll show you what I mean if this doesn't make sense.)
  2. Suppose I type the following: "r.table('foo').eqJoin('id', r.table('bar'), ..." After I type the second r., completion for eqJoin disappears, even after I put a comma after the second table reference. Again, subtle to describe in text; we can chat about it Monday.

Should be pretty easy to fix, but user experience here is very important.

Allow Outdated not documented and doesn't seem to work properly

I cannot find a reference to the "allow outdated" flag in the API documentation on rethinkdb.com/api.
The JavaScript driver has it in the r.table function; the Python driver actually has flags in several places (init, table, run).

Also, when I make a request with allow_outdated like this (in the Data Explorer):

r.table('stress', true).run()

I'd expect that the request is served even if there are only secondaries available for the table.
However, when I kill some of the primaries (there is still one replica for each shard online), I get

Runtime Error: cannot perform read: No master available
    r.table('stress')
    ^^^^^^^^^^^^^^^^^

So either the flag is ignored in the JavaScript driver, or the server doesn't handle this case properly?
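
For reference, the API docs elsewhere pass the flag as an option object rather than a bare boolean (a sketch based on the 1.x JavaScript API; worth confirming against the current driver):

r.table('stress', {useOutdated: true}).run()

If the bare true in the query above isn't a supported form, the driver silently dropping it would explain the behavior.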

API doc errors: map vs. concatMap, outerJoin, orderBy

Don_Pellegrino: Yup, I've just read through the list of functions, I found a copy-paste mistake too
Don_Pellegrino: map and concatMap are swapped
Don_Pellegrino: and the second half of the description of outerJoin is missing

@wmrowan -- could you fix this up when you get the chance?

web ui table view doesn't handle rapid resharding very well

Setup:

  1. Spin up a cluster with two machines.
  2. Run the stress client for a while to build up some data (I had 30k items).
  3. Shard the test.stress table into 4 shards

Reproduction:

  1. Start with one replica
  2. Set the number of replicas to 2
  3. Before backfilling completes, set the number of replicas back to 1
  4. Quickly set the number of replicas back to 2

Observe:
The progress bar section goes away.
The available-replicas count flickers between 0/4, 0/8, and 4/4, then settles on 4/8, where it stays until the backfill completes.

The expected behavior is that the progress bar section stays open and the replica count doesn't flicker.

Segmentation Fault on start

Suddenly it just stopped working:

$ rethinkdb
error: Error in arch/runtime/thread_pool.cc at line 323:
error: Segmentation fault from reading the address (nil).
error: Backtrace:
error: Sun Nov 11 13:08:02 2012

       1: rethinkdb() [0x76ed71]
       2: rethinkdb() [0x76f951]
       3: rethinkdb() [0x791234]
       4: +0xfcb0 at 0x7ff2b5d23cb0 (/lib/x86_64-linux-gnu/libpthread.so.0)
       5: rethinkdb() [0x79c37a]
       6: rethinkdb() [0x63a9a2]
       7: rethinkdb() [0x63e5c1]
       8: rethinkdb() [0x6285cc]
       9: rethinkdb() [0x632c6b]
       10: rethinkdb() [0x632cbb]
       11: rethinkdb() [0x7932cb]
       12: rethinkdb() [0x793319]
       13: rethinkdb() [0x79064e]
error: Exiting.
[5841] worker: Couldn't read job function: end-of-file received
[5840] worker: Couldn't read job function: end-of-file received
[5843] worker: Couldn't read job function: end-of-file received
[5840] worker: Failed to accept job, quitting.
[5841] worker: Failed to accept job, quitting.
[5843] worker: Failed to accept job, quitting.
[1]    5767 segmentation fault (core dumped)  rethinkdb serve

Uname:
Linux alejandro 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Not sure what to do.

(Minor) Warning: could not determine the version, using the default version '0.0-internal-unknown'

Following the instructions at http://www.rethinkdb.com/docs/build/ on Ubuntu 12.10 and Debian 6, make can't determine the version (but the build continues).

$ wget https://github.com/rethinkdb/rethinkdb/archive/next.zip
$ unzip next.zip
$ cd rethinkdb-next/src
$ make DEBUG=0
../scripts/gen-version.sh: Warning: could not determine the version, using the default version '0.0-internal-unknown' (defined in ../scripts/gen-version.sh)

Make primary key default happen on the server for eqJoin, between, and get

Currently the clients hardcode 'id' as the primary key. Here's what we need to do (a sketch of the intended behavior follows the list):

  1. Have the clients provide no default, and default to real primary key on the server.
  2. Still allow providing a value in clients (for future compatibility with secondary indexes).
  3. Verify that the docs are correct on this and have sufficient examples.
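
A sketch of the intended behavior (hypothetical tables; this mirrors how later versions of ReQL behave):

r.table('users').get(1)                               // server resolves the table's real primary key
r.table('posts').eqJoin('user_id', r.table('users'))  // joins against users' primary key by default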

Fix docs for JS run, and add collect to docs

When running in Node:

r.table('example').run(function(thing){
    console.log(thing);
});

Only one "thing" will be returned, even though there are multiple in the table. When running in the console using runp(), an array of all the records are returned, which is correct. But when using run(), only one record is returned.

eqJoin method not found

Hi,

I just read about RethinkDB on HN, and I downloaded it and tried it out. It seems pretty good so far, and I'm trying out the joins. But when I tried to run eqJoin like this:

r.table('posts').eqJoin('user_id', r.table('users')).runp()

I got this error:

TypeError: Object [object Function] has no method 'eqJoin'

I have no problems running innerJoin like this:

r.table('posts').innerJoin(r.table('users'), function(post, user){return post('user_id').eq(user('user_id'))}).runp()

Thanks!

Issue when installing rethinkdb driver for Python.

I'm trying to install the drivers for Python 3.2. When I use sudo pip install rethinkdb, it gives me the following error. I have both Python 2 and 3 installed; I'm not sure if that could be the source of my problems.

Downloading/unpacking rethinkdb
  Running setup.py egg_info for package rethinkdb

Downloading/unpacking protobuf (from rethinkdb)
  Running setup.py egg_info for package protobuf
    Traceback (most recent call last):
      File "<string>", line 16, in <module>
      File "/tmp/pip-build/protobuf/setup.py", line 50
        print "Generating %s..." % output
                               ^
    SyntaxError: invalid syntax
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 16, in <module>

  File "/tmp/pip-build/protobuf/setup.py", line 50

    print "Generating %s..." % output

                            ^

SyntaxError: invalid syntax

----------------------------------------
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build/protobuf
 Storing complete log in /home/ubuntu/.pip/pip.log

Crashes almost immediately

I tried to follow the http://www.rethinkdb.com/docs/guides/quickstart/

~/rethink $ rethinkdb 
info: Loading data from directory rethinkdb_data
error: Inaccessible database file: "rethinkdb_data/metadata": Invalid argument
       Some possible reasons:
       - the database file couldn't be created or opened for reading and writing
       - the database file is located on a filesystem that doesn't support O_DIRECT open flag (e.g. in case when the filesystem is working in journaled mode)
       - user which was used to start the database is not an owner of the file
Crashing while already crashed. Printing error message to stderr.
Segmentation fault from reading the address (nil).
Trace/breakpoint trap (core dumped)

Linux Mint 13

$ uname -a
Linux virtualBox-stockholm 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

The directory rethinkdb_data is created, and I'm the owner of it and of the metadata file.
I'm not sure how to check for O_DIRECT; it's an ext4 local filesystem.

Beyond that I have no idea what's going on.

RethinkDB looks cool and I'm looking forward to trying it.

We need an easy way for people to upload core dumps.

Core dumps make debugging much easier, and we should make it easy for people to send them (ideally we'd be able to point them at a website with a big "upload" button).

This is a process improvement, but I think it's an important one.

Or if we're bogged down with other things, we could just create an email address like "[email protected]" and ask people to send the core dump there with the issue number as the subject.

Support running without DIRECT IO

Running with direct I/O makes sense in production, but when developers try the product, they often have encrypted or journaled file systems that don't support direct I/O. We should implement an alternative code path that opens the files without direct I/O and warns devs that it's a compatibility mode.
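
For what it's worth, later releases took roughly this path, making buffered I/O available and direct I/O an explicit opt-in server flag (flag name per later docs; treat it as an assumption for the version discussed here):

rethinkdb serve --direct-io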

Error in extproc/pool.cc at line 216:

The server shut down unexpectedly with the error:

error: Error in extproc/pool.cc at line 216:

which appears to be just the line:

crash_or_trap("Error on worker process socket");

That was the last message in log_file; let me know if there's any other debug info I can salvage.

r.tableCreate() not defined in JS

So r.table('table_name') is a shortcut for r.db('test').table('table_name').
However, r.tableCreate('new_table') doesn't work, in JavaScript at least.

Can we make all the methods that follow db() also work directly on r, using the default database 'test'? In that case, in the data explorer, I could suggest r.table(), r.tableCreate(), r.tableDrop(), r.tableList(), etc. to users.
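
For comparison, the explicit form already works, and the request is for the shorthand (a sketch; 'new_table' is a made-up name):

r.db('test').tableCreate('new_table').run()  // works today: database named explicitly
r.tableCreate('new_table').run()             // proposed: falls back to the 'test' database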

Database corruption (triggers segfault)

I'm getting a crash similar to #33
Except that instead of "Segmentation fault from reading the address (nil).", I have "Segmentation fault from reading the address 0xb0.".
Not sure if it is the same bug or a different one.

Release mode output:

info: Listening for intracluster connections on port 29016.
info: Connected to server "Leshrac" 98a56f75-29ae-43bc-9b18-4302d706faed
info: Listening for client driver connections on port 28016.
info: Listening for administrative HTTP connections on port 8081.
info: Server ready
error: Error in arch/runtime/thread_pool.cc at line 323:
error: Segmentation fault from reading the address 0xb0.
error: Backtrace:
error: Tue Nov 13 10:17:15 2012

       1: ./rethinkdb() [0x97bd2f]
       2: ./rethinkdb() [0x97d0e4]
       3: ./rethinkdb() [0x97b763]
       4: ./rethinkdb() [0x4d5522]
       5: +0xf8f0 at 0x7fbb65e498f0 (/lib/libpthread.so.0)
       6: ./rethinkdb() [0x9349ae]
       7: ./rethinkdb() [0x92e9aa]
       8: ./rethinkdb() [0x909871]
       9: ./rethinkdb() [0x4d3fbc]
error: Exiting.
[2524] worker: Couldn't read job function: end-of-file received
[2524] worker: Failed to accept job, quitting.
[2526] worker: Couldn't read job function: end-of-file received
[2530] worker: Couldn't read job function: end-of-file received
[2526] worker: Failed to accept job, quitting.
[2530] worker: Failed to accept job, quitting.
Segmentation fault

Debug mode output (waayyy more helpful I assume):

info: Our machine ID is e79fe4db-389d-4727-bf68-b86b3ac66eb8
info: Listening for intracluster connections on port 29016.
info: Connected to server "Leshrac" 98a56f75-29ae-43bc-9b18-4302d706faed
info: Listening for client driver connections on port 28016.
info: Listening for administrative HTTP connections on port 8081.
info: Server ready
error: Error in serializer/log/log_serializer.cc at line 416:
error: Assertion failed: [ls_token] 
error: Backtrace:
error: Tue Nov 13 10:26:38 2012

       1: lazy_backtrace_t::lazy_backtrace_t() at backtrace.cc:251
       2: format_backtrace(bool) at backtrace.cc:198
       3: report_fatal_error(char const*, int, char const*, ...) at errors.cc:65
       4: log_serializer_t::block_read(intrusive_ptr_t<ls_block_token_pointee_t> const&, void*, file_account_t*, linux_iocallback_t*) at log_serializer.cc:416
       5: log_serializer_t::block_read(intrusive_ptr_t<ls_block_token_pointee_t> const&, void*, file_account_t*) at log_serializer.cc:392
       6: translator_serializer_t::block_read(intrusive_ptr_t<ls_block_token_pointee_t> const&, void*, file_account_t*) at translator.cc:239
       7: mc_inner_buf_t::load_inner_buf(bool, file_account_t*) at mirrored.cc:166
       8: boost::_mfi::mf2<void, mc_inner_buf_t, bool, file_account_t*>::operator()(mc_inner_buf_t*, bool, file_account_t*) const at mem_fn_template.hpp:275
       9: void boost::_bi::list3<boost::_bi::value<mc_inner_buf_t*>, boost::_bi::value<bool>, boost::_bi::value<file_account_t*> >::operator()<boost::_mfi::mf2<void, mc_inner_buf_t, bool, file_account_t*>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf2<void, mc_inner_buf_t, bool, file_account_t*>&, boost::_bi::list0&, int) at bind.hpp:386
       10: boost::_bi::bind_t<void, boost::_mfi::mf2<void, mc_inner_buf_t, bool, file_account_t*>, boost::_bi::list3<boost::_bi::value<mc_inner_buf_t*>, boost::_bi::value<bool>, boost::_bi::value<file_account_t*> > >::operator()() at bind_template.hpp:21
       11: callable_action_instance_t<boost::_bi::bind_t<void, boost::_mfi::mf2<void, mc_inner_buf_t, bool, file_account_t*>, boost::_bi::list3<boost::_bi::value<mc_inner_buf_t*>, boost::_bi::value<bool>, boost::_bi::value<file_account_t*> > > >::run_action() at runtime_utils.hpp:57
       12: callable_action_wrapper_t::run() at runtime_utils.cc:58
       13: coro_t::run() at coroutines.cc:178
error: Exiting.
Crashing while already crashed. Printing error message to stderr.

The data on which the crash occurs is the result of running the stress client on a sharded (but non-replicated) database for 6 hours. After that I enabled replication and shut down the second server, because I wanted to test outdated reads. Upon restarting the second server, the crash occurred.

To reproduce, please download my RethinkDB data_dirs from here:
http://danielmewes.dnsalias.net/~daniel/.private/cluster_data_segfault.tar.bz2

Then start server 1 (should start up properly):

rethinkdb -d rethinkdb_cluster_data

and server 2 (should crash):

rethinkdb serve -d rethinkdb_cluster_data2 -o 1 -j localhost:29015

Floating point exception (core dumped)

Cool database, guys! (And I believe this is the first time I say this in my life, like "cool database!" 👍)

But it crashed within 15 minutes of playing with it :(

Well, basically I created a table (tv_shows), added a second server, and sharded the table into two shards. I started inserting as much data as possible, then Ctrl+C'ed the second server. I declared it dead from the dashboard. Then I started the second server again. It said to stop the "dead" server. I stopped it. And got this:

info: Server ready
info: Connected to server "Riker" de049dca-22f3-4363-8fd0-e470f5e84606
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"replica_affinities":{"00000000-0000-0000-0000-000000000000":1},"ack_expectations":{"00000000-0000-0000-0000-000000000000":1}}}}
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"shards":["[\"\",\"S8362cf0e-0472-4e83-b42e-8b1c15c17891\"]","[\"S8362cf0e-0472-4e83-b42e-8b1c15c17891\",null]"],"primary_pinnings":{"[\"\",\"S8362cf0e-0472-4e83-b42e-8b1c15c17891\"]":null,"[\"S8362cf0e-0472-4e83-b42e-8b1c15c17891\",null]":null},"secondary_pinnings":{"[\"\",\"S8362cf0e-0472-4e83-b42e-8b1c15c17891\"]":[],"[\"S8362cf0e-0472-4e83-b42e-8b1c15c17891\",null]":[]}}}}
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"replica_affinities":{"00000000-0000-0000-0000-000000000000":0},"ack_expectations":{"00000000-0000-0000-0000-000000000000":1}}}}
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"replica_affinities":{"00000000-0000-0000-0000-000000000000":0},"ack_expectations":{"00000000-0000-0000-0000-000000000000":1}}}}
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"shards":["[\"\",null]"],"primary_pinnings":{"[\"\",null]":null},"secondary_pinnings":{"[\"\",null]":[]}}}}
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"replica_affinities":{"00000000-0000-0000-0000-000000000000":1},"ack_expectations":{"00000000-0000-0000-0000-000000000000":1}}}}
info: Disconnected from server "Riker" de049dca-22f3-4363-8fd0-e470f5e84606
info: Deleting /machines/de049dca-22f3-4363-8fd0-e470f5e84606
error: Namespace 2cc90e8a-83af-4280-b5c6-066942ad270c has unsatisfiable goals
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"shards":["[\"\",null]"],"primary_pinnings":{"[\"\",null]":null},"secondary_pinnings":{"[\"\",null]":[]}}}}
error: Namespace 2cc90e8a-83af-4280-b5c6-066942ad270c has unsatisfiable goals
info: Applying data {"datacenters":{"new":{"name":"Main"}}}
error: Namespace 2cc90e8a-83af-4280-b5c6-066942ad270c has unsatisfiable goals
info: Applying data {"machines":{"cf153bff-5cfa-4a11-b992-d3af4b24dd84":{"datacenter_uuid":"827da5e3-f416-42c5-aa86-ecb13ee8be0b"}}}
error: Namespace 2cc90e8a-83af-4280-b5c6-066942ad270c has unsatisfiable goals
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"replica_affinities":{"827da5e3-f416-42c5-aa86-ecb13ee8be0b":0},"ack_expectations":{"827da5e3-f416-42c5-aa86-ecb13ee8be0b":0}}}}
error: Namespace 2cc90e8a-83af-4280-b5c6-066942ad270c has unsatisfiable goals
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"replica_affinities":{"00000000-0000-0000-0000-000000000000":0},"ack_expectations":{"00000000-0000-0000-0000-000000000000":1}}}}

Here I started the second machine and slightly later Ctrl+C'ed it.

info: Connected to server <ghost machine> de049dca-22f3-4363-8fd0-e470f5e84606
info: Disconnected from server <ghost machine> de049dca-22f3-4363-8fd0-e470f5e84606
[3850] worker: Couldn't read job function: end-of-file received
[3850] worker: Failed to accept job, quitting.
[3856] worker: Couldn't read job function: end-of-file received
[3856] worker: Failed to accept job, quitting.
Floating point exception (core dumped)

Still, RethinkDB is awesome!

Python ReQL Doc Errors

A few mistakes:

  • The r['attr'] syntax doesn't work like shown in the examples, e.g. r.table('users').filter(r['age'] > 5).run() doesn't work. iirc this was the original motivation behind having R and imports like
import rethinkdb as r
from rethinkdb import R

or the alternative hacky module wrapper.

  • There are scattered uses of orderBy instead of order_by.
  • The grouped_map_reduce example has unbalanced parens on r.branch.
  • && and || should be & and |.
  • Some examples use let_var instead of letvar.

Guarantee failed: [size > 0 && _val[0] == resource_parts_sep_char[0]] resource path must start with a '/'

On a fresh Ubuntu 12.04 install (not in a VM), I got a crash:

error: Error in http/http.cc at line 37:
error: Guarantee failed: [size > 0 && _val[0] == resource_parts_sep_char[0]] resource path must start with a '/'
error: Backtrace:
error: Sun Nov 11 22:57:10 2012

       1: rethinkdb() [0x515d92]
       2: rethinkdb() [0x512f54]
       3: rethinkdb() [0x5e7e98]
       4: rethinkdb() [0x5ec0fc]
       5: rethinkdb() [0x5ee213]
       6: rethinkdb() [0x5ef732]
       7: rethinkdb() [0x5fa2d5]
       8: rethinkdb() [0x606f6e]
error: Exiting.
Crashing while already crashed. Printing error message to stderr.
Segmentation fault from reading the address (nil).
Trace/breakpoint trap

I got tons of warnings before:

warn: Error in collecting system stats (on demand): Could not open '/proc/net/dev': No such file or directory (errno = 2)
warn: Error in collecting system stats (on demand): Could not open '/proc/net/dev': No such file or directory (errno = 2)
warn: Error in collecting system stats (on demand): Could not open '/proc/net/dev': No such file or directory (errno = 2)
warn: Error in collecting system stats (on demand): Could not open '/proc/net/dev': No such file or directory (errno = 2)
warn: Error in collecting system stats (on demand): Could not open '/proc/net/dev': No such file or directory (errno = 2)
warn: Error in collecting system stats (on demand): Could not open '/proc/net/dev': No such file or directory (errno = 2)

rethinkdb --version returns "rethinkdb"...

It was built from the PPA this morning, not from source.

Compiling for maverick 10.10 x86_64

The gcc version is 4.4.5.
The whole error trace is:

CC clustering/administration/cli/admin_cluster_link.cc -o ../build/release/obj/clustering/administration/cli/admin_cluster_link.o
cc1plus: warnings being treated as errors
In file included from ./clustering/administration/logger.hpp:16,
                 from ./clustering/administration/log_transfer.hpp:8,
                 from ./clustering/administration/metadata.hpp:15,
                 from ./clustering/administration/issues/machine_down.hpp:11,
                 from ./clustering/administration/admin_tracker.hpp:15,
                 from ./clustering/administration/cli/admin_cluster_link.hpp:10,
                 from clustering/administration/cli/admin_cluster_link.cc:3:
./rpc/mailbox/typed.hpp: In member function 'void mailbox_t<void(a0_t, a1_t, a2_t, a3_t)>::read_impl_t::read(read_stream_t*) [with arg0_t = int, arg1_t = timespec, arg2_t = timespec, arg3_t = mailbox_addr_t<void(boost::variant<std::vector<log_message_t, std::allocator<log_message_t> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>)>]':
./rpc/mailbox/typed.hpp:411: error: 'arg1.timespec::tv_nsec' may be used uninitialized in this function
./rpc/mailbox/typed.hpp:412: error: 'arg2.timespec::tv_nsec' may be used uninitialized in this function
make: *** [../build/release/obj/clustering/administration/cli/admin_cluster_link.o] Error 1

Add defaults to getattr, and defaults command on streams

I tried to add the following document in addition to what is mentioned in http://www.rethinkdb.com/docs/tutorials/superheroes/ (i.e., after Inserting Multiple Documents):

.............
heroes.insert({
"hero": "Wolverine",
"name": "James 'Logan' Howlett",
"magazine_titles": ["Amazing Spider-Man vs. Wolverine", "Avengers", "X-MEN Unlimited", "Magneto War", "Prime"],
"age": "26",
"appearances_count": 98
}).run()
.............

i.e., I added an extra field, 'age'.

But apparently heroes.filter({'age': '26'}).run() throws the error below:

rethinkdb.net.ExecutionError: Error while executing query on server: Object:
{
"appearances_count": 98,
"hero": "Wolverine",
"name": "James 'Logan' Howlett",
"magazine_titles": ["Amazing Spider-Man vs. Wolverine", "Avengers", "X-MEN Unlimited", "Magneto War", "Prime"],
"id": "fcd1ca53-9c78-4ce1-9b50-3e962cff2220"
}
is missing attribute "age"
db('python_tutorial').table('heroes').filter(((r['age'] == expr('26'))))

Since RethinkDB is schemaless, I'd expect that we could add or remove any fields as required and filter on them later.
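
A sketch of what a getattr default would allow here, using the .default() spelling that later versions of ReQL adopted (treat the exact name as an assumption for this era):

r.table('heroes').filter(function(hero) {
  return hero('age').default(null).eq('26');
})

Rows without an 'age' field would then match against the fallback instead of erroring.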

the server dies when updating a union

r.db('test').table('test').union(r.table('test')).update({}).run()
Version: rethinkdb 1.2.6-4-gf533ac (GCC 4.5.3)
error: Error in rdb_protocol/query_language.cc at line 2748:
error: Unreachable code: eval_call_as_view called on a function that does not return a view
error: Backtrace:
error: Wed Nov 14 14:49:29 2012

       1: ../src/rethinkdb/build/release/rethinkdb() [0x4b53b5]
       2: ../src/rethinkdb/build/release/rethinkdb() [0x8fd6cf]
       3: ../src/rethinkdb/build/release/rethinkdb() [0x87b8cf]
       4: ../src/rethinkdb/build/release/rethinkdb() [0x87b3b8]
       5: ../src/rethinkdb/build/release/rethinkdb() [0x885838]
       6: ../src/rethinkdb/build/release/rethinkdb() [0x88d855]
       7: ../src/rethinkdb/build/release/rethinkdb() [0x8474aa]
       8: ../src/rethinkdb/build/release/rethinkdb() [0x847b2e]
       9: ../src/rethinkdb/build/release/rethinkdb() [0x68746c]
       10: ../src/rethinkdb/build/release/rethinkdb() [0x49f0a8]
error: Exiting.

Another server crash

This is on version 1.2.5-1-0ubuntu1~precise from the ppa.

2012-11-11T23:02:56.330463998 98.447105s error: Sun Nov 11 23:02:56 2012

1: /usr/bin/rethinkdb() [0x515d92]
2: /usr/bin/rethinkdb() [0x512f54]
3: /usr/bin/rethinkdb() [0x44b85f]
4: /usr/bin/rethinkdb() [0x5f3d49]
5: /usr/bin/rethinkdb() [0x608e26]
6: /usr/bin/rethinkdb() [0x60975e]
7: +0x7e9a at 0x7fa338be9e9a (/lib/x86_64-linux-gnu/libpthread.so.0)
8: clone+0x6d at 0x7fa3389174bd (/lib/x86_64-linux-gnu/libc.so.6)

I don't know how to symbolicate it, sorry.

Let me know if this is helpful or spammy.

Add constant time out-of-date count().

We have everything we need to do a constant-time out-of-date count operation, since we store a count of the number of keys present in the btree.

One user on github mentioned that he'd really like to have this feature since he's doing it by hand.

We could do this in 2 ways:
we could do it as an optimization of count that occurs if the thing being counted is a table with out-of-date set to true.

Or we could add an explicit command count_outdated which is only present on table objects.

The second one seems a bit uglier to me, although having this as an optimization seems a little bit magical, since it's a different meaning of outdated than anywhere else.
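
Sketched side by side (data explorer syntax; countOutdated is this issue's proposed name, not an existing command):

r.table('users', {useOutdated: true}).count()  // option 1: count notices the outdated table and uses the btree key count
r.table('users').countOutdated()               // option 2: explicit command, only on table objects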

Broken direct I/O support not caught on ReiserFS in journaled data mode

When I run RethinkDB on a ReiserFS file system that is mounted with the "data=journaled" option, I get the following error (debug executable):

info: Creating directory rethinkdb_data
info: Creating a default database for your convenience. (This is because you ran 'rethinkdb' without 'create', 'serve', or '--join', and the directory 'rethinkdb_data' did not already exist.)
error: This doesn't appear to be a RethinkDB data file.
Crashing while already crashed. Printing error message to stderr.
Segmentation fault from reading the address (nil).
Trace/breakpoint trap (core dumped)

Weirdly enough, the file seems to open fine with O_DIRECT, but then something goes wrong later. On another ReiserFS partition where data journaling is not enabled, RethinkDB works fine.

The directory rethinkdb_data is actually created, and contains the file log_file of 408 bytes and metadata of 524,288 bytes (in contrast to 2,097,152 bytes that get written on a working partition).

If I start RethinkDB afterwards, using the broken rethinkdb_data, this is what happens:

info: Loading data from directory rethinkdb_data
error: Error in serializer/log/log_serializer.cc at line 361:
error: Assertion failed: [ls_token] 
error: Backtrace:
error: Sat Nov 10 22:03:31 2012

       1: lazy_backtrace_t::lazy_backtrace_t() at backtrace.cc:251
       2: format_backtrace(bool) at backtrace.cc:198
       3: report_fatal_error(char const*, int, char const*, ...) at errors.cc:65
       4: log_serializer_t::block_read(intrusive_ptr_t<ls_block_token_pointee_t> const&, void*, linux_file_account_t*, linux_iocallback_t*) at log_serializer.cc:361
       5: log_serializer_t::block_read(intrusive_ptr_t<ls_block_token_pointee_t> const&, void*, linux_file_account_t*) at log_serializer.cc:337
       6: patch_disk_storage_t::patch_disk_storage_t(mc_cache_t*, unsigned int) at patch_disk_storage.cc:50
       7: mc_cache_t::mc_cache_t(serializer_t*, mirrored_cache_config_t*, perfmon_collection_t*) at mirrored.cc:1330
       8: scc_cache_t<mc_cache_t>::scc_cache_t(serializer_t*, mirrored_cache_config_t*, perfmon_collection_t*) at semantic_checking.tcc:194
       9: metadata_persistence::persistent_file_t::construct_serializer_and_cache(io_backender_t*, bool, std::string const&, perfmon_collection_t*) at persist.cc:249
       10: metadata_persistence::persistent_file_t::persistent_file_t(io_backender_t*, std::string const&, perfmon_collection_t*) at persist.cc:69
       11: run_rethinkdb_porcelain(extproc::spawner_t::info_t*, std::string const&, name_string_t const&, std::vector<host_and_port_t, std::allocator<host_and_port_t> > const&, service_ports_t, linux_io_backend_t, bool*, std::string, bool) at command_line.cc:239
       12: void boost::_bi::list9<boost::_bi::value<extproc::spawner_t::info_t*>, boost::_bi::value<std::string>, boost::_bi::value<name_string_t>, boost::_bi::value<std::vector<host_and_port_t, std::allocator<host_and_port_t> > >, boost::_bi::value<service_ports_t>, boost::_bi::value<linux_io_backend_t>, boost::_bi::value<bool*>, boost::_bi::value<std::string>, boost::_bi::value<bool> >::operator()<void (*)(extproc::spawner_t::info_t*, std::string const&, name_string_t const&, std::vector<host_and_port_t, std::allocator<host_and_port_t> > const&, service_ports_t, linux_io_backend_t, bool*, std::string, bool), boost::_bi::list0>(boost::_bi::type<void>, void (*&)(extproc::spawner_t::info_t*, std::string const&, name_string_t const&, std::vector<host_and_port_t, std::allocator<host_and_port_t> > const&, service_ports_t, linux_io_backend_t, bool*, std::string, bool), boost::_bi::list0&, int) at bind.hpp:820
       13: boost::_bi::bind_t<void, void (*)(extproc::spawner_t::info_t*, std::string const&, name_string_t const&, std::vector<host_and_port_t, std::allocator<host_and_port_t> > const&, service_ports_t, linux_io_backend_t, bool*, std::string, bool), boost::_bi::list9<boost::_bi::value<extproc::spawner_t::info_t*>, boost::_bi::value<std::string>, boost::_bi::value<name_string_t>, boost::_bi::value<std::vector<host_and_port_t, std::allocator<host_and_port_t> > >, boost::_bi::value<service_ports_t>, boost::_bi::value<linux_io_backend_t>, boost::_bi::value<bool*>, boost::_bi::value<std::string>, boost::_bi::value<bool> > >::operator()() at bind_template.hpp:21
       14: boost::detail::function::void_function_obj_invoker0<boost::_bi::bind_t<void, void (*)(extproc::spawner_t::info_t*, std::string const&, name_string_t const&, std::vector<host_and_port_t, std::allocator<host_and_port_t> > const&, service_ports_t, linux_io_backend_t, bool*, std::string, bool), boost::_bi::list9<boost::_bi::value<extproc::spawner_t::info_t*>, boost::_bi::value<std::string>, boost::_bi::value<name_string_t>, boost::_bi::value<std::vector<host_and_port_t, std::allocator<host_and_port_t> > >, boost::_bi::value<service_ports_t>, boost::_bi::value<linux_io_backend_t>, boost::_bi::value<bool*>, boost::_bi::value<std::string>, boost::_bi::value<bool> > >, void>::invoke(boost::detail::function::function_buffer&) at function_template.hpp:154
       15: boost::function0<void>::operator()() const at function_template.hpp:1014
       16: starter_t::run_wrapper(boost::function<void ()()> const&) at runtime.cc:54
       17: boost::_mfi::mf1<void, starter_t, boost::function<void ()()> const&>::operator()(starter_t*, boost::function<void ()()> const&) const at mem_fn_template.hpp:163
       18: void boost::_bi::list2<boost::_bi::value<starter_t*>, boost::_bi::value<boost::function<void ()()> > >::operator()<boost::_mfi::mf1<void, starter_t, boost::function<void ()()> const&>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf1<void, starter_t, boost::function<void ()()> const&>&, boost::_bi::list0&, int) at bind.hpp:307
       19: boost::_bi::bind_t<void, boost::_mfi::mf1<void, starter_t, boost::function<void ()()> const&>, boost::_bi::list2<boost::_bi::value<starter_t*>, boost::_bi::value<boost::function<void ()()> > > >::operator()() at bind_template.hpp:21
       20: boost::detail::function::void_function_obj_invoker0<boost::_bi::bind_t<void, boost::_mfi::mf1<void, starter_t, boost::function<void ()()> const&>, boost::_bi::list2<boost::_bi::value<starter_t*>, boost::_bi::value<boost::function<void ()()> > > >, void>::invoke(boost::detail::function::function_buffer&) at function_template.hpp:154
       21: boost::function0<void>::operator()() const at function_template.hpp:1014
       22: callable_action_instance_t<boost::function<void ()()> >::run_action() at runtime_utils.hpp:57
       23: callable_action_wrapper_t::run() at runtime_utils.cc:58
       24: coro_t::run() at coroutines.cc:178
error: Exiting.
Crashing while already crashed. Printing error message to stderr.
Segmentation fault from reading the address (nil).
Trace/breakpoint trap (core dumped)

If I copy the created rethinkdb_data to a different partition without this option, it remains unusable (suggesting that something went wrong during writing, not reading).

This is on Ubuntu 10.04 with the stock kernel: Linux starearth 2.6.32-42-generic #96-Ubuntu SMP Wed Aug 15 19:37:37 UTC 2012 x86_64 GNU/Linux

I suggest that RethinkDB explicitly checks for ReiserFS file systems with data=journaled on startup, and if it detects this setup, emits a warning that it might not operate properly.
(not sure what would happen if I did a remount into journaled mode while RethinkDB is running, but that case seems sufficiently edge-ish to not care about)

Or, as this configuration might be rather rare, add a note about it to the release notes and/or readme for the time being.

Specify that eCryptfs doesn't work in the error message

Installed per directions, and ran rethinkdb:

$ rethinkdb 
info: Creating directory rethinkdb_data
info: Creating a default database for your convenience. (This is because you ran 'rethinkdb' without 'create', 'serve', or '--join', and the directory 'rethinkdb_data' did not already exist.)
error: Error in arch/runtime/thread_pool.cc at line 323:
error: Segmentation fault from reading the address (nil).
error: Backtrace:
error: Sat Nov 10 12:38:29 2012

       1: rethinkdb() [0x8d0a82]
       2: rethinkdb() [0x8aceb4]
       3: rethinkdb() [0x483284]
       4: +0xfcb0 at 0x7fe53d8b9cb0 (/lib/x86_64-linux-gnu/libpthread.so.0)
       5: rethinkdb() [0x48804a]
       6: rethinkdb() [0x68cf41]
       7: rethinkdb() [0x692541]
       8: rethinkdb() [0x6b3d37]
       9: rethinkdb() [0x6b52be]
       10: rethinkdb() [0x4813fb]
       11: rethinkdb() [0x481488]
       12: rethinkdb() [0x47f30e]
error: Exiting.
[13082] worker: Couldn't read job function: end-of-file received
[13080] worker: Couldn't read job function: end-of-file received
[13084] worker: Couldn't read job function: end-of-file received
[13082] worker: Failed to accept job, quitting.
[13080] worker: Failed to accept job, quitting.
[13084] worker: Failed to accept job, quitting.
Segmentation fault

This is on Ubuntu 12.04 LTS (GNU/Linux 2.6.32-316-ec2 x86_64)

Seg fault when using a ppp connection

I installed following the instructions on the intro page. On starting rethinkdb, it fails with a segfault. The following is printed on the console. I am trying this on Ubuntu 12.10.

info: Loading data from directory rethinkdb_data
error: Error in arch/runtime/thread_pool.cc at line 323:
error: Segmentation fault from reading the address (nil).
error: Backtrace:
error: Sat Nov 10 08:08:43 2012

   1: rethinkdb() [0x504ce1]
   2: rethinkdb() [0x534151]
   3: rethinkdb() [0x530224]
   4: +0xfcb0 at 0x7ffe492a2cb0 (/lib/x86_64-linux-gnu/libpthread.so.0)
   5: rethinkdb() [0x52b76a]
   6: rethinkdb() [0x745742]
   7: rethinkdb() [0x749361]
   8: rethinkdb() [0x757ac7]
   9: rethinkdb() [0x762ad0]
   10: rethinkdb() [0x762b2b]
   11: rethinkdb() [0x5322db]
   12: rethinkdb() [0x532329]
   13: rethinkdb() [0x52d29e]

error: Exiting.
[14663] worker: Couldn't read job function: end-of-file received
[14663] worker: Failed to accept job, quitting.
[14671] worker: Couldn't read job function: end-of-file received
Segmentation fault (core dumped)

Admin interface picks up wrong IP address

After a fresh install on Ubuntu 11.10, I was surprised to see the admin interface (http://localhost:8080/#servers/[server-id]) claim that my server's IP address was 192.168.46.1 (while I was expecting something on the 192.168.1.0/24 subnet). It turns out 192.168.46.1 is the IP address of "vmnet1", a virtual network adapter that VMWare Workstation installed. I guess RethinkDB is using some heuristic to choose which adapter to report.

Not sure yet whether this has any practical consequences, but it was at least surprising.

Implement bug-report command

I propose implementing the following command

$ rethinkdb bug-report

The command should open people's favorite editor, let them write up a bug report, and automatically submit it to GitHub with all the relevant version information. It would then print the URL of the GitHub issue for the user.

Couldn't read job function: end-of-file received

This time I started 2 instances of RethinkDB (as in the quickstart). Previously I removed rethink_data2 just in case.

I've sharded tv_shows into 2 shards from the dashboard. That went ok.

I've started adding entries from 4 Ruby clients.

Benchmark.measure{ 10000.times { r.table('tv_shows').insert({ 'name'=>'Star Trek TNG' }).run } }

And got this:

info: Connected to server "Riker" 335151a3-3f78-4fae-a1a5-8aba2fb89ec3
info: Applying data {"rdb_namespaces":{"2cc90e8a-83af-4280-b5c6-066942ad270c":{"shards":["[\"\",\"S7fbf4705-e328-447c-8512-5e0a6ae71e1f\"]","[\"S7fbf4705-e328-447c-8512-5e0a6ae71e1f\",null]"],"primary_pinnings":{"[\"\",\"S7fbf4705-e328-447c-8512-5e0a6ae71e1f\"]":null,"[\"S7fbf4705-e328-447c-8512-5e0a6ae71e1f\",null]":null},"secondary_pinnings":{"[\"\",\"S7fbf4705-e328-447c-8512-5e0a6ae71e1f\"]":[],"[\"S7fbf4705-e328-447c-8512-5e0a6ae71e1f\",null]":[]}}}}
[6016] worker: Couldn't read job function: end-of-file received
Floating point exception (core dumped)

IndexedDB and sync

The ability to do synchronisation with the client would be compelling.

I have been looking at PouchDB, but I find your RethinkDB compelling.

Integrating client/server sync into your architecture would be amazing and would solve many, many problems as so many applications move to HTML5 and offline clients.

Data explorer seems fragile with respect to semicolons

ssutch2 on IRC reported that when he ran:

r.db('lazybeaver').table('log_entries').filter(function(e) { return e('message').eq('session start'); }).run()

in the data explorer it failed, but when he ran:

r.db('lazybeaver').table('log_entries').filter(function(e) { return e('message').eq('session start')}).run()

it worked.

This is a problem because the documentation we give people contains semicolons for things like filter queries (http://www.rethinkdb.com/api/).
