
mongo-rust-driver-prototype's Introduction

This Repository is NOT a supported MongoDB product


MongoDB Rust Driver Prototype

NOTE: This driver is superseded by the official MongoDB Rust driver, and will no longer be updated.

This branch contains active development on a new driver written for Rust 1.x and MongoDB 3.0.x.

The API and implementation are currently subject to change at any time. You should not use this driver in production as it is still under development and is in no way supported by MongoDB Inc. We absolutely encourage you to experiment with it and provide us feedback on the API, design, and implementation. Bug reports and suggestions for improvements are welcomed, as are pull requests.

Note: This driver currently only supports MongoDB 3.0.x and 3.2.x. This driver is not expected to work with MongoDB 2.6 or any earlier versions. Do not use this driver if you need support for other versions of MongoDB.

Installation

Dependencies

Importing

The driver is available on crates.io. To use the MongoDB driver in your code, add the mongodb package to your Cargo.toml:

[dependencies]
mongodb = "0.3.11"

Alternatively, you can build the MongoDB driver with SSL support. To do this, you must have OpenSSL installed on your system. Then, enable the ssl feature for mongodb in your Cargo.toml:

[dependencies]
# ...
mongodb = { version = "0.3.11", features = ["ssl"] }

Then, import the driver library (and its bson and doc macros) within your code.

#[macro_use(bson, doc)]
extern crate mongodb;

or with Rust 2018:

extern crate mongodb;
use mongodb::{bson, doc};

Examples

Here's a basic example of driver usage:

use mongodb::{Bson, bson, doc};
use mongodb::{Client, ThreadedClient};
use mongodb::db::ThreadedDatabase;

fn main() {
    let client = Client::connect("localhost", 27017)
        .expect("Failed to initialize standalone client.");

    let coll = client.db("test").collection("movies");

    let doc = doc! {
        "title": "Jaws",
        "array": [ 1, 2, 3 ],
    };

    // Insert document into 'test.movies' collection
    coll.insert_one(doc.clone(), None)
        .expect("Failed to insert document.");

    // Find the document and receive a cursor
    let mut cursor = coll.find(Some(doc.clone()), None)
        .expect("Failed to execute find.");

    let item = cursor.next();

    // cursor.next() returns an Option<Result<Document>>
    match item {
        Some(Ok(doc)) => match doc.get("title") {
            Some(&Bson::String(ref title)) => println!("{}", title),
            _ => panic!("Expected title to be a string!"),
        },
        Some(Err(_)) => panic!("Failed to get next from server!"),
        None => panic!("Server returned no results!"),
    }
}

To connect with SSL, use either ClientOptions::with_ssl or ClientOptions::with_unauthenticated_ssl and then Client::connect_with_options. Afterwards, the client can be used as above (note that the server will have to be configured to accept SSL connections and that you'll have to generate your own keys and certificates):

use mongodb::{Bson, bson, doc};
use mongodb::{Client, ClientOptions, ThreadedClient};
use mongodb::db::ThreadedDatabase;

fn main() {
    // Path to file containing trusted server certificates.
    let ca_file = "path/to/ca.crt";
    // Path to file containing client certificate.
    let certificate = "path/to/client.crt";
    // Path to file containing the client private key.
    let key_file = "path/to/client.key";
    // Whether or not to verify that the server certificate is valid. Unless you're just testing out something locally, this should ALWAYS be true.
    let verify_peer = true;

    let options = ClientOptions::with_ssl(ca_file, certificate, key_file, verify_peer);

    let client = Client::connect_with_options("localhost", 27017, options)
        .expect("Failed to initialize standalone client.");

    let coll = client.db("test").collection("movies");

    let doc = doc! {
        "title": "Jaws",
        "array": [ 1, 2, 3 ],
    };

    // Insert document into 'test.movies' collection
    coll.insert_one(doc.clone(), None)
        .expect("Failed to insert document.");

    ...
}

Testing

The driver test suite is largely composed of integration tests and behavioral unit tests, relying on the official MongoDB specifications repo.

The easiest way to thoroughly test the driver is to set your fork up with TravisCI. However, if you'd rather test the driver locally, you'll need to set up the integration and specification tests.

NOTE: Each integration test uses a unique database/collection to allow tests to be parallelized, and will drop their dependencies before running. However, effects are not cleaned up afterwards.

Setting up integration tests

All integration tests run on the default MongoDB port, 27017. Before running the tests, ensure that a test database is set up to listen on that port.

If you don't have MongoDB installed, download and install a version from the MongoDB Download Center. You can see a full list of versions being tested on Travis in the Travis config.

After installation, run a MongoDB server on 27017:

mkdir -p ./data/test_db
mongod --dbpath ./data/test_db

Setting up the specifications submodule

Pull in the specifications submodule at tests/json/data/specs.

git submodule update --init

Running Tests

Run tests like a regular Rust program:

cargo test --verbose

mongo-rust-driver-prototype's People

Contributors

10genola, alabid, amilajack, anderspitman, athre0z, ayosec, azasypkin, banyan, circuitcoder, gyscos, h2co3, ia0, jblondin, kali, kyeah, laplaceon, lilianmoraru, loomaclin, lukaspustina, magiclen, newpavlov, saghm, sbruton, thedodd, therustmonk, vkarpov15, yarn


mongo-rust-driver-prototype's Issues

Plans for further development

It seems like there is no active development on this driver at the moment, but that's not clear from reading the project's readme. I feel this situation makes it less likely that people will find the C driver wrapper (https://github.com/thijsc/mongo-rust-driver) that I wrote.

The C driver wrapper is actually usable in production already; if there are no plans to finish this driver, it seems like a more promising road forward for now.

Are there plans to turn this into a production ready driver? If not, could you clarify this in the Readme?

Changing name of default branch?

Right now, the default branch is called 1.0. This made sense when we were first writing the driver, since we were writing a new driver for Rust 1.0 as opposed to the earlier version for pre-stable Rust. However, Rust stable is now at 1.7 (and will continue updating with new minor versions every six weeks), so I'm thinking it might be more idiomatic to name it something that would be more version-agnostic (with respect to Rust). I don't have any strong opinion on what that new name should be, but given that master is basically defunct now and we probably aren't going to want to deal with merging 1.0 with it or anything like that, I'm thinking current or something along the lines of that might make more sense.

@vkarpov15 @kyeah any thoughts?

Result::unwrap() called on Err when no server is available

Hi guys,

I'm writing a REST api using the Nickel framework and your (Excellent!) Mongo driver library.
The REST api will be consumed by a mobile application and, in order to provide a stable backend, I'm making sure that all errors return a meaningful message.

To that end, I'm handling the scenario where a connection to MongoDB is unavailable in the GET /users request (lines 44-50):

#[macro_use] 
extern crate nickel;

extern crate rustc_serialize;

#[macro_use(bson, doc)]
extern crate bson;
extern crate mongodb;

// Nickel
use nickel::{Nickel, JsonBody, HttpRouter, MediaType};
use nickel::status::StatusCode::{self};

// MongoDB
use mongodb::{Client, ThreadedClient};
use mongodb::db::ThreadedDatabase;

// bson
use bson::{Bson};

// rustc_serialize
use rustc_serialize::json::{self,Json, ToJson};

#[derive(RustcDecodable, RustcEncodable)]
struct ApiResponse<T>{
    data: T
}

#[derive(RustcDecodable, RustcEncodable)]
struct ApiError{
    error: String
}


fn main() {

    let mut server = Nickel::new();
    let mut router = Nickel::router();

    router.get("/users",middleware!(|request,mut response|{

        response.set(MediaType::Json);

        let client = match Client::connect("127.0.0.1",27017){
            Ok(c)=> c,
            Err(e)=>{
                //EXPECTED OUTCOME
                response.set(StatusCode::InternalServerError);
                return response.send(json::encode(&ApiError{error: format!("{}",e)}).unwrap());
            }
        };

        let collection = client.db("rust-users").collection("users");

        let mut json_users : Vec<Json> = vec![];
        let mut cursor = collection.find(None, None).unwrap();

        for result in cursor{
            match result {
                Ok(doc)=>{
                    let json_user = Bson::Document(doc).to_json();
                    json_users.push(json_user)
                },
                Err(_)=>continue,
            }
        }

        (StatusCode::Ok,json::encode(&ApiResponse{data: json_users}).unwrap())

    }));

    server.utilize(router);
    server.listen("127.0.0.1:6767");
}

In order to test this, I deactivated the mongodb server and sent a GET request.
The application panicked producing the following message:

thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: OperationError("No servers available for the provided ReadPreference.")', ../src/libcore/result.rs:741

I do not believe this is correct behavior. The error should propagate up to the final result where I can handle it according to my needs.

Make project (partially) buildable with rustpkg

Renaming bson/bson.rc and libmongo/mongo.rc each to a respective lib.rs allows each of them to be built with rustpkg. For example, from the home directory of the project executing $ rustpkg build bson will create a directory ./build/bson and place the resulting library files there.

The resulting directory structure will be different from what we have now. Make would be able to call on rustpkg instead of directly to rustc. Users would not be able to supply their own flags for building.

I'm not 100% sure this is what we want; although rustpkg is included as part of the Rust distribution, it is used somewhat infrequently (though it is not totally abandoned). As far as I know there are no plans for a major rewrite of rustpkg but it is certainly not out of the question. Possibly the biggest downside is the fact that the libmongo crate has an external dependency (md5.c) which precludes users from simply running $ rustpkg build libmongo; they first have to compile the MD5 code using make.

Avoid redundancy in formattable! macro

Ideally, formattable! would publish two options:

formattable! {
  struct Foo {
     asdf: bool
  }
}

This would define Foo, (maybe?) build a default Foo::new(), and implement BsonFormattable. This requires all of the members of Foo to implement Zero (or Default if that ever lands). All the fields of the defined struct are public and have no documentation attributes.

formattable! {
  impl Foo {
    asdf: bool
  }
}

This works identically to the current implementation: Foo is an existing struct and the Foo::new() method already exists.

Right now, the struct implementation is blocked on Rust not allowing more than one item to be defined in a macro (items 2 through ∞ are silently ignored).

Move mocking code into its own package

Currently blocked on Rust being unable to export macros outside of a given crate.

Right now, the code defined in mock and mockable are bound to the libmongo crate, though they could certainly stand on their own. They should be moved once macros can be exported beyond #[macro_escape].

libmongo/test/indices randomly fails

While running the libmongo tests, indices failed once at an unwrap call. I have not been able to duplicate this but I suspect it is something weird with the test runner. A couple of tests print something to stdout and that output is not always shown, so the test runner seems bugged out for some reason.

Driver hangs when calling `collection_names`

The following code hangs on my machine (Windows 10 running MongoDB 3.0.9):

fn main() {
    let client = Client::connect("localhost", 27017)
        .expect("Failed to initialize standalone client.");
    let db = client.db("food_manager_test");
    let names = db.collection_names(None).expect("Error reading collection names");
}

Am I using the library incorrectly or is it just a bug?

authentication tests fail

The authentication tests are failing---it appears that the BSON parser is having trouble somewhere, but there's little indication where. Fortunately, very reproducible.

Example of using the connection pool

This isn't an issue per se, more of a request. I'm working on an api using mongo and this library and as of now, I've a connection instantiated when the server starts. Every time a request comes in (as of now all requests communicate w/ mongo), the connection is cloned. Is this the correct way to do this? Should I be using a mongodb::pool::ConnectionPool instead? If so, I'm not sure how to query using a PooledStream.

Thanks!

Decode f64 from ~[u8]

In order to produce Double(f64) variants when decoding, ~[u8] values need to be converted to doubles. There currently is no to_bytes() or from_bytes implementation for any float type in rust's libraries and even unsafe typecasting has so far not worked out.

At some point I may end up calling to a C function to do this computation since doing IEEE 754 conversion in Rust has proven to be fairly painful.
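For historical context this was a real gap; in modern Rust the conversion is built in via f64::from_le_bytes (BSON stores doubles as little-endian IEEE 754), so no C helper is needed today. A minimal standalone sketch, independent of the driver:

```rust
use std::convert::TryInto;

// Decode a little-endian 8-byte buffer as an IEEE 754 double.
// f64::from_le_bytes (stable since Rust 1.40) does the bit
// reinterpretation that once required unsafe code or C.
fn decode_f64_le(bytes: &[u8]) -> Option<f64> {
    // Take exactly 8 bytes; return None on short input.
    let arr: [u8; 8] = bytes.get(..8)?.try_into().ok()?;
    Some(f64::from_le_bytes(arr))
}

fn main() {
    let encoded = 3.5f64.to_le_bytes();
    println!("{:?}", decode_f64_le(&encoded)); // prints Some(3.5)
    assert_eq!(decode_f64_le(&[0u8; 4]), None);
}
```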

GridFS `put_get` test failing on MongoDB 3.2

Travis seems to have reported an issue with the test v3_2::client::gridfs::put_get. I haven't been able to replicate it (i.e. all of the 3.2 tests seem to be working locally for me), but that may be because I have version 3.2.3 installed rather than 3.2.4 (Archlinux hasn't updated to the latest release, it seems; I'll download it manually shortly to see if I can replicate the issue with it).

@kyeah, I don't suppose you have any idea what's causing this? I'm not too familiar with the GridFS parts of the code, so any advice/expertise would be much appreciated!

Rename 'libmongo' folder

Regardless of what we opt to do for #11, I think it would make sense to move src/libmongo to a different name. The lib prefix isn't required with the crate structure we're using so we could move it to src/mongo or something less general like src/driver.

Client::connect does not error as I would expect

Hello, first of all thank you for your work on a pure rust mongodb driver.

While I was experimenting I noticed that the Err arm of the client connect match does not execute when a server is down. The program didn't panic until the client was used later, and then responded with:

thread '<main>' panicked at 'Failed to insert document because: No servers available for the provided ReadPreference.'

This should have picked up that there was no mongod server running:

let client = match Client::connect("localhost", 27017) {
    Ok(c) => c,
    Err(e) => {
        panic!("Failed to initialize standalone client: {}", e);
    },
};

However, it continued on and only panicked later, here:

match coll.insert_one(doc.clone(), None) {
    Ok(_) => println!("Inserted document:"),
    Err(err) => panic!("Failed to insert document because: {}", err),
}

Just an observation. Again it's working great so far.

Documents converted from JSON may have their keys unordered

Using extra::json::from_str will place key/val pairs in a std::hashmap::HashMap, which is unordered. The only way around this might be to define our own JSON parsing function, replacing from_str and using our internal OrderedHashmap implementation (or LinkedHashmap, whenever it makes its way into libextra).
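The underlying behavior is easy to demonstrate with plain std types: a hash map makes no ordering guarantee, while a sequence of key/value pairs (the representation an ordered map builds on) preserves insertion order. A minimal sketch, not driver code:

```rust
use std::collections::HashMap;

fn main() {
    // Insertion order we want to preserve for a BSON document.
    let pairs = vec![("title", 1), ("array", 2), ("year", 3)];

    // A HashMap supports lookup but makes no guarantee about
    // the order in which keys are iterated.
    let map: HashMap<&str, i32> = pairs.iter().cloned().collect();
    assert_eq!(map["title"], 1);

    // Keeping the pairs in sequence preserves insertion order;
    // an ordered-map type layers faster lookup on top of this idea.
    let keys: Vec<&str> = pairs.iter().map(|&(k, _)| k).collect();
    println!("{:?}", keys); // prints ["title", "array", "year"]
}
```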

Question on how to use update_one function

Hi,

I am having a problem with the update_one function. I want to store time series data in MongoDB and, as suggested in many tutorials, I want to store one minute per document, appending each second's measurement to a vector field. My question is: how can I update the vector inside the document? How can I add a new measurement value each second?

Thanks in advance guys.

Do not require GC as much as feasible

If #15 comes to fruition it would be great if we could basically remove the need for GC. Given the relationship between mongo::client and mongo::db I'm still not 100% sure it's possible but we could certainly remove most of it.

Relicense under dual MIT/Apache-2.0

This issue was automatically generated. Feel free to close without ceremony if
you do not agree with re-licensing or if it is not possible for other reasons.
Respond to @cmr with any questions or concerns, or pop over to
#rust-offtopic on IRC to discuss.

You're receiving this because someone (perhaps the project maintainer)
published a crates.io package with the license as "MIT" xor "Apache-2.0" and
the repository field pointing here.

TL;DR the Rust ecosystem is largely Apache-2.0. Being available under that
license is good for interoperation. The MIT license as an add-on can be nice
for GPLv2 projects to use your code.

Why?

The MIT license requires reproducing countless copies of the same copyright
header with different names in the copyright field, for every MIT library in
use. The Apache license does not have this drawback. However, this is not the
primary motivation for me creating these issues. The Apache license also has
protections from patent trolls and an explicit contribution licensing clause.
However, the Apache license is incompatible with GPLv2. This is why Rust is
dual-licensed as MIT/Apache (the "primary" license being Apache, MIT only for
GPLv2 compat), and doing so would be wise for this project. This also makes
this crate suitable for inclusion and unrestricted sharing in the Rust
standard distribution and other projects using dual MIT/Apache, such as my
personal ulterior motive, the Robigalia project.

Some ask, "Does this really apply to binary redistributions? Does MIT really
require reproducing the whole thing?" I'm not a lawyer, and I can't give legal
advice, but some Google Android apps include open source attributions using
this interpretation. Others also agree with it.
But, again, the copyright notice redistribution is not the primary motivation
for the dual-licensing. It's stronger protections to licensees and better
interoperation with the wider Rust ecosystem.

How?

To do this, get explicit approval from each contributor of copyrightable work
(as not all contributions qualify for copyright, due to not being a "creative
work", e.g. a typo fix) and then add the following to your README:

## License

Licensed under either of

 * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
 * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.

and in your license headers, if you have them, use the following boilerplate
(based on that used in Rust):

// Copyright 2016 mongo-rust-driver-prototype Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.

It's commonly asked whether license headers are required. I'm not comfortable
making an official recommendation either way, but the Apache license
recommends it in their appendix on how to use the license.

Be sure to add the relevant LICENSE-{MIT,APACHE} files. You can copy these from the Rust repo for a plain-text version.

And don't forget to update the license metadata in your Cargo.toml to:

license = "MIT OR Apache-2.0"

I'll be going through projects which agree to be relicensed and have approval
by the necessary contributors and doing these changes, so feel free to leave
the heavy lifting to me!

Contributor checkoff

To agree to relicensing, comment with:

I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to chose either at their option.

Or, if you're a contributor, you can check the box in this repo next to your
name. My scripts will pick this exact phrase up and check your checkbox, but
I'll come through and manually review this issue later as well.

Could not find Cargo.toml

Here is my Cargo.toml

[package]
name = "pipe"
version = "0.1.0"
authors = ["Igor"]

[dependencies.bson]
git = "https://github.com/zonyitoo/bson-rs"

[dependencies.mongodb]
git = "https://github.com/mongodbinc-interns/mongo-rust-driver-prototype"

Then I do 'cargo build' and receive:

Unable to update https://github.com/mongodbinc-interns/mongo-rust-driver-prototype

Caused by:
  Could not find Cargo.toml in `/Users/igor/.cargo/git/checkouts/mongo-rust-driver-prototype-760be1e4122dc7eb/master`

New release?

The systems I'm working on have been updated to 3.2, so I need to update my projects as well.

I noticed a PR was merged last month for 3.2 support. Is there a chance we can get an update pushed out to crates.io?

Thanks for all your work!!

Multithreading support

The first attempt at multithreading support was made in #87, but the solution of aliasing Arc types and then adding traits is rather messy. There should be a cleaner solution implemented at some point to avoid forcing users to import both the alias types and the bulky traits.

Replace extra::net::tcp with std::rt::io::net::tcp

extra::net::tcp is planning on being completely removed once the new IO is finished. mongo::conn will need to be redone with reference to std::rt::io::net::tcp instead. Unfortunately, the two libraries have different APIs and different semantics so this is far from a find/replace change.

How to convert mongo bson objectId to a string

Apologies if this is the wrong place to ask, as it's not really an issue with the code itself, but rather a failure on my part to understand the documentation. I'm working on a tool to sync a collection of Mongo documents to a Postgres database. Here is the code in its entirety: http://is.gd/QYzkBD

This works, except the existing document ids get stored in the Postgres text column as the string ObjectId("566740710ed3bc0a8f000001"), and I need them as just the hex part. I've read the docs and the Mongo driver source, but I can't figure out how to convert it. It looks like ObjectId has a to_hex function, which is what I think I need, but the result of .get("_id") is of type Bson, so I get: error: no method named `to_hex` found for type `&bson::bson::Bson` in the current scope.

I'm still unfamiliar enough with rust to know how to convert from Bson to the underlying ObjectId, or if there is a better approach that I'm missing. Thanks!
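The missing step is to pattern-match the Bson value to extract the ObjectId first, e.g. a `Some(&Bson::ObjectId(ref oid)) => oid.to_hex()` arm, mirroring the title match in the README example. For reference, to_hex is just the byte-wise hex encoding of the 12 raw ObjectId bytes; a standalone sketch of that encoding (the oid_to_hex helper is hypothetical, not a bson API):

```rust
// Hypothetical helper (not part of the bson crate):
// hex-encode the 12 raw bytes of an ObjectId.
fn oid_to_hex(bytes: &[u8; 12]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    // The bytes behind ObjectId("566740710ed3bc0a8f000001").
    let bytes: [u8; 12] = [
        0x56, 0x67, 0x40, 0x71, 0x0e, 0xd3,
        0xbc, 0x0a, 0x8f, 0x00, 0x00, 0x01,
    ];
    println!("{}", oid_to_hex(&bytes)); // prints 566740710ed3bc0a8f000001
}
```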

MD5 has entered libstd

At our next version update we'll be able to remove our C implementation of MD5. Very exciting.

Speed up JSON parsing

As long as we are stuck with a separate JSON parser, we should try and make it nicer.

BSON serialization actually benchmarks surprisingly fast, so I'm not really on the lookout for improvements there. My guess is that the usage and implementation of tools/stream.rs are suboptimal.

Generate ObjectIds

There should be a function called statically from Document to generate a new ObjectId in the correct format.

Blocked on #20.
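For reference, an ObjectId is 12 bytes: a 4-byte big-endian Unix timestamp, 5 process-unique bytes (machine id and pid in the original spec, random bytes in later versions), and a 3-byte big-endian counter. A standalone sketch of assembling that layout, assuming the caller supplies the parts (make_object_id is a hypothetical helper, not driver API):

```rust
// Sketch: assemble the 12-byte ObjectId layout from its parts.
// timestamp: seconds since the Unix epoch (stored big-endian);
// unique: 5 process-unique bytes; counter: low 3 bytes are used.
fn make_object_id(timestamp: u32, unique: [u8; 5], counter: u32) -> [u8; 12] {
    let mut oid = [0u8; 12];
    oid[..4].copy_from_slice(&timestamp.to_be_bytes());
    oid[4..9].copy_from_slice(&unique);
    oid[9..].copy_from_slice(&counter.to_be_bytes()[1..]);
    oid
}

fn main() {
    let oid = make_object_id(0x5667_4071, [0x0e, 0xd3, 0xbc, 0x0a, 0x8f], 1);
    assert_eq!(&oid[..4], &[0x56, 0x67, 0x40, 0x71]);
    assert_eq!(&oid[9..], &[0x00, 0x00, 0x01]);
}
```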

~40ms find_one function on small collection

I'm working on a web api and have been trying to diagnose slow performance issues. Some of my endpoints take 400-500ms without doing much of anything. I put some timers in my code and found that the MongoDB calls were the slow ones. I then decided to create a very slimmed-down version of what I was doing to showcase the slowness in an issue. I'm new to MongoDB, so I may be doing something which isn't recommended. Below, I'll copy my Cargo.toml, main.rs, and output. I'm using the Rust nightly from Mar. 8th, 2016, since there are still build issues when running everything at the latest versions, including the latest nightly.

Cargo.toml dependencies:

serde = "0.6"
serde_macros = "0.6"

mongodb = "0.1"
bson = "0.1"

main.rs:

#![feature(custom_derive, plugin)]
#![plugin(serde_macros)]

#[macro_use(bson, doc)]
extern crate bson;
extern crate mongodb;
extern crate serde;
extern crate chrono;

use bson::{Document, Bson};
use mongodb::{Client, ThreadedClient};
use mongodb::db::{Database, ThreadedDatabase};
use mongodb::coll::Collection;
use serde::{Serialize, Deserialize};
use chrono::*;

#[derive(Serialize, Deserialize, Debug)]
struct MyDocument {
    #[serde(rename="_id")]
    pub an_int: usize,
    pub a_string: String,
    pub a_list_ints: Vec<usize>
}


fn main() {

    let client = Client::connect("localhost", 27017)
        .ok().expect("Failed to initialize standalone client.");

    clear_db(&client.db("test"));

    let coll = client.db("test").collection("documents");

    initialize_data(&coll);

    let item = find_by_an_int(&coll, 5);

    println!("Retrieved Item: {:?}", item);

    find_by_an_int(&coll, 2);
    find_by_an_int(&coll, 1);
    find_by_an_int(&coll, 3);
}

fn clear_db(db: &Database) {
    db.drop_database().ok().expect("Failed to drop database");
}

fn initialize_data(coll: &Collection) {
    let sample_datas = vec![
        MyDocument { an_int: 1, a_string: "Hello".to_string(), a_list_ints: vec![1,2,3] },
        MyDocument { an_int: 2, a_string: "Hello".to_string(), a_list_ints: vec![1,2,3] },
        MyDocument { an_int: 3, a_string: "Hello".to_string(), a_list_ints: vec![1,2,3] },
        MyDocument { an_int: 4, a_string: "Hello".to_string(), a_list_ints: vec![1,2,3] },
        MyDocument { an_int: 5, a_string: "Hello".to_string(), a_list_ints: vec![1,2,3] }
    ];

    let sample_docs = sample_datas.iter().map(|data| dao_to_doc(data)).collect::<Vec<_>>();

    coll.insert_many(sample_docs, None).ok().expect("Failed to initialize data");
}

fn find_by_an_int(coll: &Collection, an_int: usize) -> MyDocument {
    let filter = doc! { "_id" => (an_int as i32) };

    let utc_before_find = UTC::now();
    let find_one_doc = coll.find_one(Some(filter), None).ok().expect("Failed to find_one").unwrap();
    let utc_after_find = UTC::now();

    println!("ms elapsed for find: {:?}", (utc_after_find - utc_before_find).num_milliseconds());

    doc_to_dao(find_one_doc)
}

fn dao_to_doc<T: Serialize>(item: &T) -> ::bson::Document {
    let bson = ::bson::to_bson(item).ok().expect("Couldn't serialize to doc");
    match bson {
        Bson::Document(d) => d,
        _ => panic!("Couldn't serialize to doc")
    }
}

fn doc_to_dao<T: Deserialize>(doc: Document) -> T {
    ::bson::from_bson(Bson::Document(doc.clone())).ok().expect("Couldn't deserialize to struct")
}

output (using --release flag):

ms elapsed for find: 0
Retrieved Item: MyDocument { an_int: 5, a_string: "Hello", a_list_ints: [1, 2, 3] }
ms elapsed for find: 38
ms elapsed for find: 40
ms elapsed for find: 39

As you can see, find_one is taking around 40ms. If I make 3 or 4 mongodb calls per request, this really adds up. I was hoping my whole api could operate with elapsed times of under 20ms for most endpoints.

Is the library just slow? Is this the latency I should expect with mongo? Can I do something to improve the time?

EDIT: I also find it interesting that once in a while, calls take 0ms:

ms elapsed for find: 0

Only one document parsed in response from server

As of #41 in the 1.0 branch, Message::read_reply only parses one document from the server. This will be fixed in the future when we get around to implementing cursors.

Because of this, making multiple queries in a row may not get the correct results (again, just in the 1.0 branch) if the first query returns more than one document, so if you're testing this out and get a weird issue when making multiple queries, try limiting the first query (or any query besides the last) to just one document.

Implement authentication

Master currently has some basic code for doing authentication. It is blocked on two major components:

  • Get an MD5 implementation. There are OpenSSL bindings at https://github.com/kballard/rustcrypto. May take some effort to get appropriately packaged though.
  • Have run_command return an appropriate value on success. Right now a successful command always returns (), but the authenticate method needs to see the actual result of { getnonce: 1 }.

Fix issues caused by 0.7

0.7 has introduced a few significant changes:

  • += is no longer available. While this is temporary, it means any managed vector (@[T]) is hamstrung for the moment. The only place where I know this was present was in mockable.rs which is fine without it, but if any other code has this type it will need to be updated.
    • Oddly, &= and |= seem to still be available, despite what the release notes indicate.
  • Some types, notably &fn, are no longer copyable. It looks like the best way around this is to just inline them whenever a problem arises.

bson library has released v0.2.0

I see the bson lib has released v0.2.0 and I can't use the latest versions of it or Serde without this mongo lib upgrading to the newest bson, I believe. Mongo is currently using bson 0.1.4.

New Github Pages site

Because of the transition of the repo from the interns group to the MongoDB Labs group, it looks like our Github Pages website with the documentation isn't up anymore. We should get the new one up and running (and then change the link to the new URL).

Valgrind reports memory leaks

I'm not sure how memory leaks could develop but the BSON library alone leaks about 200K when running all the tests. I'm looking into what could cause this.

Aggregate utility objects/functions into a single crate

We have a lot of general-purpose code scattered around, bson::stream, bson::ord_hash, mongo::util and mongo::db::md5. In particular, if we are going to do ObjectId generation then we need MD5 to be available to the BSON library.

Encoding/decoding

Is there a way to encode/decode a struct to bson (or json to bson) for documents? I do know bson uses rustc serialize but I do not see any test or examples to perform such a task to make sure its done right (if its possible)
