mongodb's Introduction

This is the Haskell MongoDB driver (client). MongoDB is a free, scalable, fast, document database management system. This driver lets you connect to a MongoDB server and update and query its data. It also lets you do administrative tasks, such as creating an index or looking at performance statistics.
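As a quick taste of the API, here is a minimal sketch of connecting and running a query; it assumes a server on localhost and uses placeholder database and collection names:

{-# LANGUAGE OverloadedStrings #-}

import Database.MongoDB

main :: IO ()
main = do
  -- open a connection to a local MongoDB server (address is an assumption)
  pipe <- connect (host "127.0.0.1")
  -- run an Action against the "mydb" database with the default write concern
  docs <- access pipe master "mydb" (rest =<< find (select [] "mycollection"))
  close pipe
  print docs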

Join the chat at https://gitter.im/mongodb-haskell/mongodb

Documentation

Dev Environment

It's important for this library to be tested with various versions of the MongoDB server and with different GHC versions. To achieve this we use Docker containers and docker-compose. This repository contains two files: docker-compose.yml and reattach.sh.

The docker-compose file describes two containers.

One container runs the MongoDB server. If you want a different version of the server, change the tag of the mongo image in docker-compose.yml. To start the MongoDB server, run:

docker-compose up -d mongodb

To stop the container without losing the data inside it:

docker-compose stop mongodb

Restart:

docker-compose start mongodb

If you want to remove the mongodb container and start from scratch:

docker-compose stop mongodb
docker-compose rm mongodb
docker-compose up -d mongodb

The other container is for compiling your code. By changing the tag of the image you can choose the GHC version you will be using. If you have never started this container, run:

docker-compose run mongodb-haskell

This starts the container and mounts your working directory at /opt/mongodb-haskell. If you exit the bash CLI, the container stops. To reattach to a stopped container, run the reattach.sh script. If you run docker-compose run again, it will create another container and all the work done by cabal will be lost. reattach.sh is a workaround for docker-compose's inability to pick up containers that have exited.

When you are done with testing, run:

docker-compose stop mongodb

The next time, you will need to run:

docker-compose start mongodb
reattach.sh

This starts your stopped container with the MongoDB server and picks up the stopped Haskell container.

mongodb's People

Contributors

acondolu, ajhannan, alevy, bgianfo, csaltos, dbalseiro, ezyang, fujimura, fumieval, gitter-badger, gregwebs, horus, jaccokrijnen, knsd, mschristiansen, neilco, pierremizrahi, ralphmorton, rrichardson, scott-fleischman, snoyberg, spl, srp, superbobry, tfausak, tvh, victordenisov, why-not-try-calmer, wojtnar, yuras


mongodb's Issues

Auto reconnect

When performing some maintenance tasks it is common to take the mongo server down and bring it back up. Currently, we have to restart our process for the connection to be reestablished. It would be nice if the library would give the option of auto-reconnecting.
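Until the driver offers such an option, one user-level workaround is to re-establish the Pipe and retry the action when it fails. A minimal sketch, assuming a single retry and that recreating the connection is acceptable (none of this is driver behaviour):

import Control.Exception (SomeException, try)
import Database.MongoDB

-- Run an Action on a fresh connection; if anything throws (e.g. because the
-- server was restarted), reconnect once and try again.
withRetry :: Host -> Database -> Action IO a -> IO (Either SomeException a)
withRetry h db act = do
    r <- attempt
    case r of
      Left _ -> attempt   -- second and last attempt, on a new connection
      ok     -> return ok
  where
    attempt = try $ do
      pipe <- connect h
      res  <- access pipe master db act
      close pipe
      return res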

Driver doesn't return all documents from the database

I inserted 7 000 000 documents into database.testCollection. After that I executed the following program and received only 97800 docs instead of the expected 100 000.

main = do
  config <- readConfig

  CD.withDB (database config) $ do
    cur <- M.find $ (M.select [] "testCollection") {M.limit = 100000}
    drainCursor cur

drainCursor :: M.Cursor -> M.Action IO ()
drainCursor cur = do
  mDoc <- M.next cur
  case mDoc of
    Nothing -> return ()
    Just doc -> do
      liftIO $ putStrLn $ show doc
      drainCursor cur

the error of insert

first:

let a = repeat 'a'

.... connect to the db
then:

Prelude Database.MongoDB> access  pipe master "local" $ insert "ac" ["txt"=: (take (1024*128) a)]
579dfb81421aa970b600002d
Prelude Database.MongoDB> access  pipe master "local" $ insert "ac" ["txt"=: (take (1024*129) a)]
-- never finishes; hangs indefinitely

I succeeded in inserting a string of length 1024*128, but when I inserted a 1024*129 one the call did not return (I waited about 10 minutes) and I had to press Ctrl+C.

The database is hosted on localhost.

SCRAM authentication not working with mongolab

For some reason mongodb on mongolab requires an additional message exchange round. The response contains "done":false, indicating that the client should send "saslContinue" once more. Here is a patch I applied to make it work, though I'm not familiar enough with SCRAM.

@Feeniks AFAIK you implemented SCRAM support. Any comments?

Build fails with nonce<1.0.5

#89 introduced withGenerator, which has only been available since nonce-1.0.5. Could we please also update the lower bound on nonce for the already released mongoDB-2.3.0.5 on Hackage? @VictorDenisov I believe you are the one who has permission to do that. Thank you.

Handle is closed

This Gist reproduces an error that I've hit in a service at work. It seems like what's happening is that the handle goes bad when the query goes wrong. Is there maybe something wrong with the strictness of the query, and it isn't being evaluated correctly? Or does any exception cause the handle to actually close?

It seems like this makes it hard or impossible to recover gracefully from any kind of "latent" error in a query.

Insert_ not returning

I've found that the insert_ function isn't returning. In the code below, the ensureUsers function gets as far as querying the database for root, so the pipe is working and it can query mongo, and (in this case) returns Nothing. Then the function just stalls on insert_. I'm using Arch Linux, mongo v3.4.3, and mongodb v2.3.

import Crypto.Hash
import Database.MongoDB
import qualified Data.Text as Text
import Data.Text(Text)
import Control.Monad.IO.Class

-- |Sends a value to the core document
accessCore :: MonadIO m => Pipe -> String -> Action m a -> m a
accessCore pipe suf =
  access pipe master (Text.pack ("grove-core-" ++ suf))

-- |Default root info
root :: Document
root = [ "username" := String "root"
       , "password" :=
         String (Text.pack (show (hashlazy "entry" :: Digest SHA3_512)))
       ]

-- |Makes sure there is a root user.
-- If there isn't, it is created
ensureUsers :: Pipe -> String -> IO ()
ensureUsers p suf = accessCore p suf
  (findOne (select ["username" := String "root"] "users") >>=
   (\u -> case u of
       Nothing -> insert_ "users" root
       _ -> return ()))

test suite compilation failure

Preprocessing test suite 'test' for mongoDB-2.1.0...
[1 of 1] Compiling Main             ( test/Main.hs, dist/build/test/test-tmp/Main.o )

test/Main.hs:8:1: error:
    Failed to load interface for ‘Spec’
    Use -v to see a list of the files searched for.

gridfs

Are there plans to implement the gridfs part of the driver?

Aggregate support for MongoDB >= 2.6

In MongoDB 2.6, aggregate was changed to return a cursor instead of an array of all results. I created a test (#15) to see if the current implementation of aggregate worked for 2.6, and I was surprised to discover that it did. Is this to be expected? Should a new version of aggregate be created for working with a cursor?
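For reference, the current aggregate returns a plain list of Documents; a small usage sketch (collection and field names are placeholders):

{-# LANGUAGE OverloadedStrings #-}

import Database.MongoDB

-- group documents by a "status" field and count them per group
countByStatus :: Action IO [Document]
countByStatus = aggregate "testCollection"
  [ ["$group" =: ["_id" =: ("$status" :: String), "n" =: ["$sum" =: (1 :: Int)]]] ]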

Examples using auth?

I'm getting pretty confused about how to authenticate using this library. Admittedly I'm a little new to some of the Haskell concepts, but could you provide an example of how to connect to a database with authentication?
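Not an official example, but roughly this pattern should work: open a Pipe, then run auth inside an access block against the database the user was created in (assumed here to be "admin"); the credentials and names are placeholders:

{-# LANGUAGE OverloadedStrings #-}

import Database.MongoDB

main :: IO ()
main = do
  pipe <- connect (host "127.0.0.1")   -- assumed server address
  -- authenticate in the database the user was defined in
  ok <- access pipe master "admin" (auth "myUser" "myPassword")
  if ok
    then do
      docs <- access pipe master "mydb" (rest =<< find (select [] "mycollection"))
      print docs
    else putStrLn "authentication failed"
  close pipe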

insertAll return number of inserted elements in case of duplicate key error

Hi,

I have a use case where I would like to use insertAll to insert up to 200 documents. I expect that some of the documents will collide with existing unique constraints, and I want to know how many of the documents were inserted. Currently I can't see that it's possible; what happens when I use insertAll is that I get this exception:

 Failure (WriteFailure 11000 "E11000 duplicate key error collection: test.hex index: xy dup key: { : 0, : 1 }")

Since I used insertAll instead of insertMany, all documents that did not collide were inserted, so I get the result I want in the database. But since I got an exception, I do not know how many of the documents were actually inserted. If I perform the same query in the mongo shell (v3.0.7):

> db.hex.insert([{"x" : 10, "y" : 20}, {"x" : 10, "y" : 11}], {ordered:false})
BulkWriteResult({
    "writeErrors" : [
        {
            "index" : 0,
            "code" : 11000,
            "errmsg" : "E11000 duplicate key error collection: test.hex index: xy dup key: { : 10.0, : 20.0 }",
            "op" : {
                "_id" : ObjectId("56912454723ef42349551400"),
                "x" : 10,
                "y" : 20
            }
        }
    ],
    "writeConcernErrors" : [ ],
    "nInserted" : 1,
    "nUpserted" : 0,
    "nMatched" : 0,
    "nModified" : 0,
    "nRemoved" : 0,
    "upserted" : [ ]
})
> 

I can see that mongodb also returned "nInserted" for this query. Is it possible to get that value with the current API? As far as I can tell it's not.
Would an extension to the API with this functionality be accepted? And if so, do you have any wishes for what the API should look like?

I have two ideas on how to implement this.

  1. Extend the WriteFailure constructor to also report the number of inserted documents. This would break backwards compatibility, however, so it may not be a good solution.
  2. Introduce a new method, called something like insertManyEither, that returns Either [Value] BulkWriteResult (see the sketch below).

Where BulkWriteResult is a type that contains both the Failure and nInserted, and possibly the other parameters that mongo returns, if we want a more generic type that could also be returned for other operations.
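A rough sketch of what option 2 could look like; every name and field below is only a proposal and does not exist in the driver:

import Control.Monad.IO.Class (MonadIO)
import Database.MongoDB

-- Proposed result type: the write failure plus the counters mongo reports.
data BulkWriteResult = BulkWriteResult
  { bwrFailure   :: Failure  -- e.g. the duplicate key error
  , bwrNInserted :: Int      -- how many documents actually made it in
  } deriving (Show)

-- Proposed variant of insertAll: Left with the inserted _id values when
-- everything succeeds, Right with the details when some writes fail.
insertManyEither :: MonadIO m
                 => Collection -> [Document] -> Action m (Either [Value] BulkWriteResult)
insertManyEither = undefined  -- sketch only, no implementation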

allCollections always returns an empty list

I am new to Haskell and the mongodb driver, so apologies if I got something wrong, but the allCollections command always returns an empty list for me. I followed the whole tutorial, so there should be a posts collection. I can also query this collection and get the data that had been ingested. Nonetheless, it does not appear in the list returned by allCollections.

MongoDB Driver

Hi

I've not found any other way to get hold of you.

Thank you for all the work you've been doing on your MongoDB Haskell driver; we're always thrilled to see developers like you in the community. We have recently decided to formally decommission our Haskell driver, and in recognition of your work, we'd like to appoint you the official maintainer of the MongoDB Haskell driver.

We would like to transfer our Haskell driver repo to you. Even though your fork has since surpassed it, this would give you the full commit history and ownership of the driver. If not, we are happy to keep it in the MongoDB repo and point to your driver in the README.

Welcome to the MongoDB Developer community, perhaps long overdue at this point. :-) We'll invite you to our MongoDB Driver Developer Google group, which is a place for our community-supported driver devs to get the latest info on specs and development from the Drivers Team at MongoDB.

Please don't hesitate to let me know if you have any questions or concerns.

Warm regards,

Christian Kvalheim

Return update result

The update command returns a fairly sophisticated document detailing the outcome of the command.
This data should be communicated to the user.

SSL Support available?

Hi,
I have only a small question about your current mongodb driver for haskell :)

Is it currently possible, with the latest version, to connect to a mongodb server that only accepts SSL connections? (e.g. one configured following http://docs.mongodb.org/v2.6/tutorial/configure-ssl/ )

I was planning to experiment a little with your driver and couldn't find anything in the API (I hope I didn't overlook something). I'm a little worried about this because I would like to avoid, at all costs, the client/server communication being done in plaintext :/

If that's the case, keep this as an active issue :) If there is already some approach, just close this ^^.

Otherwise, do you maybe use some workarounds? (Some db management applications, like Robomongo, seem to have an option to use an external SSH tunnel before the connection is established.)
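For what it's worth, newer releases ship a Database.MongoDB.Transport.Tls module (it shows up in the build logs further down this page). A minimal sketch, assuming that module's connect function and placeholder host and database names:

{-# LANGUAGE OverloadedStrings #-}

import Database.MongoDB
import qualified Database.MongoDB.Transport.Tls as TLS

main :: IO ()
main = do
  -- open a TLS-encrypted connection instead of a plain one
  pipe <- TLS.connect "db.example.com" (PortNumber 27017)
  docs <- access pipe master "mydb" (rest =<< find (select [] "mycollection"))
  close pipe
  print docs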

Errors on stack build in ghc-7.10.3

Configuring mongoDB-1.2.0...
Building mongoDB-1.2.0...
Preprocessing library mongoDB-1.2.0...

/tmp/stack8019/mongoDB-1.2.0/System/IO/Pipeline.hs:5:14: Warning:
    -XDoRec is deprecated: use -XRecursiveDo or pragma {-# LANGUAGE RecursiveDo #-} instead

/tmp/stack8019/mongoDB-1.2.0/Database/MongoDB/Admin.hs:36:18:
    Could not find module ‘Data.HashTable’
    Perhaps you meant
      Data.Hashable (needs flag -package-key hashable-1.2.4.0@hasha_8GjadD03dR57AKCJdr90LD)
    Use -v to see a list of the files searched for.

Connecting to MongoDB Atlas

I have migrated from self-hosted MongoDB to a MongoDB Atlas hosted replica set. There seems to be a problem authenticating my Yesod application, which throws:

Error handler errored out: InternalError "ConnectionFailure <socket: 22>: hGetBuf: resource vanished (Connection reset by peer)"

Having migrated Ruby and NodeJS applications, I believe the problem is that the configuration needs to
include:

  • The SSL flag set to true.
  • The authSource to be specified.

MongoDB's official manual at https://docs.mongodb.com/manual/reference/connection-string/ suggests passing these along as URL parameters, but this is rejected by the URL parser in readHostPortM in Database.MongoDB.Connection.

Read preferences for replica sets

I'm putting this out there because I don't see how to do it right now, but I could easily be missing something.

Currently, we can create a replica set, but, unfortunately, we only get Pipes for the primary and secondary, which means we have to choose one or the other for a connection.

Preferably, we should be able to set a read preference to tell the replica set whether we prefer to read data from the primary or from the secondary members of the replica set.

Regex

I'm trying to understand how to craft a regex query. The following isn't working:

find (select ["_id" =: ["$regex" =:  "/sometext/"]] "somecollection") {options = [NoCursorTimeout]}

Edit: Never mind. I should've followed the types. There's a Regex constructor in Data.Bson.
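For anyone landing here later, a working variant of the query above using the Regex value from Data.Bson instead of a "/.../" string (collection and field names are kept from the question):

{-# LANGUAGE OverloadedStrings #-}

import Database.MongoDB
import Data.Bson (Regex(..))

-- match documents whose _id field matches the pattern "sometext";
-- the second Regex argument is the options string, e.g. "i" for case-insensitive
regexQuery :: Action IO [Document]
regexQuery = rest =<< find (select ["_id" =: Regex "sometext" ""] "somecollection")
                           {options = [NoCursorTimeout]}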

mongoDB-2.1.0.1 does not compile in Stackage Nightly

We're getting the following build error with GHC 8.0.1:

[   96s] [8 of 8] Compiling Database.MongoDB.Transport.Tls ( Database/MongoDB/Transport/Tls.hs, dist/build/Database/MongoDB/Transport/Tls.o )
[   97s] /usr/bin/ghc --make -fbuilding-cabal-package -O -prof -fprof-auto-exported -j8 -osuf p_o -hisuf p_hi -outputdir dist/build -odir dist/build -hidir dist/build -stubdir dist/build -i -idist/build -i. -idist/build/autogen -Idist/build/autogen -Idist/build -optP-include -optPdist/build/autogen/cabal_macros.h -this-unit-id mongoDB-2.1.0.1-GKnJPZVaLlh2ZPJUknq7aW -hide-all-packages -no-user-package-db -package-db dist/package.conf.inplace -package-id array-0.5.1.1 -package-id base-4.9.0.0 -package-id base16-bytestring-0.1.1.6-5dPoF8dzhwzBaEB2MvnmIS -package-id base64-bytestring-1.0.0.1-In9M41tLtcS9QYt3QpGpNY -package-id binary-0.8.3.0 -package-id bson-0.3.2.2-2yCwoS3L5PH8H5A0z3fzmX -package-id bytestring-0.10.8.1 -package-id containers-0.5.7.1 -package-id cryptohash-0.11.9-Jo6G7PdGhbfAJjI1HPDHk3 -package-id data-default-class-0.1.2.0-FYQpjIylblBDctdkHAFeXA -package-id hashtables-1.2.1.0-APwQgiqh5g64tajRd1QpCr -package-id lifted-base-0.2.3.6-2bOvPPa069a4hTIraKwUKB -package-id monad-control-1.0.1.0-HoNEBoNfniX3vjSfkI7WTT -package-id mtl-2.2.1-6qsR1PHUy5lL47Hpoa4jCM -package-id network-2.6.2.1-Li0aefQhyJzUSpQ0fLiXL -package-id nonce-1.0.2-I0MNQ6uP44lGAtrbCVnifP -package-id parsec-3.1.11-3WIMUA3wnqVJ4nTQk5XohJ -package-id random-1.1-54KmMHXjttlERYcr1mvsAe -package-id random-shuffle-0.0.4-LfeDYNPfwrQ2o9p6tw2M9l -package-id text-1.2.2.1-JAnD1x1IHr6H3rdrqlXcyH -package-id tls-1.3.8-Fdsoe0KOumm5FgGf3jDTgy -package-id transformers-base-0.4.4-25SoAegOdaF8rLEnnb5jPI -XHaskell2010 Database.MongoDB Database.MongoDB.Admin Database.MongoDB.Connection Database.MongoDB.Query Database.MongoDB.Transport Database.MongoDB.Transport.Tls Database.MongoDB.Internal.Protocol Database.MongoDB.Internal.Util -Wall -auto-all-exported
[   97s] ghc: unrecognised flag: -auto-all-exported
[   97s] 
[   97s] Usage: For basic information, try the `--help' option.

Cc: @mimi1vx

Thread unsafe connections

The current implementation adds a listening thread to every connection to mongodb. However, in multithreaded environments it's neither practical nor convenient to use only one connection or to start a listening thread for every connection; this is usually handled by pools of connections.
The suggestion is to add a connection type that doesn't start the listening thread. It would save resources and presumably be faster. If it doesn't show any performance gain during tests, the feature will be abandoned.
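Until something like this lands, a common user-level approach is to keep several ordinary connections in a pool, for example with the resource-pool package. A rough sketch (host, database, and pool sizing are assumptions):

{-# LANGUAGE OverloadedStrings #-}

import Data.Pool (createPool, withResource)
import Database.MongoDB

main :: IO ()
main = do
  -- one stripe, close idle connections after 60 seconds, at most 10 open Pipes
  pool <- createPool (connect (host "127.0.0.1")) close 1 60 10
  docs <- withResource pool $ \pipe ->
            access pipe master "mydb" (rest =<< find (select [] "mycollection"))
  print docs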

Benchmark build failure

Just started with the newest release.

[4 of 8] Compiling Database.MongoDB.Query ( Database/MongoDB/Query.hs, dist/build/bench/bench-tmp/Database/MongoDB/Query.o )

Database/MongoDB/Query.hs:52:1: error:
    Could not find module ‘Data.Default.Class’
    Use -v to see a list of the files searched for.
   |
52 | import Data.Default.Class (Default(..))
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Deploying to heroku

I'm trying to deploy a web app that uses this library to Heroku, but the problem is that I can't make the connection to mongo because of the URL format.

Heroku provides URLs to connect to in something like this format:

mongodb://user:pass@host:port/path

So a randomised version of that is:

mongodb://herokudwd0k6:[email protected]:23452/heroku_1j2xk6

But this format is not compatible with host or readHostPort, since I just get a "server: getAddrInfo: does not exist (Name or service not known)" error with host, or a format error with readHostPort. Do you know if there's a way I can connect to the database via a URL like the one above?
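One workaround until that URL format is supported: split the URL yourself, connect with an explicit Host, and authenticate against the database named in the path. A sketch with placeholder values (the real ones would come from the Heroku config URL):

{-# LANGUAGE OverloadedStrings #-}

import Database.MongoDB

main :: IO ()
main = do
  -- pieces pulled by hand out of mongodb://user:pass@host:port/db
  pipe <- connect (Host "example-host.mongolab.com" (PortNumber 23452))
  let db = "heroku_1j2xk6"
  _ok  <- access pipe master db (auth "herokudwd0k6" "secretpassword")
  docs <- access pipe master db (rest =<< find (select [] "posts"))
  close pipe
  print docs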

more general/abstract error handling

I expected something like this to work:

test = run $ (fetch $ select [] "ThisDoesNotExist") <|> (fetch $ select [] "thisExists")

Instead, I just get an Exception. Using findOne doesn't work either, since it's always a success.

IMO, the exception should only happen if the last alternative fails. At least that's the expected semantics of (<|>).
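A workaround sketch that catches the exception explicitly instead of relying on (<|>), using Control.Exception.Lifted from lifted-base (which the driver already depends on); the collection names are taken from the example above:

{-# LANGUAGE OverloadedStrings, ScopedTypeVariables #-}

import Control.Exception.Lifted (SomeException, try)
import Database.MongoDB

-- fall back to the second collection if fetching from the first one throws
fetchWithFallback :: Action IO Document
fetchWithFallback = do
  r <- try (fetch (select [] "ThisDoesNotExist"))
  case r of
    Right doc                 -> return doc
    Left (_ :: SomeException) -> fetch (select [] "thisExists")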

Deprecated Network dependency

Hi,

With Stackage lts-13.5, this driver now emits warnings of the following form:

In the use of type constructor or class ‘PortID’
   (imported from Database.MongoDB.Connection, but defined in Network):
   Deprecated: "The high level Network interface is no longer supported. Please use Network.Socket."

This suggests an impending break...

Lack of test suite

I am not really used to Haskell, so this may be a silly question: how do I run the project's test suite, if it has one?

I was considering improving the documentation, but I would need to make some changes in the code while writing the text to make sure that what I am saying is right without asking for your review.

provide a populate/deepPopulate function

Dereferencing an object reference into the query result is very common functionality.

I was trying a naive implementation along these lines until I hit #43:

-- |Dereference the object reference at the given label location
-- in the given document.
populate :: (MonadIO m, MonadBaseControl IO m)
         => [Label]
         -> Document
         -> Action m Document
populate [] doc     = return doc
populate (l:ls) doc = do
  cs  <- allCollections
  let val' :: ObjectId
      val' = at l doc
  ds <- forM cs $ \c -> (rest =<< find (select ["_id" =: val'] c))
  case headMay $ concat ds of
       Just newdoc -> populate ls (modifyField newdoc l doc)
       Nothing     -> error "Ouch!"


-- |Searches for a field via a label at a given document and
-- replaces the value with another document.
modifyField :: Document  -- ^ replacement value
            -> Label     -- ^ label used for the search
            -> Document  -- ^ document to search
            -> Document
modifyField _ _ [] = []
modifyField i l (d:ds)
  | label d == l = (l := Doc i) : ds
  | otherwise = d : modifyField i l ds

Apart from the obvious rough edges, I wonder if there is a more efficient way than iterating over all collections in order to find the match to an ObjectId.

deepPopulate would then probably take [[Label]] or so as its first argument instead of [Label].
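For context, usage of the sketch above would look something like this (collection and field names are placeholders):

-- dereference the ObjectId stored in the "author" field of the first post
example :: Action IO Document
example = do
  Just post <- findOne (select [] "posts")
  populate ["author"] post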
