storm's Issues

Index types should be moved to their own package

I had a look at the generated GoDoc, and it doesn't look bad, but the Index types clutter the API (Index, ListIndex, UniqueIndex).

I may be missing something, but shouldn't these be unexported? Are they of real value outside of the storm package?

Add support for nested buckets

BoltDB supports nested buckets, but it's not implemented in Storm.

As to "why do we need this?":

Many database applications have natural partitions, where typically only one partition is in use at a time.

With GitHub as an example: A user has many repositories, but looks mostly at a single repository at a time, with its:

  • Code (Git repo)
  • Issues
  • Pull requests
  • Settings
  • ...

In the world of relational databases this is solved with where clauses and joins.

Translated to the BoltDB world we could:

  1. Filter all your entities by some repositoryID, which gets messy, fast ... and how do we delete a repository?
  2. Store all in one big object graph, which makes filtering easier, but doesn't scale.
  3. Partition the application into buckets.

Storm currently stores everything below the root bucket with a bucket name derived from the struct name and package. This is perfectly fine for many applications.

Suggestion: New bucket tag that points to the parent bucket:

type Issue struct {
    ParentBucket      [][]string `storm:"container"`
}
  • This would work fine with Save operations.
  • But would not work with Find and One etc.

This could be solved by either

  • Adding a variadic (optional) parentBucket ...string to these methods:
func (s *DB) Find(fieldName string, value interface{}, to interface{}, parentBucket ...string) error {
    //
}
  • Adding a new concept of a container/bucket and a way to switch between them, so only the method receiver changes:
func (c *Container) Find(fieldName string, value interface{}, to interface{}) error {
    //
}
  • ...?

Not sure what is best/simplest, but it should be good enough as a foundation for discussion.

Some additional integration/helpers would be nice, but the above is a start.

Drop func should take something other than a string

I started using the Drop func today, and the API felt a little weird.

func (n *Node) Save(data interface{}) error
func (n *Node) Drop(bucketName string) error

The first one creates a bucket if it doesn't exist. I don't really know what the bucket name is, yet I need it if I want to drop the bucket, even though I don't really care about it. I guess the struct User will end up in the bucket "User", but now I have loosely typed code that isn't very refactor-friendly.

Batch support

Batch writes can be very important when dealing with concurrency.
Passing an option to Save could be a good way to activate Batch instead of Update for specific writes.
The option must be ignored if we are already in a transaction, though.

If bucket does not exist, Node.One() should return ErrNotFound

If a bucket does not yet exist, Node.One() currently fails with:
fmt.Errorf("bucket %s doesn't exist", info.Name)

This happens if, for example, you try to look up an object by ID but no objects of that type have been saved yet. It would be much more usable to simply return ErrNotFound, or alternatively to fail with an error that can be easily checked, rather than the generic fmt.Errorf(...).

write performance

Hi,
is there any way to speed this up?
https://play.golang.org/p/LYVmsy3ysD I need to insert 700,000 rows.

Mindaugass-MacBook-Pro:epg_bolt mindaugas$ time go run main.go --config config.ini
parsing xml
storm begins
num: 0, time: 2.85197ms
num: 1000, time: 2.026768464s
num: 2000, time: 6.523655515s
num: 3000, time: 9.663367105s
num: 4000, time: 11.105321635s
num: 5000, time: 11.937392684s
num: 6000, time: 14.066639588s
num: 7000, time: 17.177086791s
num: 8000, time: 21.325859315s
num: 9000, time: 20.814317701s
num: 10000, time: 25.686991882s
num: 11000, time: 25.98401264s
num: 12000, time: 29.643677727s
num: 13000, time: 38.521215154s

toBytes should consider Codec.Encode

It may be confusing to some (including me) that the keys are encoded with a different encoder (gob). Especially if you want to interact with the BoltDB file directly.

This would potentially be a breaking change, though.

[Feature Request] Update function on existing structures

I don't know how you folks implement an update on an existing structure, but I think we can all agree that an update function for existing structures would be awesome.

For now, for me it's: fetch, copy, delete in the DB, edit the copy, save.

How do you do updates on existing structures?

sort

Hmm, you wrote that indexed fields are sorted, but the result I get is unsorted:
db.From("tv3").Select(q.Gte("Timestamp", 1472981431)).Find(&videos)
I get:
15 {ID:25 Timestamp:1472982298 Channel:tv3 Filename:stream-1472982298.ts Dirname:2016/09/04/12 Duration:10}
16 {ID:26 Timestamp:1472982356 Channel:tv3 Filename:stream-1472982356.ts Dirname:2016/09/04/12 Duration:10}
17 {ID:27 Timestamp:1472982365 Channel:tv3 Filename:stream-1472982365.ts Dirname:2016/09/04/12 Duration:10}
18 {ID:9 Timestamp:1472981432 Channel:tv3 Filename:stream-1472981432.ts Dirname:2016/09/04/12 Duration:10}

OrderBy

Implement OrderBy(fieldName string) for Find, All, AllByIndex, Range and Select, using a tree before inserting the values into the slice.

Should there be an easy batch?

If I'm not mistaken, don't you have to call ".Batch" instead of ".Update" if you want to batch calls together? This would be pretty easy to implement, but I'm unsure whether it is necessary (do Update calls already get batched together?).

Just wondering. I like this code though; I was halfway through writing some of these features myself, but this saved me some work for sure.

LastInsertId() functionality for AutoIncrement()

Just an idea, but when using AutoIncrement() it would be useful to have a function similar to LastInsertId() in the sql package:

https://golang.org/pkg/database/sql/#Result

    // LastInsertId returns the integer generated by the database
    // in response to a command. Typically this will be from an
    // "auto increment" column when inserting a new row. Not all
    // databases support this feature, and the syntax of such
    // statements varies.

In a transaction you can get the new ID and use it for other things. Right now it's not straightforward to get the ID of an item that has just been inserted. I assume this could be returned from Bolt's NextSequence() or similar?

Support Sereal versioning

I notice Sereal versions its codec, currently at version 3.

I suggest we make sure that the current /sereal package targets version 3 (use NewEncoderV3, etc.), and if there is any interesting development in this area in the future, add /sereal4 or whatever.

Marshal/Encode vs Unmarshal/Decode

We currently have:

type EncodeDecoder interface {
    Encode(v interface{}) ([]byte, error)
    Decode(b []byte, v interface{}) error
}

Which in the different implementations delegates to Marshal/Unmarshal or Encode/Decode (gob).

Go's stdlib has a distinction between Marshal (convert to []byte) and Encode (stream to an io.Writer).

It all lives below the encoding package, so I think Storm's Codec is a good name for it. The interface, however, could maybe be renamed to:

type Marshaler interface {
    Marshal(v interface{}) ([]byte, error)
    Unmarshal(b []byte, v interface{}) error
}

This isn't of world-class importance, but it would maybe make the API a little more familiar to the common Gopher.

No need to index the Primary Key

An index for the Primary Key is redundant, it just adds a read op and some storage space, as the PKs are readily available in the top level bucket.

Finding non-indexed fields

Storm should be able to find non-indexed fields.
Indexes are there to speed up requests, but when you want to query a non-indexed field, you get stuck by the limitations of Storm.

Do not mix value and pointer receivers

See transaction.go line 4: Begin() has a value receiver while all the other methods have pointer receivers.
https://github.com/asdine/storm/blob/master/transaction.go#L4
The reason is this:
http://dave.cheney.net/2015/11/18/wednesday-pop-quiz-spot-the-race

The solution is simple, once you have a pointer receiver, make all methods on a type pointer receivers.

Not sure whether there's some other code in storm that does the same thing, Node.Begin() is what I ran into so far.

How to access the NextSequence in a Node.

BoltDB provides a very useful NextSequence method on every bucket that returns an auto-incrementing integer for the bucket. Since Storm does not provide direct access to the bucket (it's encapsulated in a Node), how does one access NextSequence?

One way to do it would be to add a NextSequence method on the Node. Would you be open to that?

Empty index buckets are not deleted

Just toying with the example from the README, I noticed that as you save (update) the same user with a different CreatedAt timestamp, multiple buckets inside __storm_index_CreatedAt are created, but (n-1) of them are empty. Not sure if it matters for boltdb in terms of performance, but it would be nice to delete an index bucket once it becomes empty.

View from boltbrowser:

  - User                                          | Path: User/__storm_index_CreatedAt/ ��    Time �
    - __storm_index_CreatedAt                     |       �    ��       ��d�)��`�\
      -  ��    Time ��    ��       ��c� �Q��\     | Buckets: 0
      -  ��    Time ��    ��       ��c� �I��\     | Pairs: 1
      -  ��    Time ��    ��       ��d� ��1�\     |
      -  ��    Time ��    ��       ��d�$����\     |
      -  ��    Time ��    ��       ��d�)��`�\     |
        10: 10                                    |
      - storm__ids                                |
        10:  ��    Time ��    ��       ��d�)��`�\ |
    - __storm_index_Email                         |
      [email protected]: 10                       |
    - __storm_index_Group                         |
      - staff                                     |
        10: 10                                    |
      - storm__ids                                |
        10: staff                                 |
    10:                                           |
      E��    User ��     ID     Group     Email   |
          Name     CreatedAt ��    ��    Time ��  |
         8��  10  staff  [email protected]  John  |
             ��d�)��`�\

New Encode-After Auto-Increment Copies object on save

Instead of saving an object with a set ID over the old object with that same ID, it copies it (but leaves the ID in the JSON alone), so I'm getting two objects with the same ID. After I generate the second object, I can update that one just fine, but the first one is effectively permanent.

I've also tried setting the ID field to unique, and a couple of other options. All I get is an "already exists" error.

DB.Save() panics if struct with embedded struct is passed by value

type Base struct {
    Ident string `storm:"id"`
}

type User struct {
    Base `storm:"inline"`
    Group     string `storm:"index"`
    Email     string `storm:"unique"`
    Name      string
    CreatedAt time.Time `storm:"index"`
}

func main() {
    db, err := storm.Open("my.db")
    if err != nil {
        log.Fatal("db open:", err)
    }
    defer db.Close()

    user := User{
        Base:      Base{Ident: "10"},
        Group:     "staff",
        Email:     "[email protected]",
        Name:      "John",
        CreatedAt: time.Now(),
    }

    if err = db.Save(user); err != nil {
        log.Fatal("save user:", err)
    }
}

panic: reflect.Value.Addr of unaddressable value

goroutine 1 [running]:
panic(0x1275e0, 0xc8200687c0)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/panic.go:481 +0x3e6
reflect.Value.Addr(0x15cd40, 0xc8200a2120, 0x99, 0x0, 0x0, 0x0)
    /usr/local/Cellar/go/1.6.2/libexec/src/reflect/value.go:240 +0x8c
github.com/asdine/storm.extractField(0xc82008ea80, 0xc820094070, 0xc820093140, 0x0, 0x0, 0x0)
    /Users/x/code/go/src/github.com/asdine/storm/extract.go:150 +0xa99
github.com/asdine/storm.extract(0x1724e0, 0xc8200a2120, 0x0, 0x0, 0x0, 0xc81fffaef9, 0x0, 0x0)
    /Users/x/code/go/src/github.com/asdine/storm/extract.go:102 +0xa37
github.com/asdine/storm.(*Node).Save(0xc82006c8a0, 0x1724e0, 0xc8200a2120, 0x0, 0x0)
    /Users/x/code/go/src/github.com/asdine/storm/save.go:12 +0x63
github.com/asdine/storm.(*DB).Save(0xc8200a2000, 0x1724e0, 0xc8200a2120, 0x0, 0x0)
    /Users/x/code/go/src/github.com/asdine/storm/save.go:108 +0x45

Note db.Save(user) rather than db.Save(&user) above, an easy mistake to make, if it's even considered a mistake.

DB.Save() should probably check and only accept pointers to structs to avoid these cases.
Alternatively extractField() should probably check CanAddr() first and fail with a more descriptive error.

Performance, decomposing requests and code generation

Storm's primary goal is to be simple to use.

It has never been about achieving top performance; that's why reflection is heavily used.
I am not a fan of reflection, it just happened naturally when designing the API.
I sacrificed raw speed for ease of use, and I am glad I did.

But I believe we can do something to avoid reflection when it isn't needed, and without a lot of work.
Basically, getting one or several records goes like this:

  • Using reflection on the given structure, extracting all the relevant information
  • Fetching the selected field and value in the indexes to get the matching IDs
  • Querying BoltDB for those IDs

The reflection boilerplate is essentially done in the first step; it is necessary so we can collect the following information:

  • The name of the bucket, which is the name of the struct
  • What field is the ID and is it a zero value?
  • What fields are indexed and what kind of index is used for the field

I think that if we can provide a set of methods that allows users to provide this information manually, we could achieve excellent performance.

These methods could also be used internally to simplify some parts of the code.

But the most interesting part is that we may be able to transform current struct declarations that use reflection into using the new methods mentioned above, at compile time, using go generate or similar.

Provide an easy way to re-index existing data

If you add a new index to an existing struct, any already-persisted data has to be re-indexed by re-saving each item. It would be nice to have a simple way to do this without each user writing the same loop.

Ideally storm would detect it and just do it automatically, but for that it would probably need to persist some info about indexes on each struct.

Support NextSequence

Nice project!

It would be even nicer if one could define ID uint64 and have it populated by the bucket's NextSequence for new entities.

Deprecating OpenWithOptions

Using something like this would make the DB creation more consistent:

db, err := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))

Ascending and Descending

With Bolt you can use a cursor and iterate from end to start as well as from start to end. Storm seems to always return everything in ascending order. Is there any way to reverse the order in which All() etc. return their records? This would make stuff like record paging super easy ;D

To clarify, I also mean changing the order while utilizing Limit() and Skip()

Segfault on DB.Save()

We use BoltDB as an index and metadata store for a file cache, used as part of GitBook's hosting system.

One of the edge nodes crashed with a SEGFAULT after a few million requests. I've attached stacktraces below.

It fails with a SEGFAULT in Tx.Commit() -> Bucket.spill() -> Node.write()

Environment

  • Go: 1.7rc2
  • BoltDB: v1.2.1-5-g05e441d - 05e441d7b3ded9164c5b912521504e7711dd0ba2
  • Storm: 97b157d


Pretty stacktrace
1: running [Created by edge.(*Refresher).loop @ .:0]
    runtime    panic.go:566        throw(0xe94a1f, 0x5)
    runtime    sigpanic_unix.go:27 sigpanic()
    bolt       node.go:205         (*node).write(0xc427ed4c40, 0xc42c5ffff0)
    bolt       bucket.go:598       (*Bucket).write(0xc427c3a700, 0xc425f01101, 0xc428b43158, 0x80)
    bolt       bucket.go:506       (*Bucket).spill(0xc427c3a640, 0xc425f01000, 0xc428b433c8)
    bolt       bucket.go:508       (*Bucket).spill(0xc427c3a600, 0xc425f00f00, 0xc428b43638)
    bolt       bucket.go:508       (*Bucket).spill(0xc42a6dba58, 0x99a82de, 0x147ce80)
    bolt       tx.go:163           (*Tx).Commit(0xc42a6dba40, 0, 0)
    bolt       db.go:602           (*DB).Update(0xc4200de3c0, 0xc428b438e0, 0, 0)
    storm      save.go:51          (*Node).Save(0xc4201a3620, 0xcda920, #9, 0x1, #9)
    storm      save.go:113         (*DB).Save(#4, 0xcda920, #9, 0x3, #8)
    macrophage index.go:108        thunderbolt.Set(0x134e4e0, #4, #8, 0x71, 0, 0, 0x60f, #2, #2, 0, ...)
    macrophage macro.go:132        (*Macrophage).MetaSet(#3, #7, 0x1f, #10, 0x51, 0x60f, #2, #2, #6, 0x24, ...)
    cache      write.go:67         CacheWriter.WriteMeta(#7, 0x1f, #10, 0x51, 0x134d760, #3, 0xc42c5ffd80, 0x60f, #1, #1, ...)
    edge       refresher.go:109    (*Refresher).refresh(#5, #7, 0x1f, #10, 0x51, 0x60f, #1, #1, #6, 0x24, ...)
    edge       refresher.go:179    (*Refresher).loop.func1(0xc4205fafc0, 0xc4205fafd0, 0xc42840e880, 0x71, #5, 0xc420054320)
    runtime    asm_amd64.s:2086    goexit()

Raw stacktrace

unexpected fault address 0xc42c600000
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc42c600000 pc=0x877d46]

goroutine 57198547 [running]:
runtime.throw(0xe94a1f, 0x5)
    /usr/local/go/src/runtime/panic.go:566 +0x95 fp=0xc428b42d60 sp=0xc428b42d40
runtime.sigpanic()
    /usr/local/go/src/runtime/sigpanic_unix.go:27 +0x288 fp=0xc428b42db8 sp=0xc428b42d60
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*node).write(0xc427ed4c40, 0xc42c5ffff0)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/node.go:205 +0x86 fp=0xc428b42ef8 sp=0xc428b42db8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).write(0xc427c3a700, 0xc425f01101, 0xc428b43158, 0x80)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:598 +0xb1 fp=0xc428b42f58 sp=0xc428b42ef8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).spill(0xc427c3a640, 0xc425f01000, 0xc428b433c8)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:506 +0x101 fp=0xc428b431c8 sp=0xc428b42f58
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).spill(0xc427c3a600, 0xc425f00f00, 0xc428b43638)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:508 +0x937 fp=0xc428b43438 sp=0xc428b431c8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Bucket).spill(0xc42a6dba58, 0x99a82de, 0x147ce80)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/bucket.go:508 +0x937 fp=0xc428b436a8 sp=0xc428b43438
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*Tx).Commit(0xc42a6dba40, 0x0, 0x0)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/tx.go:163 +0x125 fp=0xc428b437f8 sp=0xc428b436a8
github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt.(*DB).Update(0xc4200de3c0, 0xc428b438e0, 0x0, 0x0)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/boltdb/bolt/db.go:602 +0x10d fp=0xc428b43848 sp=0xc428b437f8
github.com/GitbookIO/cdn/vendor/github.com/asdine/storm.(*Node).Save(0xc4201a3620, 0xcda920, 0xc42a17a190, 0x1, 0xc42a17a190)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/asdine/storm/save.go:51 +0x271 fp=0xc428b43938 sp=0xc428b43848
github.com/GitbookIO/cdn/vendor/github.com/asdine/storm.(*DB).Save(0xc420375560, 0xcda920, 0xc42a17a190, 0x3, 0xc428a4aa00)
    /go/src/github.com/GitbookIO/cdn/vendor/github.com/asdine/storm/save.go:113 +0x43 fp=0xc428b43970 sp=0xc428b43938
github.com/GitbookIO/cdn/cache/macrophage.thunderbolt.Set(0x134e4e0, 0xc420375560, 0xc428a4aa00, 0x71, 0x0, 0x0, 0x60f, 0x57defcd8, 0x57defcd8, 0x0, ...)
    /go/src/github.com/GitbookIO/cdn/cache/macrophage/index.go:108 +0xaf fp=0xc428b439b0 sp=0xc428b43970
github.com/GitbookIO/cdn/cache/macrophage.(*Macrophage).MetaSet(0xc4201a4940, 0xc42805d820, 0x1f, 0xc42a53f9e0, 0x51, 0x60f, 0x57defcd8, 0x57defcd8, 0xc42558b440, 0x24, ...)
    /go/src/github.com/GitbookIO/cdn/cache/macrophage/macro.go:132 +0x15b fp=0xc428b43a80 sp=0xc428b439b0
github.com/GitbookIO/cdn/cache.CacheWriter.WriteMeta(0xc42805d820, 0x1f, 0xc42a53f9e0, 0x51, 0x134d760, 0xc4201a4940, 0xc42c5ffd80, 0x60f, 0x57def749, 0x57def749, ...)
    /go/src/github.com/GitbookIO/cdn/cache/write.go:67 +0x17c fp=0xc428b43b20 sp=0xc428b43a80
github.com/GitbookIO/cdn/edge.(*Refresher).refresh(0xc4203815c0, 0xc42805d820, 0x1f, 0xc42a53f9e0, 0x51, 0x60f, 0x57def749, 0x57def749, 0xc42558b440, 0x24, ...)
    /go/src/github.com/GitbookIO/cdn/edge/refresher.go:109 +0x7ff fp=0xc428b43ec0 sp=0xc428b43b20
github.com/GitbookIO/cdn/edge.(*Refresher).loop.func1(0xc4205fafc0, 0xc4205fafd0, 0xc42840e880, 0x71, 0xc4203815c0, 0xc420054320)
    /go/src/github.com/GitbookIO/cdn/edge/refresher.go:179 +0x12d fp=0xc428b43f70 sp=0xc428b43ec0
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc428b43f78 sp=0xc428b43f70
created by github.com/GitbookIO/cdn/edge.(*Refresher).loop
    /go/src/github.com/GitbookIO/cdn/edge/refresher.go:182 +0x2dd

AutoIncrement fails with embedded struct

This fails with an error:

type Base struct {
    ID int `storm:"id"`
}

type User struct {
    Base      `storm:"inline"`
    Group     string `storm:"index"`
    Email     string `storm:"unique"`
    Name      string
    Age       int
    CreatedAt time.Time `storm:"index"`
}

    dir, _ := ioutil.TempDir(os.TempDir(), "storm")
    defer os.RemoveAll(dir)

    // Open takes a list of optional options as last argument.
    // AutoIncrement enables auto-incrementing ids.
    db, err := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

    fmt.Println(err)
    // Output: <nil>

    user := User{
        Group:     "staff",
        Email:     "[email protected]",
        Name:      "John",
        Age:       21,
        CreatedAt: time.Now(),
    }

    err = db.Save(&user)

    fmt.Println(err)
    // Output: <nil>

    fmt.Println(user.ID)
    // Output: 1

Simple Query Engine

There should be a low level engine that executes the queries to BoltDB.
This engine would be able to fetch one or more values based on some options:

  • greater than, greater than or equal
  • less than, less than or equal
  • in, not in
  • skip, limit
  • count only (also for special queries)
  • etc.

This engine would also be used by indexes to fetch indexed values and could also be exported for those who want to make custom queries.

When possible, every call to BoltDB would be made via this engine.

Example with One:

  • db.One
  • Index
  • Engine fetches the id from the index
  • Engine fetches the data

This engine would be able to find records without indexes (see #42)

buckets

Hi,
is there any way to set the bucket for Save?
E.g. I want to save a &Users{} to both users_china and users_russia.

AutoIncrement doesn't set the ID field

AutoIncrement doesn't set the ID field of the variable, which makes it absurdly difficult to update with the .Save() function. Couldn't we call AutoIncrement before marshaling the data and store that value into the field, so I don't end up with a bunch of |Key:124, Value{ID=0}| entries in the database?

`storm:"index" storm:"unique"` is ignoring the unique constraint

If I have a struct field defined as:

SubURL string `storm:"unique"`

Then I can safely try to create the same entry in the DB multiple times, knowing that only the first one will succeed.

But if I define it as:

SubURL string `storm:"index" storm:"unique"`

Then "unique" is ignored and I get duplicate entries in the DB for the same contents.

Ah, just as I was writing this I tried

SubURL string `storm:"unique" storm:"index"`

And that worked, so the order of the tags seems to matter. Is this a feature or a bug?

Normalizing encoding

Values are encoded using the codec, but they are encoded differently when saved to an index. The same goes for the IDs.

BoltDB sorts the keys in a bucket, so the indexes used in Storm are naturally sorted. This way, a function like AllByIndex returns everything already sorted without doing anything.

The problem with the default codec (json) is that it doesn't keep the natural order of some types. If we have the three numbers 98, 99, 100, json will encode them as the three strings "98", "99", "100", which get sorted in this order: "100", "98", "99" (1 comes before 9, just as A comes before B).

That's the main reason toBytes exists, as it encodes using gob.

But not using the selected codec doesn't feel right.

Here are some possible solutions:

  • Use another codec by default, like gob, and remove toBytes
  • Sort values returned by the indexes, which would slow down reads
  • Keep everything as is
  • ...?

Return id after Save

Hi,

Could we return the auto-incremented ID after calling Save? Or could you point out how to get the ID?

Thanks.

Add write transactions support

It would be nice to do something like:

accountA.Amount -= 100
accountB.Amount += 100
db.Save(accountA)
db.Save(accountB)

As part of a transaction.

Consider caching of modelInfo for each type

From briefly looking at the code, it seems that every time an object is saved, the tags are re-parsed via reflection. It may not be a big deal in the grand scheme of things, but it could also help performance if metadata about each struct were cached for the duration of the process.

Also, see issue #50 which would make this issue moot.
