etcd-io / bbolt
An embedded key/value database for Go.
Home Page: https://go.etcd.io/bbolt
License: MIT License
When the database db is opened in read-only mode (options = bolt.Options{ReadOnly: true}),
and the transaction tx is created in read-only mode (tx, ... = db.Begin(false))
Then calls to tx.Check() will systematically crash the process (stack trace at the bottom of this message).
It seems the freelist management code is not entirely aware of the limitations of read-only mode, in particular nil freelist instances.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x10c21c2]
goroutine 18 [running]:
github.com/coreos/bbolt.(*freelist).free_count(...)
xxx/src/github.com/coreos/bbolt/freelist.go:52
github.com/coreos/bbolt.(*freelist).count(0x0, 0x0)
xxx/src/github.com/coreos/bbolt/freelist.go:47 +0x22
github.com/coreos/bbolt.(*Tx).check(0xc4200ca000, 0xc4200ce000)
xxx/src/github.com/coreos/bbolt/tx.go:396 +0xdb
created by github.com/coreos/bbolt.(*Tx).Check
xxx/src/github.com/coreos/bbolt/tx.go:389 +0x67
Hello, I cross-compiled the command line tool as stated in the title, and then ran ./bolt check dbname.db. It resulted in "funlock error: operation not supported". From my inspection using truss, it seems the culprit is: flock(0x3,0x6,0x0,0x0,0x0,0x0) ERR#45 'Operation not supported'. The same issue happens for an empty database.
A DB created with the no-freelist-sync option won't persist its freelist to disk. Opening it later with the freelist option enabled will fool the db into loading a bad freelist, which might cause corruption.
=== RUN TestSimulateNoFreeListSync_1op_1p
consistency check failed (1 errors)
page 2: unreachable unfreed
db saved to:
/tmp/bolt-474295987
FAIL github.com/coreos/bbolt 66.403s
Opening the same issue as boltdb/bolt#731, because coreos/bbolt also crashes in the same way:
package main
import (
"log"
"math/rand"
"github.com/coreos/bbolt"
// "github.com/boltdb/bolt"
)
var BC = make(chan []byte)
func main() {
go Add(BC, "/opt/test.db")
s := make([]byte, 30)
for i := 0; i < 100000000; i++ {
rand.Read(s)
BC <- s
}
close(BC)
}
func Add(c chan []byte, bfile string) {
const bname = `bc41`
db, err := bolt.Open(bfile, 0600, nil)
if err != nil {
log.Fatal(err)
}
defer db.Close()
err = db.Update(func(tx *bolt.Tx) error {
_, err := tx.CreateBucketIfNotExists([]byte(bname))
if err != nil {
return err //fmt.Errorf("create bucket: %s", err)
}
return nil
})
if err != nil {
log.Fatal(err)
}
more := true
for more {
err := db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(bname))
for i := 0; i < 100000; i++ {
v := <-c
if v == nil {
more = false
return nil
}
if err := b.Put(v, []byte("0")); err != nil {
return err
}
}
return nil
})
if err != nil {
log.Fatal(err)
}
}
}
panic: page 3955 already freed
goroutine 5 [running]:
github.com/coreos/bbolt.(*freelist).free(0xc420074210, 0x7, 0x7fbb550ef000)
/home/jambo/golang/src/github.com/coreos/bbolt/freelist.go:143 +0x4ad
github.com/coreos/bbolt.(*node).spill(0xc42163d810, 0xc4209c7800, 0x55f920)
/home/jambo/golang/src/github.com/coreos/bbolt/node.go:363 +0x210
github.com/coreos/bbolt.(*node).spill(0xc42163d7a0, 0x0, 0x0)
/home/jambo/golang/src/github.com/coreos/bbolt/node.go:350 +0xbf
github.com/coreos/bbolt.(*node).spill(0xc42163d490, 0xc4229c36e0, 0xc420045ab0)
/home/jambo/golang/src/github.com/coreos/bbolt/node.go:350 +0xbf
github.com/coreos/bbolt.(*Bucket).spill(0xc420052580, 0xc4229c3600, 0xc420045d28)
/home/jambo/golang/src/github.com/coreos/bbolt/bucket.go:568 +0x4d3
github.com/coreos/bbolt.(*Bucket).spill(0xc4200900f8, 0x2116cdd17, 0x5721a0)
/home/jambo/golang/src/github.com/coreos/bbolt/bucket.go:535 +0x417
github.com/coreos/bbolt.(*Tx).Commit(0xc4200900e0, 0x0, 0x0)
/home/jambo/golang/src/github.com/coreos/bbolt/tx.go:160 +0x129
github.com/coreos/bbolt.(*DB).Update(0xc420088000, 0xc42006bf98, 0x0, 0x0)
/home/jambo/golang/src/github.com/coreos/bbolt/db.go:674 +0xf2
main.Add(0xc420072060, 0x4ea01b, 0x12)
/home/jambo/go/src/jambo/tests/error1/error1.go:50 +0x1ac
created by main.main
/home/jambo/go/src/jambo/tests/error1/error1.go:15 +0x5a
go version
go version go1.9.1 linux/amd64
uname -a
Linux bee 4.9.0-0.bpo.4-amd64 #1 SMP Debian 4.9.51-1~bpo8+1 (2017-10-17) x86_64 GNU/Linux
I have an app that takes regular backups of boltdb databases. Sometimes, for unknown reasons, the backups are corrupted.
I also have a restore UI that lets me browse and read from backups. Trying to open and read from these corrupted databases crashes my process. I'm using 4f5275f
unexpected fault address 0x8a6b008
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8a6b008 pc=0x42e0e2f]
goroutine 12 [running]:
runtime.throw(0x4a487e4, 0x5)
/usr/local/Cellar/go/1.10.2/libexec/src/runtime/panic.go:616 +0x81 fp=0xc4206eee00 sp=0xc4206eede0 pc=0x402d5b1
runtime.sigpanic()
/usr/local/Cellar/go/1.10.2/libexec/src/runtime/signal_unix.go:395 +0x211 fp=0xc4206eee50 sp=0xc4206eee00 pc=0x4042de1
github.com/coreos/bbolt.(*Cursor).search(0xc4206eefe0, 0xc4206ef118, 0x6, 0x20, 0x63)
.go/src/github.com/coreos/bbolt/cursor.go:255 +0x5f fp=0xc4206eef08 sp=0xc4206eee50 pc=0x42e0e2f
github.com/coreos/bbolt.(*Cursor).seek(0xc4206eefe0, 0xc4206ef118, 0x6, 0x20, 0x0, 0x0, 0x4063d84, 0x614e000, 0x0, 0x48d8300, ...)
.go/src/github.com/coreos/bbolt/cursor.go:159 +0xa5 fp=0xc4206eef58 sp=0xc4206eef08 pc=0x42e0725
github.com/coreos/bbolt.(*Bucket).Bucket(0xc4204976d8, 0xc4206ef118, 0x6, 0x20, 0xc4206ef118)
.go/src/github.com/coreos/bbolt/bucket.go:105 +0xde fp=0xc4206ef010 sp=0xc4206eef58 pc=0x42dc66e
github.com/coreos/bbolt.(*Tx).Bucket(0xc4204976c0, 0xc4206ef118, 0x6, 0x20, 0x6)
.go/src/github.com/coreos/bbolt/tx.go:101 +0x4f fp=0xc4206ef048 sp=0xc4206ef010 pc=0x42ebbef
Ubuntu amd-64 go 1.9.2
package main
import (
"fmt"
"log"
"time"
"github.com/coreos/bbolt"
)
var (
testBucket = []byte("test")
)
func initDB(path string) (*bolt.DB, error) {
db, err := bolt.Open(path, 0600, &bolt.Options{Timeout: 1 * time.Second})
if err != nil {
return nil, err
}
err = db.Update(func(tx *bolt.Tx) error {
// Ignore the delete error: the bucket may not exist on the first run.
_ = tx.DeleteBucket(testBucket)
_, err := tx.CreateBucketIfNotExists(testBucket)
if err != nil {
return err
}
return nil
})
if err != nil {
db.Close()
return nil, err
}
return db, nil
}
func formatHeight(a int) []byte {
return []byte(fmt.Sprintf("%08d", a))
}
func delBackwards(cb *bolt.Bucket, index int) error {
err := cb.Delete(formatHeight(index))
if err != nil {
return fmt.Errorf("delete failed: %v", err)
}
cc := cb.Cursor()
k, _ := cc.Last()
k2, _ := cc.First()
if k == nil && k2 != nil {
return fmt.Errorf("last and first are inconsistent at %d for %#v %#v", index, k, k2)
}
return nil
}
func test(n int, value string) {
db, err := initDB("bad_bolt.db")
if err != nil {
log.Fatalf("failed to initialize database: %v", err)
}
defer db.Close()
err = db.Update(func(tx *bolt.Tx) error {
cb := tx.Bucket(testBucket)
for i := 0; i < n; i += 1 {
err := cb.Put(formatHeight(i), []byte(value))
if err != nil {
return fmt.Errorf("put failed: %v", err)
}
}
return nil
})
if err != nil {
log.Fatalf("first tx failed: %v", err)
}
err = db.Update(func(tx *bolt.Tx) error {
cb := tx.Bucket(testBucket)
for i := n - 1; i >= 0; i -= 1 {
err := delBackwards(cb, i)
if err != nil {
return err
}
}
return nil
})
if err != nil {
log.Printf("second tx failed: %v", err)
} else {
log.Printf("%d %v OK", n, value)
}
}
func main() {
test(100, "0000000000000000")
test(100, "00000000000000000")
}
It's great to see coreos/bbolt take up the boltdb project.
I wonder if you have already added compaction, e.g. to allow values over 1MB without consuming unbounded disk space due to fragmentation? Some time back I made a fork that did this, so the code is already available:
https://github.com/bigboltdb/bolt
Bigboltdb is a fork of boltdb that supports database compaction and thus can store big values (over 1MB).
Writing big values causes fragmentation of the backing file, requiring regular compaction to avoid unbounded file growth. Bigboltdb provides a Compact() API call to do this, and a mechanism to establish regular, automatic compaction by setting the db.CompactAfterCommitCount field.
Other than this one change, bigboltdb is a simple, friendly fork of boltdb.
Hello folks,
Can we please be kinder to Ben Johnson on the front page of the project? I've been playing with BoltDB, and every time I pass through this text I feel bad about it. Our relationship with the person who created the project, and in fact did a great job doing so, gifting us good code, a good license, and good APIs, should really be one of gratitude. But the very first paragraph of the page reads as if we're blaming Ben for no longer actively working on the project. We're all in that same boat, and it just rubs the wrong way.
Here is a seed idea, but feel free to tune obviously:
"This is a continuation of Ben Johnson's great work on the design and implementation of the original BoltDB package. As the project stabilized, Ben stopped investing as much time in it, and we'd like to continue development, preserving the original principles and APIs while improving areas such as performance, reliability, and bug fixes when required."
By the same token, thank you very much for pushing this project forward in those terms and spending your own time on it, openly. I'll most likely be using this on real projects at some point.
Triggered by starting/restarting etcd
panic: page 2 already freed
goroutine 98 [running]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*freelist).free(0xc42022cb70, 0x46, 0x7fe98f92a000)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/freelist.go:143 +0x3c8
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*node).spill(0xc4201d8000, 0xc420048b18, 0x7)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/node.go:363 +0x1e0
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Bucket).spill(0xc42038c398, 0x1acf1e9b, 0x1518800)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/bucket.go:570 +0x17b
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Tx).Commit(0xc42038c380, 0x1acf0d1c, 0x1518800)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/tx.go:163 +0x11f
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTx).commit(0xc42022cc60, 0x0)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:192 +0x82
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTxBuffered).unsafeCommit(0xc42022cc60, 0xfc7900)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:264 +0x49
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTxBuffered).commit(0xc42022cc60, 0xfc4900)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:252 +0x80
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTxBuffered).Commit(0xc42022cc60)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:239 +0x66
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*backend).run(0xc420210ae0)
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:258 +0x13b
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.newBackend
/home/anthony/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:150 +0x344
If I open a database file that does not exist, bolt will return an error and still create the file, even when Options.ReadOnly == true. Why does bolt create the file in read-only mode?
Here is the code:
// test-open-file.go
package main
import (
"fmt"
"flag"
"os"
bolt "github.com/coreos/bbolt"
)
func check(filename string) (err error) {
db, err := bolt.Open(filename, 0600, &bolt.Options{ReadOnly:true})
if err != nil {
return
}
defer db.Close()
return
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) == 0 {
fmt.Fprintln(os.Stderr, "Please input filename")
os.Exit(1)
}
fmt.Printf("Open \"%s\"\n", args[0])
if err := check(args[0]); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
fmt.Println("ok")
}
Run the code:
go run test-open-file.go a-not-exist-file.db
Under Windows, it will create two files: a-not-exist-file.db and a-not-exist-file.db.lock. The error message is:
CreateFileMapping: Not enough storage is available to process this command.
Under Linux, it will create one file: a-not-exist-file.db. The error message is:
write a-not-exist-file.db: bad file descriptor
The error messages above are not friendly; why not just return "file does not exist"?
It would be very helpful if it were possible to use secondary indexes in bolt.
This is a sample implementation, but having it built in would make things easier and cleaner.
Is it possible to document the behaviour of ForEach as mentioned in boltdb/bolt#273?
We use bolt options like etcd does:
const initialMmapSize = 10 * 1024 * 1024 * 1024 // 10G
db, err := bolt.Open("my.db", 0600, &bolt.Options{
MmapFlags: syscall.MAP_POPULATE,
InitialMmapSize: initialMmapSize,
})
Restarting boltdb is very slow. From the system metrics, there are lots of disk reads during db restart. When Open returns, the whole db file is in the page cache; from this, I guess all bytes of the db file may be read into memory.
When memory cannot hold the whole db file and some pages of the file have been swapped out, boltdb's remap will also be very slow.
Are there any methods to speed up boltdb's restart and remap?
The file locking code has a bunch of time.Sleep retry logic; it should be closer to etcd's locking.
Cleanup file descriptors on failed database initialization
boltdb/bolt#725
Is there a feature to listen for changes in a collection?
Is there community interest in allocation-free reads? This was a suggestion for future work from @benbjohnson.
After zooming into the existing code, it may be feasible.
It could be reached in 2 steps:
1 - Provide an allocation-free Get within an already allocated Tx transaction - i.e., a Tx.Get(keys ...[]byte) with variadic keys argument - note it doesn't rely on any Bucket allocation
That first step gives us allocation-free batched reads
2 - Wrap it further into an allocation-free read outside of any Tx transaction (but still properly read-locking the db, etc.) through a DB.Get(keys ...[]byte)
That second step offers us allocation-free atomic reads
If interest is sufficient, I may contribute some pieces
The documentation says:
if you use a sortable time encoding such as RFC3339...
What other time encodings are acceptable? The documentation also states not to use RFC3339Nano because of the truncation issue, so I created my own fixed-length nano format:
RFC3339_FIXEDNANO = "2006-01-02T15:04:05.0000Z00:00"
.Put([]byte(o.Date.UTC().Format(RFC3339_FIXEDNANO)), buf)
While the insert succeeds, the keys are not stored in timestamp order. Thus, when I fetch using a cursor, the printout (i.e., fetch order) is insert order, not timestamp order.
What restrictions are there for creating a "sortable time encoding" that will work correctly?
package main
import (
"os"
"fmt"
bolt "github.com/coreos/bbolt"
)
func main() {
var err error
err = doWork()
if err != nil {
fmt.Fprintf(os.Stderr, "%s\n", err.Error())
os.Exit(1)
}
os.Exit(0)
}
func pe(err error) string {
if err != nil {
return err.Error()
}
return "nil"
}
func doWork() error {
db, err := bolt.Open("test.db", 0600, nil)
fmt.Fprintf( os.Stderr, "open db: %s\n", pe(err))
defer db.Close()
tx1, err := db.Begin(true)
fmt.Fprintf(os.Stderr, "01: tx 1: start rw: %s\n", pe(err))
tx2, err := db.Begin(true) // deadlocks here: tx1 still holds the writer lock
fmt.Fprintf(os.Stderr, "02: tx 2: start rw: %s\n", pe(err))
tx1.Rollback()
tx2.Rollback()
return nil
}
[jkayser@oel7latest cryptstore]$ bin/bolttest1
open db: nil
01: tx 1: start rw: nil
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [semacquire]:
sync.runtime_SemacquireMutex(0xc42006618c, 0x48e400)
/usr/local/go1.9.2/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc420066188)
/usr/local/go1.9.2/src/sync/mutex.go:134 +0xee
github.com/coreos/bbolt.(*DB).beginRWTx(0xc420066000, 0x0, 0x0, 0x0)
/home/jkayser/projects/jkayser/experiment/cryptstore/src/github.com/coreos/bbolt/db.go:566 +0x65
github.com/coreos/bbolt.(*DB).Begin(0xc420066000, 0xc42000c001, 0x4e9518, 0x17, 0xc420041ed0)
/home/jkayser/projects/jkayser/experiment/cryptstore/src/github.com/coreos/bbolt/db.go:515 +0x38
main.doWork(0x0, 0x0)
/home/jkayser/projects/jibe/general/cryptstore/src/cmd/bolttest1/bolttest1.go:37 +0x271
main.main()
/home/jkayser/projects/jibe/general/cryptstore/src/cmd/bolttest1/bolttest1.go:14 +0x26
[jkayser@oel7latest cryptstore]$
What's the best practice for doing multiple put operations, say, on a slice of a struct? Is it better to range the slice inside db.Update(), or do multiple db.Update() inside the range?
func saveInfo(infos []myInfoStruct) error {
return db.Update(func(tx *bolt.Tx) error {
for _, o := range infos {
buf, err := json.Marshal(o)
if err != nil {
return err
}
// Tx has no Put method; go through a bucket (bucketName is assumed to exist).
if err := tx.Bucket(bucketName).Put(o.key, buf); err != nil {
return err
}
}
return nil
})
}
or
func saveInfo(infos []myInfoStruct) error {
for _, o := range infos {
buf, err := json.Marshal(o)
if err != nil {
return err
}
err = db.Update(func(tx *bolt.Tx) error {
// Tx has no Put method; go through a bucket (bucketName is assumed to exist).
return tx.Bucket(bucketName).Put(o.key, buf)
})
if err != nil {
return err // breaks for-loop on any failure
}
}
return nil
}
In re-reading my code above, the 2nd option appears to be less "atomic" if you consider Put()'ing the entire slice as a single transaction. In the 1st option, if any Put() fails, they all fail because the Update() would return non-nil. In the 2nd option, several Put-Updates may succeed but the N-th could fail, resulting in that element plus all subsequent slice elements not being added.
Free page reclamation uses the allocating txid to reason about when to free a page. If no allocating txid is known, it falls back to deallocating based on a tx watermark, freeing only when all outstanding txs are closed (e.g., waiting until some laggy snapshot completes).
The fallback code is here:
https://github.com/coreos/bbolt/blob/12923fe56c105bca6efbbcc258cd762b4258333d/freelist.go#L132-L138
Ideally there'd always be a txid associated with the page being freed. Right now the code is doing some allocations that aren't being tracked.
None of the example code in the README shows how to properly check whether a bucket exists; all the sample code says "assume bucket exists."
Please add at least one code example demonstrating how to check whether a bucket exists.
https://github.com/coreos/bbolt/blob/b44cfbde695bad1a19cc09cf00ffb217ce98f038/tx.go#L48-L50
This is a piece of code in tx.go's init method; init is called by both beginTx and beginRWTx.
Since the db never runs two RWTx concurrently, the copy cost could actually be saved for a RWTx.
Could the tx.go init method be updated to save this cost? For example:
func (tx *Tx) init(db *DB) {
...
if tx.writable {
tx.meta = db.meta()
} else {
db.meta().copy(tx.meta)
}
...
}
Also, the comment "Copy the meta page since it can be changed by the writer" made me think there could be more than one writer at the same time...
OS: MacOS with RHEL VM
The db file got corrupted when macOS decided to restart by itself while my program was running in the RHEL VM. Following is the check output.
$ bolt check tmp.db
page 0: multiple references
page 0: invalid type: unknown<00>
panic: invalid page type: 0: 0
goroutine 5 [running]:
panic(0x4e4120, 0xc420010610)
/usr/lib/golang/src/runtime/panic.go:500 +0x1a1
github.com/boltdb/bolt.(*Cursor).search(0xc42003eba8, 0x7f50350f20f0, 0xa, 0xa, 0x1bb69)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/cursor.go:256 +0x429
github.com/boltdb/bolt.(*Cursor).seek(0xc42003eba8, 0x7f50350f20f0, 0xa, 0xa, 0x0, 0x0, 0x4f77a0, 0xc42000a3f0, 0x2, 0x2, ...)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/cursor.go:159 +0xb1
github.com/boltdb/bolt.(*Bucket).Bucket(0xc420078018, 0x7f50350f20f0, 0xa, 0xa, 0x0)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/bucket.go:112 +0x108
github.com/boltdb/bolt.(*Tx).checkBucket.func2(0x7f50350f20f0, 0xa, 0xa, 0x7f50350f20fa, 0x66, 0x66, 0x66, 0x0)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/tx.go:449 +0x70
github.com/boltdb/bolt.(*Bucket).ForEach(0xc420078018, 0xc42003ecc0, 0x0, 0xc42003ecf0)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/bucket.go:390 +0xff
github.com/boltdb/bolt.(*Tx).checkBucket(0xc420078000, 0xc420078018, 0xc42003eea0, 0xc42003eed0, 0xc4200540c0)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/tx.go:453 +0x135
github.com/boltdb/bolt.(*Tx).check(0xc420078000, 0xc4200540c0)
/opt/pindrop/include/go/src/github.com/boltdb/bolt/tx.go:404 +0x5f7
created by github.com/boltdb/bolt.(*Tx).Check
/opt/pindrop/include/go/src/github.com/boltdb/bolt/tx.go:379 +0x67
Is there a way to fix the db file by any means? I checked boltdb/bolt#348, and my version (ee30b748bcfbd74ec1d8439ae8fd4f9123a5c94e) is newer than that.
Note that it didn't happen again when I tried to reproduce it by powering off the virtual machine manually from macOS.
As previously requested in this bolt issue, adding a TTL to a key when it is put in the database would be quite handy. Badger does this with a very minimal API, and it would be great to see this added to bbolt.
I see several panic() calls in the code, but absolutely no recover() calls. Since you cannot recover another module's panics, your process is prone to dying if 1) you have a faulty database or 2) there are errors somewhere in the boltdb/bbolt code.
We're using boltdb in our software, and it runs on a large number of machines - pretty reliably. But sometimes, bad stuff does happen, and the application crashes in the boltdb code somewhere.
Shouldn't everything be so reliable that it doesn't bring your entire application down with it?
The coverage badge on the README is out of date too.
Data like compiler, OS, arch, release version, etc. The meta checksumming code will need to be updated somewhat.
I am getting an error using bbolt where meta has a valid page number but sometimes has the sentinel value 0xFFFFFFFFFFFFFFFF. Sample debugging output:
bolt: db: meta: meta0: *bolt.meta,&{3977042669 2 4096 0 {3 0} 4 5 10 8200348871670262819}, meta1: *bolt.meta,&{3977042669 2 4096 0 {3 0} 2 5 11 110609177022752452}, db.meta0.txid: bolt.txid,10, db.meta1.txid: bolt.txid,11
bolt: db: meta: meta0: *bolt.meta,&{3977042669 2 4096 0 {4 0} 18446744073709551615 5 12 15793416151478272846}, meta1: *bolt.meta,&{3977042669 2 4096 0 {3 0} 2 5 11 110609177022752452}, db.meta0.txid: bolt.txid,12, db.meta1.txid: bolt.txid,11
End-case logic needs to be added in case the freelist pgid is the sentinel value.
WriteTo copies the whole db; is anyone thinking about or working on a variant that can start from an existing backup?
It would be useful so a read-only backup could serve reads without blocking updates to another copy, in cases where delayed reads are acceptable.
https://jenkins-etcd-public.prod.coreos.systems/job/etcd-ci-ppc64/1876/console
--- FAIL: TestMetricDbSizeDefrag (0.64s)
metrics_test.go:95: expected less than 327680, got 327680 after defrag
The current implementation ignores whether error handling succeeds or not. If error handling itself fails, maybe record that failure to a log; otherwise it's hard to detect.
For example, in the DB.Update function, t.Rollback may fail. https://github.com/coreos/bbolt/blob/master/db.go#L651-L657
// If an error is returned from the function then rollback and return error.
err = fn(t)
t.managed = false
if err != nil {
// Write rollback failure to log?
_ = t.Rollback()
return err
}
Hi,
This is a followup on: #80
I have been testing with closing BoltDB and ran into panics in view transactions after calling Close(). After looking at the Close() code, I think I can conclude that it waits for write transactions to finish by locking db.rwlock, but it does not wait for read transactions.
The only read lock a view transaction keeps open is db.mmaplock, but the Close() function also only acquires a read lock on that, so it does not wait for them. Am I correct in this conclusion?
If so, I think the documentation should be amended to reflect that Close() does not wait for read transactions, or the Close() function should acquire a full (write) lock on db.mmaplock so that it also waits for read transactions. That said, I am not familiar enough with BoltDB's code to know what other consequences that might have.
I'm willing to submit a PR if you tell me which option you prefer :)
Origin from https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=832834
Since boltdb/bolt is not active, I think reporting this issue here is more appropriate.
We have the following issue,
--- FAIL: TestDB_Open_InitialMmapSize (5.21s)
db_test.go:416: unexpected that the reader blocks writer
I think it's caused by waiting only 5s for writing 134M of data. Maybe wait longer? At least I didn't hit the issue after changing the timeout to 180s.
And someone on the original issue complains that the test shouldn't depend on the hardware...
windows 10 x64
golang 1.9.1 and 1.8.4
bbolt v1.3.1-coreos.2 (no problem with v1.3.1-coreos.1)
error when calling bolt.DB.Close():
2017/10/10 11:05:57 bolt.Close(): funlock error: The segment is already unlocked.
#35 (Support no timeout locks on db files) led to this issue.
Is https://godoc.org/github.com/boltdb/bolt still the best place for documentation on bbolt, or is there an updated doc somewhere? I know features have been added since it was maintained at boltdb/bolt, and I don't know whether those added features are in those docs.
The current data access API assumes all data is in memory:
func (b *Bucket) Put(key []byte, value []byte) error
func (b *Bucket) Get(key []byte) []byte
Compared with other embedded dbs (like SQLite, BerkeleyDB), reading/writing blobs (maybe larger than 4 GB) is a common demand.
I hope bolt can add APIs to open a blob tx.
It seems bbolt performance is about 20 Put transactions per second (Windows 7, HDD).
https://github.com/maxim-ge/go-inv/blob/master/bbolt/bbolt_test.go
Use go test -v -run Test_bbolt_one_per_tr
This looks like a very low number; is any tuning possible?
PS: SSD is better - 1000 per second
Good morning!
Why this code allocates so much memory?
(Insert 10M records with keys "10000000"..."19999999" and empty values → 2G+ of allocated RAM; output below in gist.)
I decided to migrate 10M+ rows table from MySQL to BoltDB using UUID for keys. But when I tried, I ran out of memory, because process allocated 26G+ RAM during migration.
Then I found that it's about key length: 4 bytes are fine, but 8 bytes and more cause pain. Maybe it's hashing, caching, paging, or duplication in memory; I don't really know how it works.
Is this normal behaviour for Bolt (and other similar storages)? Can it be tuned? Or, maybe, it's not the way I should use Bolt?
macOS 10.13.4, go version go1.10.2 darwin/amd64
Am I doing something wrong here?
$ go get github.com/coreos/bbolt
$ echo $?
0
$ bolt
bolt: command not found
I go into the $GOPATH/src/github.com/coreos/bbolt directory, and I can't go build (it does nothing, but returns 0). make returns an error (make: *** No rule to make target 'build', needed by 'default'. Stop.)
I tried to store a db on a 9p_virtio shared file system, but it threw an 'invalid argument' error.
The db file gets created, but it seems bolt doesn't like something related to the file system.
Running on a regular file system works just fine.
I'm running Debian 9 in a qemu/kvm virtual machine, sharing the file system with the host through the 9p_virtio module, in 'mapped' mode.
Here is my fstab :
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/vda2 during installation
UUID=f3905f5b-7c6f-4093-b4c5-2f5c31037113 / ext4 errors=remount-ro 0 1
# swap was on /dev/vda1 during installation
UUID=9cd015f1-ce56-41c6-9f41-bc0048f9f053 none swap sw 0 0
# Host share (data-stk-oo)
data-stk-oo /mnt/storkshare 9p rw,sync,dirsync,relatime,trans=virtio,version=9p2000.L 0 0
The only log I get from Go is invalid argument:
2018/06/11 10:19:09 Info: Starting the server ...
2018/06/11 10:19:09 Info: Checking data folders
2018/06/11 10:19:09 Info: Loading database
2018/06/11 10:19:09 Info: No database found, creating new one
2018/06/11 10:19:09 Fatal: invalid argument
Could we cut a release (maybe v1.3.x-coreos)?
I would like to run functional-tests with the latest bbolt.
This is not a blind fork. etcd relies on boltdb pretty heavily and cares a lot about its reliability and stability. The upstream bolt works well enough for a lot of use cases but is not actively being developed or accepting PRs. Thus, we forked it for a faster development cycle and are maintaining it toward the goal of better reliability and stability, at least for the etcd use case.
I tried taking a backup of a DB file and made it permissions 0400 so that it could not be accidentally modified later. I wanted to run "bolt check" on it to be certain it was a good backup, but I couldn't because of the permissions. I changed it to u+w, and then check worked.
https://github.com/coreos/bbolt/blob/af9db2027c98c61ecd8e17caa5bd265792b9b9a2/cmd/bolt/main.go#L202
The check command should pass Open a bbolt.Options with ReadOnly == true; then checking read-only files would work.
Is it possible to limit the maximum db size?
I created bolt databases under Windows/Linux/Mac and inserted 100000 and 1000 records into each. Then I got the results below:
| file | records | created under | file size | md5sum |
|---|---|---|---|---|
| win_1000.db | 1000 | Windows | 262144 | 05d62c9653360e15d35625759947e8ea |
| linux_1000.db | 1000 | Linux | 262144 | 05d62c9653360e15d35625759947e8ea |
| mac_1000.db | 1000 | Mac | 262144 | 05d62c9653360e15d35625759947e8ea |
| win_100000.db | 100000 | Windows | 33554432 | aa11783743dc42f44778ffbcbc4b0ba9 |
| linux_100000.db | 100000 | Linux | 34795520 | b7cc6445179f70911d27f682903632d9 |
| mac_100000.db | 100000 | Mac | 34795520 | b7cc6445179f70911d27f682903632d9 |
Things I found:
win_100000.db differs from the Linux/Mac files (see the table above), and opening it fails with: CreateFileMapping: Not enough storage is available to process this command.
Could the 2nd and 4th points above be fixed?
Below is the code for test:
// file: test-bolt.go
package main

import (
	"crypto/sha512"
	"flag"
	"fmt"
	"io"
	"os"

	"github.com/coreos/bbolt"
)

func fileExist(filename string) bool {
	_, err := os.Stat(filename)
	if err == nil {
		return true
	}
	if os.IsNotExist(err) {
		return false
	}
	return true
}

func genData(n int) []byte {
	h := sha512.New()
	io.WriteString(h, fmt.Sprintf("%d", n))
	return h.Sum(nil)
}

func create(filename string, n int) (err error) {
	if fileExist(filename) {
		err = fmt.Errorf("%s already exists, please use another file name", filename)
		return
	}
	db, err := bolt.Open(filename, 0600, nil)
	if err != nil {
		return
	}
	defer db.Close()
	fmt.Fprintf(os.Stderr, "add %d records to %s:\n", n, filename)
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucket([]byte("data"))
		if err != nil {
			return err
		}
		var i int
		for i = 1; i <= n; i++ {
			err = b.Put([]byte(fmt.Sprintf("%d", i)), genData(i))
			if err != nil {
				return err
			}
			if i%2000 == 0 {
				fmt.Printf("%d records added\n", i)
			}
		}
		i--
		if i%2000 != 0 {
			fmt.Printf("%d records added\n", i)
		}
		return nil
	})
	if err == nil {
		fmt.Println("done")
	}
	return
}

func check(filename string) (err error) {
	if !fileExist(filename) {
		err = fmt.Errorf("%s does not exist", filename)
		return
	}
	db, err := bolt.Open(filename, 0600, &bolt.Options{ReadOnly: true})
	if err != nil {
		return
	}
	defer db.Close()
	err = db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("data"))
		if b == nil {
			return fmt.Errorf(`bucket "data" not found`)
		}
		fmt.Printf("%s has %d records.\n", filename, b.Stats().KeyN)
		return nil
	})
	return
}

const usage = `test-bolt
Usage:
  test-bolt -m create -n <integer> <filename>
  test-bolt -m check <filename>
Options:
  -m <create|check>  create or check bolt database
  -n <integer>       number of records to insert into the database
  -h                 help message`

func main() {
	var method, filename string
	var n int
	var help bool
	flag.StringVar(&method, "m", "", "create or check bolt database")
	flag.IntVar(&n, "n", 0, "number of records to insert into the database")
	flag.BoolVar(&help, "h", false, "help message")
	flag.Parse()
	if help {
		fmt.Println(usage)
		os.Exit(0)
	}
	if method != "create" && method != "check" {
		fmt.Fprintln(os.Stderr, `value of -m must be "create" or "check"`)
		os.Exit(1)
	}
	if method == "create" && n <= 0 {
		fmt.Fprintln(os.Stderr, "value of -n must be > 0")
		os.Exit(1)
	}
	args := flag.Args()
	if len(args) == 0 {
		fmt.Fprintln(os.Stderr, "please input database filename")
		os.Exit(1)
	}
	filename = args[0]
	if method == "create" {
		err := create(filename, n)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	} else {
		err := check(filename)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
Test commands:
Create database with 1000 records under Windows:
go run test-bolt.go -m create -n 1000 win_1000.db
Create database with 100000 records under Windows:
go run test-bolt.go -m create -n 100000 win_100000.db
Create database with 1000 records under Linux:
go run test-bolt.go -m create -n 1000 linux_1000.db
Create database with 100000 records under Linux:
go run test-bolt.go -m create -n 100000 linux_100000.db
Create database with 1000 records under Mac:
go run test-bolt.go -m create -n 1000 mac_1000.db
Create database with 100000 records under Mac:
go run test-bolt.go -m create -n 100000 mac_100000.db
Check database under Windows, Linux and Mac:
go run test-bolt.go -m check win_1000.db
go run test-bolt.go -m check win_100000.db
go run test-bolt.go -m check linux_1000.db
go run test-bolt.go -m check linux_100000.db
go run test-bolt.go -m check mac_1000.db
go run test-bolt.go -m check mac_100000.db
Bolt is backed by an mmap'd file, but there is no documentation on what this means for disk read/write errors. If the user's disk is faulty, standard I/O operations can return an error, but with mmap there is no function call that can return one. Instead, the OS delivers a signal (SIGSEGV or SIGBUS) to the process, and if the signal is not caught the program crashes. Users of https://github.com/NebulousLabs/Sia have reported this behavior.
Bolt should document this drawback. Specifically:
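For Go programs, runtime/debug.SetPanicOnFault offers a partial mitigation: a fault at a non-nil address becomes a recoverable panic instead of a fatal signal. A sketch of guarding a read from mmap'd memory (readPage is a hypothetical helper, not part of Bolt):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// readPage guards a read from memory that may be backed by an mmap'd
// file. With SetPanicOnFault enabled, a SIGSEGV/SIGBUS at a non-nil
// address surfaces as a recoverable panic rather than killing the
// process outright.
func readPage(buf []byte, off int) (b byte, err error) {
	old := debug.SetPanicOnFault(true)
	defer debug.SetPanicOnFault(old)
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("memory fault reading page: %v", r)
		}
	}()
	return buf[off], nil
}

func main() {
	b, err := readPage([]byte{1, 2, 3}, 1)
	fmt.Println(b, err) // 2 <nil>
}
```

This only converts the crash into an error at the call site; whether Bolt should adopt something like it internally is a design question for the maintainers.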
The fork has been announced on Reddit.
Are you at a point where you can put any meat on the bones for where bbolt is heading?
I'm excited about this, as I use boltdb for almost all of my projects.