
dat


dat (Data Access Toolkit) is a fast, lightweight Postgres library for Go.

  • Focused on Postgres. See Insect, Upsert, SelectDoc, QueryJSON

  • Built on a solid foundation, sqlx

    // child DB is *sqlx.DB
    DB.DB.Queryx(`SELECT * FROM users`)
  • SQL and backtick friendly

    DB.SQL(`SELECT * FROM people LIMIT 10`).QueryStructs(&people)
  • JSON Document retrieval (single trip to Postgres, requires Postgres 9.3+)

    DB.SelectDoc("id", "user_name", "avatar").
        Many("recent_comments", `SELECT id, title FROM comments WHERE id = users.id LIMIT 10`).
        Many("recent_posts", `SELECT id, title FROM posts WHERE author_id = users.id LIMIT 10`).
        One("account", `SELECT balance FROM accounts WHERE user_id = users.id`).
        From("users").
        Where("id = $1", 4).
        QueryStruct(&obj) // obj must be agreeable with json.Unmarshal()

    results in

    {
        "id": 4,
        "user_name": "mario",
        "avatar": "https://imgur.com/a23x.jpg",
        "recent_comments": [{"id": 1, "title": "..."}],
        "recent_posts": [{"id": 1, "title": "..."}],
        "account": {
            "balance": 42.00
        }
    }
  • JSON marshalable bytes (requires Postgres 9.3+)

    var b []byte
    b, _ = DB.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,
        "mario",
    ).QueryJSON()
    
    // straight into map
    var obj map[string]interface{}
    DB.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,
        "mario",
    ).QueryObject(&obj)
  • Ordinal placeholders

    DB.SQL(`SELECT * FROM people WHERE state = $1`, "CA").Exec()
  • SQL-like API

    err := DB.
        Select("id, user_name").
        From("users").
        Where("id = $1", id).
        QueryStruct(&user)
  • Redis caching

    // cache result for 30 seconds
    key := "user:" + strconv.Itoa(user.id)
    err := DB.
        Select("id, user_name").
        From("users").
        Where("id = $1", user.id).
        Cache(key, 30 * time.Second, false).
        QueryStruct(&user)
  • Nested transactions

  • Per-query timeouts with database cancellation logic (pg_cancel_backend)

  • SQL and slow query logging

  • Performant

    • ordinal placeholder logic is optimized to be nearly as fast as using ?
    • dat can interpolate queries locally, resulting in a performance increase over plain database/sql and sqlx (see Benchmarks)

Getting Started

Get it

dat.v1 uses the glide package dependency manager. Earlier builds relied on gopkg.in, which at the time was as good a solution as any. dat will move to dep once it is stable.

glide get gopkg.in/mgutz/dat.v1/sqlx-runner

Use it

import (
    "database/sql"
    "fmt"
    "time"

    _ "github.com/lib/pq"
    "gopkg.in/mgutz/dat.v1"
    "gopkg.in/mgutz/dat.v1/sqlx-runner"
)

// global database (pooling provided by SQL driver)
var DB *runner.DB

func init() {
    // create a normal database connection through database/sql
    db, err := sql.Open("postgres", "dbname=dat_test user=dat password=!test host=localhost sslmode=disable")
    if err != nil {
        panic(err)
    }

    // ensures the database can be pinged with an exponential backoff (15 min)
    runner.MustPing(db)

    // set to reasonable values for production
    db.SetMaxIdleConns(4)
    db.SetMaxOpenConns(16)

    // set this to enable interpolation
    dat.EnableInterpolation = true

    // set to check things like sessions closing.
    // Should be disabled in production/release builds.
    dat.Strict = false

    // Log any query over 10ms as warnings. (optional)
    runner.LogQueriesThreshold = 10 * time.Millisecond

    DB = runner.NewDB(db, "postgres")
}

type Post struct {
    ID        int64         `db:"id"`
    Title     string        `db:"title"`
    Body      string        `db:"body"`
    UserID    int64         `db:"user_id"`
    State     string        `db:"state"`
    UpdatedAt dat.NullTime  `db:"updated_at"`
    CreatedAt dat.NullTime  `db:"created_at"`
}

func main() {
    var post Post
    err := DB.
        Select("id, title").
        From("posts").
        Where("id = $1", 13).
        QueryStruct(&post)
    fmt.Println("Title", post.Title)
}

Feature highlights

Use Builders or SQL

Query Builder

var posts []*Post
err := DB.
    Select("title", "body").
    From("posts").
    Where("created_at > $1", someTime).
    OrderBy("id ASC").
    Limit(10).
    QueryStructs(&posts)

Plain SQL

err = DB.SQL(`
    SELECT title, body
    FROM posts WHERE created_at > $1
    ORDER BY id ASC LIMIT 10`,
    someTime,
).QueryStructs(&posts)

Note: dat does not trim the SQL string, thus any extra whitespace is transmitted to the database.

In practice, SQL is easier to write with backticks. Indeed, the reason this library exists is that most SQL builders introduce a DSL to insulate the user from SQL.

Query builders shine when dealing with data transfer objects (structs).

Fetch Data Simply

Query then scan result to struct(s)

var post Post
err := DB.
    Select("id, title, body").
    From("posts").
    Where("id = $1", id).
    QueryStruct(&post)

var posts []*Post
err = DB.
    Select("id, title, body").
    From("posts").
    Where("id > $1", 100).
    QueryStructs(&posts)

Query scalar values or a slice of values

var n int64
DB.SQL("SELECT count(*) FROM posts WHERE title=$1", title).QueryScalar(&n)

var ids []int64
DB.SQL("SELECT id FROM posts").QuerySlice(&ids)

Field Mapping

dat DOES NOT map fields automatically like sqlx. You must explicitly set db struct tags in your types.

Embedded fields are mapped breadth-first.

type Realm struct {
    RealmUUID string `db:"realm_uuid"`
}
type Group struct {
    GroupUUID string `db:"group_uuid"`
    *Realm
}

g := &Group{Realm: &Realm{"11"}, GroupUUID: "22"}

sql, args := InsertInto("groups").Columns("group_uuid", "realm_uuid").Record(g).ToSQL()
expected := `
    INSERT INTO groups ("group_uuid", "realm_uuid")
    VALUES ($1, $2)
	`

Blacklist and Whitelist

Control which columns get inserted or updated when processing external data

// userData came in from http.Handler, prevent them from setting protected fields
DB.InsertInto("payments").
    Blacklist("id", "updated_at", "created_at").
    Record(userData).
    Returning("id").
    QueryScalar(&userData.ID)

// ensure session user can only update his information
DB.Update("users").
    SetWhitelist(user, "user_name", "avatar", "quote").
    Where("id = $1", session.UserID).
    Exec()

IN queries

applicable when dat.EnableInterpolation == true

Simpler IN queries which expand correctly

ids := []int64{10,20,30,40,50}
b := DB.SQL("SELECT * FROM posts WHERE id IN $1", ids)
b.MustInterpolate() == "SELECT * FROM posts WHERE id IN (10,20,30,40,50)"
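The expansion itself can be sketched in a few lines of stdlib Go (a toy, not dat's interpolator: it handles only a single $1 placeholder and int64 values, with no quoting or escaping):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expandIn inlines an int64 slice into a single $1 placeholder, the way
// interpolation turns `IN $1` into `IN (10,20,30)`. Toy sketch only:
// no string quoting, no support for multiple placeholders.
func expandIn(sql string, ids []int64) string {
	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = strconv.FormatInt(id, 10)
	}
	return strings.Replace(sql, "$1", "("+strings.Join(parts, ",")+")", 1)
}

func main() {
	fmt.Println(expandIn("SELECT * FROM posts WHERE id IN $1", []int64{10, 20, 30, 40, 50}))
	// SELECT * FROM posts WHERE id IN (10,20,30,40,50)
}
```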

Tracing SQL

dat uses logxi for logging. By default, logxi logs all warnings and errors to the console. dat logs the SQL and its arguments on any error. In addition, dat logs slow queries as warnings if runner.LogQueriesThreshold > 0

To trace all SQL, set environment variable

LOGXI=dat* yourapp

CRUD

Create

Use Returning and QueryStruct to insert and update struct fields in one trip

var post Post

err := DB.
    InsertInto("posts").
    Columns("title", "state").
    Values("My Post", "draft").
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

Use Blacklist and Whitelist to control which record (input struct) fields are inserted.

post := Post{Title: "Go is awesome", State: "open"}
err := DB.
    InsertInto("posts").
    Blacklist("id", "user_id", "created_at", "updated_at").
    Record(&post).
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

// use wildcard to include all columns
err := DB.
    InsertInto("posts").
    Whitelist("*").
    Record(&post).
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

Insert Multiple Records

// create builder
b := DB.InsertInto("posts").Columns("title")

// add some new posts
for i := 0; i < 3; i++ {
    b.Record(&Post{Title: fmt.Sprintf("Article %d", i)})
}

// OR (this is more efficient as it does not do any reflection)
for i := 0; i < 3; i++ {
    b.Values(fmt.Sprintf("Article %d", i))
}

// execute statement
_, err := b.Exec()

Insert if not exists, or select the existing row, in one trip to the database

sql, args := DB.
    Insect("tab").
    Columns("b", "c").
    Values(1, 2).
    Where("d = $1", 3).
    Returning("id", "f", "g").
    ToSQL()

sql == `
WITH
    sel AS (SELECT id, f, g FROM tab WHERE (d = $1)),
    ins AS (
        INSERT INTO "tab"("b","c")
        SELECT $2,$3
        WHERE NOT EXISTS (SELECT 1 FROM sel)
        RETURNING "id","f","g"
    )
SELECT * FROM ins UNION ALL SELECT * FROM sel
`

Read

var other Post

err = DB.
    Select("id, title").
    From("posts").
    Where("id = $1", post.ID).
    QueryStruct(&other)

published := `
    WHERE user_id = $1
        AND state = 'published'
`

var posts []*Post
err = DB.
    Select("id, title").
    From("posts").
    Scope(published, 100).
    QueryStructs(&posts)

Update

Use Returning to fetch columns updated by triggers. For example, an update trigger on the "updated_at" column

err = DB.
    Update("posts").
    Set("title", "My New Title").
    Set("body", "markdown text here").
    Where("id = $1", post.ID).
    Returning("updated_at").
    QueryScalar(&post.UpdatedAt)

Upsert - Update or Insert

sql, args := DB.
    Upsert("tab").
    Columns("b", "c").
    Values(1, 2).
    Where("d=$1", 4).
    Returning("f", "g").
    ToSQL()

expected := `
WITH
    upd AS (
        UPDATE tab
        SET "b" = $1, "c" = $2
        WHERE (d=$3)
        RETURNING "f","g"
    ), ins AS (
        INSERT INTO "tab"("b","c")
        SELECT $1,$2
        WHERE NOT EXISTS (SELECT 1 FROM upd)
        RETURNING "f","g"
    )
SELECT * FROM ins UNION ALL SELECT * FROM upd
`

applicable when dat.EnableInterpolation == true

To reset columns to their default DDL value, use DEFAULT. For example, to reset payment_type

res, err := DB.
    Update("payments").
    Set("payment_type", dat.DEFAULT).
    Where("id = $1", 1).
    Exec()

Use SetBlacklist and SetWhitelist to control which fields are updated.

// create blacklists for each of your structs
blacklist := []string{"id", "created_at"}
p := paymentStructFromHandler

err := DB.
    Update("payments").
    SetBlacklist(p, blacklist...).
    Where("id = $1", p.ID).
    Exec()

Use a map of attributes

attrsMap := map[string]interface{}{"name": "Gopher", "language": "Go"}
result, err := DB.
    Update("developers").
    SetMap(attrsMap).
    Where("language = $1", "Ruby").
    Exec()

Delete

result, err = DB.
    DeleteFrom("posts").
    Where("id = $1", otherPost.ID).
    Exec()

Joins

Define JOINs in argument to From

err = DB.
    Select("u.*, p.*").
    From(`
        users u
        INNER JOIN posts p on (p.author_id = u.id)
    `).
    Where("p.state = 'published'").
    QueryStructs(&liveAuthors)

Scopes

Scopes predefine JOIN and WHERE conditions. Scopes may be used with DeleteFrom, Select and Update.

As an example, a "published" scope might define published posts by user.

publishedPosts := `
    INNER JOIN users u on (p.author_id = u.id)
    WHERE
        p.state = 'published' AND
        p.deleted_at IS NULL AND
        u.user_name = $1
`

unpublishedPosts := `
    INNER JOIN users u on (p.author_id = u.id)
    WHERE
        p.state != 'published' AND
        p.deleted_at IS NULL AND
        u.user_name = $1
`

err = DB.
    Select("p.*").                      // must qualify columns
    From("posts p").
    Scope(publishedPosts, "mgutz").
    QueryStructs(&posts)

Creating Connections

All queries are made in the context of a connection which is acquired from the underlying SQL driver's pool.

For one-off operations, use DB directly

err := DB.SQL(sql).QueryStruct(&post)

For multiple operations, create a Tx transaction. Either defer Tx.AutoCommit() or defer Tx.AutoRollback() MUST be called

func PostsIndex(rw http.ResponseWriter, r *http.Request) {
    tx, _ := DB.Begin()
    defer tx.AutoRollback()

    // Do queries with the session
    var post Post
    err := tx.Select("id, title").
        From("posts").
        Where("id = $1", post.ID).
        QueryStruct(&post)
    if err != nil {
        // `defer AutoRollback()` is used, no need to rollback on error
        r.WriteHeader(500)
        return
    }

    // do more queries with transaction ...

    // MUST commit or AutoRollback() will rollback
    tx.Commit()
}

DB and Tx implement runner.Connection interface to keep code DRY

func getUsers(conn runner.Connection) ([]*dto.Users, error) {
    sql := `
        SELECT *
        FROM users
    `
    var users []*dto.Users
    err := conn.SQL(sql).QueryStructs(&users)
    if err != nil {
        return nil, err
    }
    return users, nil
}

Nested Transactions

Nested transaction logic is as follows:

  • If Commit is called in a nested transaction, the operation results in no operation (NOOP). Only the top level Commit commits the transaction to the database.

  • If Rollback is called in a nested transaction, then the entire transaction is rolled back. Tx.IsRollbacked is set to true.

  • Either defer Tx.AutoCommit() or defer Tx.AutoRollback() MUST BE CALLED for each corresponding Begin. The internal state of nested transactions is tracked in these two methods.

func nested(conn runner.Connection) error {
    tx, err := conn.Begin()
    if err != nil {
        return err
    }
    defer tx.AutoRollback()

    _, err = tx.SQL(`INSERT INTO users (email) VALUES ($1)`, "[email protected]").Exec()
    if err != nil {
        return err
    }
    // prevents AutoRollback
    return tx.Commit()
}

func top() {
    tx, err := DB.Begin()
    if err != nil {
        logger.Fatal("Could not create transaction")
    }
    defer tx.AutoRollback()

    err = nested(tx)
    if err != nil {
        return
    }
    // top level commits the transaction
    tx.Commit()
}

Timeouts

A timeout may be set on any Query* or Exec with the Timeout method. When a timeout is set, the query runs in a separate goroutine; should a timeout occur, dat cancels the query via Postgres' pg_cancel_backend.

_, err := DB.SQL("SELECT pg_sleep(1)").Timeout(1 * time.Millisecond).Exec()
// err == dat.ErrTimedout

Dates

Use dat.NullTime type to properly handle nullable dates from JSON and Postgres.

Constants

applicable when dat.EnableInterpolation == true

dat provides often used constants in SQL statements

  • dat.DEFAULT - inserts DEFAULT
  • dat.NOW - inserts NOW()

Defining Constants

UnsafeStrings and constants will panic unless dat.EnableInterpolation == true

To define SQL constants, use UnsafeString

const CURRENT_TIMESTAMP = dat.UnsafeString("NOW()")
DB.SQL("UPDATE table SET updated_at = $1", CURRENT_TIMESTAMP)

UnsafeString is exactly that: UNSAFE. If you must use it, define a constant, and NEVER pass UnsafeString directly as an argument, as in the following

DB.SQL("UPDATE table SET updated_at = $1", dat.UnsafeString(someVar))

Primitive Values

Load scalar and slice values.

var id int64
var userID string
err := DB.
    Select("id", "user_id").From("posts").Limit(1).QueryScalar(&id, &userID)

var ids []int64
err = DB.Select("id").From("posts").QuerySlice(&ids)

Caching

dat implements caching backed by an in-memory or Redis store. The in-memory store is not recommended for production use. Because Redis stores string values, caching works with any struct or primitive type that marshals and unmarshals cleanly with the json package.

Time is especially problematic, as JavaScript, Postgres and Go use different time formats. Use the dat.NullTime type if you are getting "cannot parse time" errors.

Caching is performed before the database driver is invoked, lessening the workload on the database.

// key-value store (kvs) package
import "gopkg.in/mgutz/dat.v1/kvs"

func init() {
    // Redis: namespace is the prefix for keys and should be unique
    store, err := kvs.NewRedisStore("namespace:", ":6379", "passwordOrEmpty")

    // Or, in-memory store provided by [go-cache](https://github.com/pmylund/go-cache)
    cleanupInterval := 30 * time.Second
    store = kvs.NewMemoryStore(cleanupInterval)

    runner.SetCache(store)
}

// Cache states query for a year using key "namespace:states"
b, err := DB.
    SQL(`SELECT * FROM states`).
    Cache("states", 365 * 24 * time.Hour, false).
    QueryJSON()

// Without a key, the checksum of the query is used as the cache key.
// In this example, the interpolated SQL will contain the user_name
// (if EnableInterpolation is true), effectively caching each user.
//
// cacheID == checksum("SELECT * FROM users WHERE user_name='mario'")
b, err := DB.
    SQL(`SELECT * FROM users WHERE user_name = $1`, user).
    Cache("", 365 * 24 * time.Hour, false).
    QueryJSON()

// Prefer using known unique IDs to avoid the computation cost
// of the checksum key.
key = "user" + user.UserName
b, err := DB.
    SQL(`SELECT * FROM users WHERE user_name = $1`, user).
    Cache(key, 15 * time.Minute, false).
    QueryJSON()

// Set invalidate to true to force setting the key
statesUpdated := true
b, err := DB.
    SQL(`SELECT * FROM states`).
    Cache("states", 365 * 24 * time.Hour, statesUpdated).
    QueryJSON()

// Clears the entire cache
runner.Cache.FlushDB()

// Deletes a single key
runner.Cache.Del("fookey")
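Deriving a key from the query text can be sketched with hash/fnv (the use of an FNV checksum here follows the Cacher proposal quoted later in this page, but treat the exact algorithm as an assumption; dat's actual key derivation may differ):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// cacheKey derives a cache key from the interpolated SQL when no explicit
// key is supplied. Sketch only: assumes an FNV-1a 64-bit checksum rendered
// as hex, which is an illustrative choice, not dat's documented behavior.
func cacheKey(interpolatedSQL string) string {
	h := fnv.New64a()
	h.Write([]byte(interpolatedSQL))
	return fmt.Sprintf("%x", h.Sum64())
}

func main() {
	k1 := cacheKey(`SELECT * FROM users WHERE user_name='mario'`)
	k2 := cacheKey(`SELECT * FROM users WHERE user_name='luigi'`)
	fmt.Println(k1 != k2) // distinct interpolated SQL yields distinct keys
}
```

This is why, with interpolation enabled, each user's query is cached under its own key: the argument is inlined into the SQL before the checksum is taken.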

SQL Interpolation

Interpolation is DISABLED by default. Set dat.EnableInterpolation = true to enable.

dat can interpolate locally to inline query arguments. For example, this statement


db.Exec(
    "INSERT INTO tab (a, b, c, d) VALUES ($1, $2, $3, $4)",
    1, 2, 3, 4,
)

is sent to the database with inlined arguments, bypassing the prepared-statement logic in the lib/pq layer

"INSERT INTO tab (a, b, c, d) VALUES (1, 2, 3, 4)"

Interpolation provides these benefits:

  • Performance improvements
  • Debugging/tracing is simpler with interpolated SQL
  • May use safe SQL constants like dat.NOW and dat.DEFAULT
  • Expand placeholders with slice values $1 => (1, 2, 3)

Read SQL Interpolation in the wiki for more details, including notes on SQL injection.

LICENSE

The MIT License (MIT)

Contributors

georgeerickson, justintulloss, lgonzalez-silen, nathanleclaire, rafax


dat's Issues

Something change recently with dat.v1?

I'm getting the following error when pulling v1. Any suggestions welcome and solicited.

$ go get -u gopkg.in/mgutz/dat.v1
package gopkg.in/mgutz/dat.v1: no buildable Go source files in /Users/jmick200/Dropbox/JMickJrRoot/MyDev/dropbox_cc/repo_yggdrasil/src/gopkg.in/mgutz/dat.v1

Panic in HTTP handler?

When I include code like the following in an HTTP HandlerFunc (or its ilk), and an error occurs in the database layer (e.g. "pq: duplicate key value violates unique constraint "feeds_pkey""), the DB session doesn't close and dat panics after around 60s (killing the application):

    tx, err := svc.DB.Begin()
    if err != nil {
        http.Error(out, err.Error(), http.StatusInternalServerError)
        return
    }
    defer tx.AutoRollback()

        // "feed" was unmarshaled from a JSON document...
    if err := feed.Insert(tx); err != nil {
        http.Error(out, err.Error(), http.StatusBadRequest)
        return
    }

    if err := tx.Commit(); err != nil {
        http.Error(out, err.Error(), http.StatusInternalServerError)
        return
    }

        out.WriteHeader(http.StatusNoContent)

But if I move the transaction inside the functions, everything is fine when the error occurs (this doesn't panic):

        // "DB" is a *dat.DB
    if err := feed.Insert(DB); err != nil {
        http.Error(out, err.Error(), http.StatusBadRequest)
        return
    }

        out.WriteHeader(http.StatusNoContent)

Why isn't the "defer tx.AutoRollback()" firing and releasing the session when it's in an HTTP handler?

Either remove or expose logxi

Right now logxi is embedded inside dat and is not accessible or configurable, except through the command line. For applications that use alternate logging libraries or aren't 12-factor apps, this makes using dat problematic and often cumbersome. For example, I am apparently supposed to be able to redirect STDOUT to a log file if need be, but nothing gets written when I do that. I fear that if I can get it to work, the file will then fill with ANSI escape characters, making the log file unreadable. Furthermore, we would like to configure logging ourselves and use remote syslog, but this is impossible with dat, as logxi does not support this.

It would be far better if dat simply returned error messages, instead of performing its own logging.

testify issue

When I run godo test I get the following:

22:10 $ go test
--- FAIL: TestInsertRecordsToSql (0.00s)
        Error Trace:    insert_test.go:63
    Error:      Not equal: []interface {}{1, 88, false, 2, 99, true} (expected)
                    != []interface {}{1, 88, false, 2, 99, true} (actual)

--- FAIL: TestInsertWhitelist (0.00s)
        Error Trace:    insert_test.go:74
    Error:      Not equal: []interface {}{1, 88, false, 2, 99, true} (expected)
                    != []interface {}{1, 88, false, 2, 99, true} (actual)

--- FAIL: TestInsertBlacklist (0.00s)
        Error Trace:    insert_test.go:89
    Error:      Not equal: []interface {}{88, false, 99, true} (expected)
                    != []interface {}{88, false, 99, true} (actual)

--- FAIL: TestUpdateWhitelist (0.00s)
        Error Trace:    update_test.go:79
    Error:      Not equal: []interface {}{2, false} (expected)
                    != []interface {}{2, false} (actual)

--- FAIL: TestUpdateBlacklist (0.00s)
        Error Trace:    update_test.go:90
    Error:      Not equal: []interface {}{2, false} (expected)
                    != []interface {}{2, false} (actual)

FAIL
exit status 1
FAIL    github.com/mgutz/dat    0.011s

I vaguely remember having an issue with this a while back when testify added assert.EqualValues, what version of github.com/stretchr/testify are you using?

no buildable Go source files (logxi)

I'm using glide as recommended, but the issue still occurs

vendor/gopkg.in/mgutz/dat.v1/init.go:7:2: no buildable Go source files in /Users/green/golang/src/github.com/mgutz/logxi

select exists

It would be nice to be able to do a SELECT EXISTS query without using string formatting

var trueFalse bool
sql := fmt.Sprintf("SELECT EXISTS(SELECT 1 FROM orders WHERE user_id = %v)", userId)
err := db.SQL(sql).
QueryScalar(&trueFalse)

Embedded structs do not work for insert

type Realm struct  {
    RealmUUID types.UUID `db:"realm_uuid"`
}
type Group struct {
    GroupUUID types.UUID `db:"group_uuid"`
    *Realm
}
...
_, err = conn.InsertInto("groups").Columns("group_uuid", "realm_uuid").Record(g).Exec()

---
panic: could not map db column "realm_uuid" to struct field (use struct tags) [recovered]
    panic: could not map db column "realm_uuid" to struct field (use struct tags)

...
_, err = conn.InsertInto("groups").Whitelist("*").Record(g).Exec()

---
2015/05/07 05:28:53 <model.Group Value> must have db struct tags for all fields: `db:""`

Where I'm wrong?

Update().SetBlacklist() doesn't understand embedded structs

When using Update(...).SetBlacklist(...), the reflection methods fail to traverse into embedded structs with the error message:

<xxx Value> must have db struct tags for all fields: db:""

It seems there is a use of reflectx throughout the dat package which understands struct traversal, but the update methods use their own reflection.

For example:

type Model struct {
    ID        string    `json:"id" db:"id"`
    CreatedAt time.Time `json:"createdAt" db:"created_at"`
    UpdatedAt time.Time `json:"updatedAt" db:"updated_at"`
}

type Customer struct {
    Model
    First              string         `json:"first" db:"first"`
    Last               string         `json:"last" db:"last"`
}

customer := Customer{}
err := tx.
    Update(TableCustomers).
    SetBlacklist(customer, "id", "created_at", "updated_at").
    Where("id = $1", customer.ID).
    Returning("updated_at").
    QueryScalar(&customer.UpdatedAt)

Not only that, but it fails with a panic which takes everything down with it.

Querying nested JSON with SelectDoc

First of all, thank you for this great library.

But I have encountered one problem. Dat cannot populate nested json with single trip to Postgres.

I have some nested structs (simplified):

type Item struct {
    ID          int             `db:"id" json:"id"`
    Name        string          `db:"name" json:"name"`
    CatalogPart ItemCatalogPart `json:"catalog_part"`
}

type ItemCatalogPart struct {
    Description string      `db:"description" json:"description"`
    Fields      []ItemField     `json:"fields"`
}

type ItemField struct {
    ID           int    `db:"id" json:"id"`
    Value        string `db:"value" json:"value"`
}

And I am trying to query Item with SelectDoc:

func (repo *DBItemRepo) FindByID(id int) (item domain.Item) {
    repo.dbHandler.
        SelectDoc("id", "name").
        One("catalog_part", "SELECT description FROM items AS i WHERE i.id = items.id").
        Many("fields", "SELECT id, value FROM item_fields WHERE item_id = items.id ORDER BY id").
        From("items").
        Where("id = $1", id).
        QueryStruct(&item)
    return
}

Of course I get nil. How to do it with one trip to DB?
Thank you.

Add "ForUpdate" to SelectBuilder?

It would be nice to have a ForUpdate function in the SelectBuilder for working with transactions and row-level locking.

Or is there another approach I'm missing?

IN queries and MustInterpolate

I'm using v2 version of dat and MustInterpolate is mentioned only in documentation, I can't find it anywhere in the code. And without it queries end up with $brj$ spliced in.

What's correct way of doing IN queries and what project branch is most actively developed at the moment?

Serializing custom types

I have a struct like this

type JSON map[string]interface{}
type Data struct {
    ID   string `db:"id" json:"id,omitempty"`
    meta JSON   `db:"meta" json:"meta,omitempty"`
}

But when I try to insert it, it gives this error: sql: converting Exec argument #7's type: unsupported type common.JSONO, a map
Is there some way to overcome this, because I want meta to be serialized to a string and then stored?

SelectDoc and QueryJSON

SelectDoc combined with QueryJSON includes row_to_json in each object

Example

Given the following schema

-- SQL Setup
CREATE TABLE product_prototypes (
  id bigserial PRIMARY KEY,
  name text NOT NULL
);

INSERT INTO product_prototypes (name) VALUES ('Beer'), ('Wine'), ('Liquor'), ('Extras');

Running this code.

package main

import "gopkg.in/mgutz/dat.v1/sqlx-runner"

func main() {
    db := runner.NewDBFromString("postgres", "postgres://localhost/dat_test?sslmode=disable")

    b, err := db.SelectDoc("id, name").From("product_prototypes").QueryJSON()
    if err != nil {
        panic(err)
    }

    println(string(b))
}
// Actual Output
[
  {"row_to_json":{"id":1,"name":"Beer"}},
  {"row_to_json":{"id":2,"name":"Wine"}},
  {"row_to_json":{"id":3,"name":"Liquor"}},
  {"row_to_json":{"id":4,"name":"Extras"}}
]
// Expected Output
[
  {"id":1,"name":"Beer"},
  {"id":2,"name":"Wine"},
  {"id":3,"name":"Liquor"},
  {"id":4,"name":"Extras"}
]

How do I use custom types or arrays ?

I have implemented a custom type the worked fine with sqlx, but I cannot get it to work here.

The error i'm getting is
panic: Could not find struct tag in type PerformerEntity: db:"aliases"`

The mapping

type PerformerEntity struct {
    Id            uint8 `json:"id"`
    ExternalId    uint8 `db:"external_id"json:"externalId"form:"uint8"`

    OriginService string `db:"origin_service"json:"originService"`
    OriginSection *string `db:"origin_section"json:"originSection"`

    StageName     string `db:"stage_name"json:"stageName"`
    Aliases       PGStringArray `json:"aliases"`

    UpdatedAt     time.Time `db:"updated_at"json:"updatedAt"`
    CreatedAt     time.Time `db:"created_at"json:"createdAt"`
}

Add Redis Caching

Implement caching using Redis.

type Cacher interface {
   // Cache caches the result of a Select or SelectDoc. If id is not provided, an FNV checksum
   // of the SQL is used as the id. (If interpolation is set, arguments are hashed). Use invalidate to 
   // immediately invalidate the cache to force setting its value.
    Cache(id string, duration time.Duration, invalidate bool)
}

// cache pre-defined game information for a week
DB.SelectDoc("*").
     From("games").
     Cache("games", 7 * 24 * time.Hour, false).
     QueryStruct(&games)

// cache user information for 30 seconds
DB.SelectDoc("*").
     From("users").
     Where("user_id = $1", id).
     Cache("", 30 * time.Second, false).
     QueryStruct(&user)

// invalidates cache for key "games"
DB.InvalidateCache("games")

// invalidates all 
DB.InvalidateCache("")

Referential Integrity errors log goroutines dying

Whenever I get a referential integrity error in dat (a FK violated) I get the following log:

SSSSSS19:57:49.958360 ERR dat:sqlx queryStruct.3
   err: pq: insert or update on table "offers" violates foreign key constraint "offers_game_id_fkey"
   sql: WITH upd AS ( UPDATE "offers" SET "game_id" = $1, "offer_template_id" = $2, "player_id" = $3 WHERE (id=$4) RETURNING "id","created_at","updated_at","claimed_at"), ins AS ( INSERT INTO "offers"("game_id","offer_template_id","player_id") SELECT $1,$2,$3 WHERE NOT EXISTS (SELECT 1 FROM upd) RETURNING "id","created_at","updated_at","claimed_at") SELECT * FROM ins UNION ALL SELECT * FROM upd
   args: $1=invalid-game $2=e6683585-2fb6-4dd9-8069-f4014a16638e $3=player-3 $4=00000000-0000-0000-0000-000000000000
goroutine 22 [running]:
runtime/debug.Stack(0x10, 0xc42018cc20, 0xc0180)
	/usr/local/Cellar/go/1.7.4_2/libexec/src/runtime/debug/stack.go:24 +0x79
github.com/topfreegames/offers/vendor/github.com/mgutz/logxi/v1.(*HappyDevFormatter).getLevelContext(0xc42008fc60, 0x3, 0xc420329a40, 0x0, 0x0, 0x0, 0x0, 0xc420078c78, 0x5)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner/exec.go:77 +0x33c
github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner.(*Execer).queryStructFn(0xc420366000, 0x489060, 0xc42033ad20, 0x0, 0x0)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner/exec.go:395 +0x235
github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner.(*Execer).queryStruct(0xc420366000, 0x489060, 0xc42033ad20, 0xc42033ae01, 0xc4203270c0)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner/exec.go:355 +0x63
github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner.(*Execer).QueryStruct(0xc420366000, 0x489060, 0xc42033ad20, 0x4, 0xc42033adc0)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/mgutz/dat/sqlx-runner/execer.go:133 +0x97
github.com/topfreegames/offers/models.UpsertOffer.func1(0x594979, 0x594978)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/models/offer.go:62 +0x267
github.com/topfreegames/offers/models.(*MixedMetricsReporter).WithDatastoreSegment(0x0, 0x54e514, 0x6, 0x54e8e6, 0x6, 0xc42018d4c8, 0x0, 0x0)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/models/mixed_metrics_reporter.go:40 +0x183
github.com/topfreegames/offers/models.UpsertOffer(0x7b3900, 0xc420332240, 0xc42033ad20, 0x0, 0xc42004f588, 0x2afe3)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/models/offer.go:63 +0xbc
github.com/topfreegames/offers/models_test.glob..func4.3.3()
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/models/offer_test.go:135 +0xd1
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420258b40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:104 +0x8d
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc420258b40, 0xc42004f7b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0xe1
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc420290720, 0x7aa5c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:25 +0x64
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc42009cfc0, 0x0, 0x7aa5c0, 0xc420142c90)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:167 +0x4fd
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc42009cfc0, 0x7aa5c0, 0xc420142c90)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:118 +0xbe
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc42025b0e0, 0x597001)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:144 +0x262
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc42025b0e0, 0x19)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:61 +0xf0
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc420073090, 0x96f828, 0xc42009ca80, 0x55aaf9, 0x19, 0xc4202f6040, 0x1, 0x1, 0x7b0980, 0xc420142c90, ...)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:59 +0x23b
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7ab300, 0xc42009ca80, 0x55aaf9, 0x19, 0xc42018df00, 0x1, 0x1, 0x0)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0x27b
github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo.RunSpecs(0x7ab300, 0xc42009ca80, 0x55aaf9, 0x19, 0xc43476ccb6)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:188 +0x195
github.com/topfreegames/offers/models_test.TestApi(0xc42009ca80)
	/Users/heynemann/Dropbox/dev/go/src/github.com/topfreegames/offers/models/models_suite_test.go:24 +0x64
testing.tRunner(0xc42009ca80, 0x596db8)
	/usr/local/Cellar/go/1.7.4_2/libexec/src/testing/testing.go:610 +0x81
created by testing.(*T).Run
	/usr/local/Cellar/go/1.7.4_2/libexec/src/testing/testing.go:646 +0x2ec

Any ideas on what's going on? It makes me uneasy to see this whenever a referential integrity error happens.

New v1 tag

The latest tag on the v1 branch was created in January, and work has been done since then that can't be used, because gopkg.in locks the package at the latest known tag.

This prevents me from being able to access the Interpolate func in the runner package for instance.

undefined: runner.LogQueriesThreshold

Hi,
There are two problems in the README's Getting Started section:

  • UpdatedAt dat.Nulltime `db:"updated_at"` should be UpdatedAt dat.NullTime `db:"updated_at"` (the "t" in NullTime should be uppercase).
  • When I try to run the demo code in Getting Started, I get the error: undefined: runner.LogQueriesThreshold

env: Mac OS X 10.10, Go 1.4.2

Why the custom interpolation logic for `dat.NullTime`?

I was trying to figure out why dat.NullTime instances were being inserted as the 0 value instead of NULL when EnableInterpolation is true. It turns out it's because interpolating prefers the Interpolator interface to the Valuer interface, and dat.NullTime implements this here: https://github.com/mgutz/dat/blob/v1/types.go#L264.

Besides the fact that it should definitely return NULL when the time is not valid, I'm not sure why it implements the Interpolator interface in the first place. Is the default serialization of time.Time not sufficient? If you remove that function, NULLs work correctly and times still get serialized, but I'm unclear whether they're serialized in the same manner or not.

Glide requirement causing major issues

This is more a feature request than an issue, but I wish you'd remove the requirement for Glide. You should leave the vendoring up to the application utilizing your package, since vendoring isn't "inherited".

You've utilized Glide-specific functionality in your code (subpackages), which means if our application doesn't use Glide (it doesn't), we have to take extra manual steps which aren't easily reproducible locally for development and updates or remotely during continuous integration, etc.

Returning("*") error

When performing an Update or Insert with Returning("*"), the error pq: column "*" does not exist is thrown.

godo test fails without hstore

jack@hk-2~/dev/go/src/github.com/mgutz/dat$ godo test
PASS
ok      github.com/mgutz/dat    0.010s
2015/02/25 07:53:54 Failed to execute statement:
                CREATE TABLE people (
                        id SERIAL PRIMARY KEY,
                        amount money,
                        doc hstore,
                        email text,
                        foo text default 'bar',
                        image bytea,
                        key text,
                        name text NOT NULL,
                        created_at timestamptz default now()
                )
          Got error:  pq: type "hstore" does not exist
exit status 1
FAIL    github.com/mgutz/dat/sql-runner 0.031s
2015/02/25 07:53:55 Failed to execute statement:
                CREATE TABLE people (
                        id SERIAL PRIMARY KEY,
                        amount money,
                        doc hstore,
                        email text,
                        foo text default 'bar',
                        image bytea,
                        key text,
                        name text NOT NULL,
                        created_at timestamptz default now()
                )
          Got error:  pq: type "hstore" does not exist
exit status 1
FAIL    github.com/mgutz/dat/sqlx-runner        0.015s
test 3098ms

Perhaps godo createdb should also run CREATE EXTENSION hstore.

Doesn't support schemas

When using schemas, table names are quoted incorrectly by dat in the builders. For example:

db.Update("hello.world").Set("name", "John Doe").Where("id = $1", 23).Exec()

Generates a query that looks like:

update "hello.world" set "name" = 'John Doe' where "id" = 23

It should be:

update "hello"."world" set "name" = 'John Doe' where "id" = 23

Support for scoped queries

The query syntax is pretty nice, but it would be nice not to have to write every expression as a string, specifically for Where().

What do you think of this form:

err = sess.Select("*").From("posts").Where(dat.Cond{"id >", 100}).QueryStructs(&posts))

dat.Cond is defined as:

type Cond map[string]interface{}

and the cool thing is you can easily have cond := dat.Cond{} and then do cond["field"] = "val"

and then build the condition string as you build the entire query.

if .Where() accepted an ...interface{} then both forms could easily be supported. Or leave .Where() and add .Scope() or .WhereCond(), ...

also, is Select("*") implied?

logxi

Hello,

after yesterday's logxi commit, we can no longer build our application.

It says:
gopkg.in\mgutz\dat.v1\init.go:7:2: no buildable Go source files in .....\github.com\mgutz\logxi

We tried deleting all packages and installing them again, updating, etc. Same error every time.

Could you check it out please?

Thank you

What's the state of v2?

First of all, thanks for the logxi library! Been using it for quite some time now.

Dat looks very promising since I'm already using both logxi and sqlx, but what's the state of v2? Should I start using it right away or should I wait?

Best regards, Christoffer

No buildable Go source files

Probably a noob question, but I'm getting this when trying to get started with this package:

vendor/gopkg.in/mgutz/dat.v1/init.go:7:2: no buildable Go source files in /Users/trustmaster/go/src/github.com/mgutz/logxi

Seems like logxi needs a versioned import there.

Use dependency management solution

With vendor/ on by default in Go 1.6, it would be really convenient if dat used it for its dependencies. We use Glide for our dependency management, which will trace our dependencies (including dat) if they ship a vendor folder or use Glide, godep, GB, or GPM.

Why not have interface slice type for QueryStructs?

@mgutz

Hi, thanks for the excellent lib. I was wondering if there's a reason why Execer -> QueryStructs accepts interface{} instead of []interface{}. It should be pretty safe to assume that a slice will always be passed to this method, and I originally mistyped the call as QueryStruct, leading to a quite confusing error at runtime. If we could detect this kind of mistake at compile time, it would be much better.

Disabling logger makes errors become nil, and generates nil pointer dereferences

We had to disable error logging coming from dat, especially since some queries are extremely long and we don't want to print them when a query crashes. Doing so started generating nil pointer dereferences in sqlx-runner/execer.go when accessing res.RowsAffected, meaning res is nil.
Doing some digging, it turns out the exec function returns nil for both values even when an error should be returned. In fact, logSQLError now seems to return nil even when its err parameter isn't.

Our environment variable is set to LOGXI=dat*=OFF.

I'm not sure if this is an intended behavior of the logger, but it is generating some extremely bad side-effects on our production environment.

pg dat.NullTime still causes 'cannot parse time errors' (Different format for timestamps without timezone)

dat_test=# create table test (time timestamp without time zone NOT NULL DEFAULT (now() AT TIME ZONE 'utc'), name TEXT NOT NULL);
CREATE TABLE
dat_test=# insert into test (name) VALUES('Hello');
INSERT 0 1
dat_test=# select row_to_json(fields.*) FROM (SELECT * FROM test) as fields;
                     row_to_json                      
------------------------------------------------------
 {"time":"2016-03-02T10:30:26.256081","name":"Hello"}
(1 row)

It seems that Postgres, at least in version 9.5, reports columns of type timestamp without time zone in the above format.

When "2006-01-02T15:04:05.000000" is added to the supported formats in types.go, it works.

Edit: Additional note: this happens only when using SelectDoc, of course, but I think that's clear to you from the row_to_json.

Is SQL injection possible?

If I run a query like the one below, is SQL injection possible?

var json []byte
json, _ = con.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,
    "mario",
).QueryJSON()

If yes, how do I modify this query to prevent SQL injection?

Thanks for this package. It has saved me a lot of boilerplate code.

MustPing with timeout

Hey @mgutz do you think you could make MustPing receive an optional parameter (with ...) to specify a timeout?

The issue I'm trying to avoid is that upon launching my servers, if there's no connectivity to a database, they'll just stay there trying indefinitely to connect, silently. The behavior I would expect of a well-behaved server is to fail with the proper error ("Could not connect to database at somethingsomething:3845" or something like that). That would trigger our error handling infrastructure and we'd be notified.

Do you think this is something you'd like to support? I can contribute with a PR if you'd like.

maxLookup index out of range on update

Hi,

Upon adding a new column to a table that was already large, I started experiencing an index out of range panic at

https://github.com/mgutz/dat/blob/v1/update.go#L178

It seems the condition that triggers the hand-off between preallocated parameter conversions and ad hoc conversions may be off by one. If I change i < maxLookup to i < maxLookup - 1, everything works fine, but I'm not sure that's right, as the last element of equalsPlaceholderTab would then never be used. Also, is it right that equalsPlaceholderTab starts with $0?

I wrote a little snippet to test for various values of maxLookup (https://github.com/mgutz/dat/blob/v1/init.go#L19):

type TestStruct struct {
	ID     int64  `db:"id"`
	Field1 string `db:"field1"`
	Field2 string `db:"field2"`
}

func TestBreak() {
	DB.SQL(`CREATE TABLE testtable (id serial, field1 text, field2 text);`).Exec()
	DB.SQL(`INSERT INTO testtable (field1, field2) VALUES ('a', 'b');`).Exec()

	record := TestStruct{}
	_, err := DB.
		Update("testtable").
		SetWhitelist(record, "*").
		Where("id = $1", 1).
		Exec()
	if err != nil {
		panic(err)
	}

	DB.SQL(`DROP TABLE testtable;`).Exec()
}

This breaks for maxLookup <= 3 but works for larger values.

Cheers,

Luis

SelectDoc with no results returns database error

DB

-- SQL Setup
CREATE TABLE product_prototypes (
  id bigserial PRIMARY KEY,
  name text NOT NULL
);

Running (select with no results)

package main

import "gopkg.in/mgutz/dat.v1/sqlx-runner"

func main() {
    db := runner.NewDBFromString("postgres", "postgres://localhost/dat_test?sslmode=disable")

    b, err := db.SelectDoc("id, name").From("product_prototypes").QueryJSON()
    if err != nil {
        panic(err)
    }

    println(string(b))
}

returns a DB error:
ErrNoRows == errors.New("sql: no rows in result set")
http://golang.org/pkg/database/sql/

Is this intentional? I would think you'd want to skip the scan when the result set is empty.

dat.Null* helpers

Writing things like dat.NullString{sql.NullString{"foo", true}} is a tad unwieldy; helpers like dat.NullStringFrom("foo") would be awesome (inspiration from https://github.com/guregu/null).

I can have a PR for this in a few days if you think it's a good idea.

Avoid use of panic

I have some tests that verify my application behaves properly when there are database issues. I found that on head these tests not only fail but end the test run, due to a call to logger.Fatal:

logger.Fatal("Could not query Postgres version")

which eventually calls panic in logxi.

It's recommended to avoid exposing panics to consumers of a package and instead return an error so the consumer can choose how to deal with it: https://github.com/golang/go/wiki/PanicAndRecover#usage-in-a-package

On a related note, I noticed there are also some uses of log.Fatalf in sqlx-runner/db.go and sqlx-runner/tx.go. These call os.Exit(1) and should be avoided for similar reasons.

Remove dat.ErrNotFound

Half the query functions in dat use the standard "sql.ErrNoRows" and half use "dat.ErrNotFound". Could you please just standardize on "sql.ErrNoRows" and drop "dat.ErrNotFound", so we can stop having to guess which is which or testing for both all the time?

unsupported type []interface {}, a slice

Hi, I have an issue with

// EnableInterpolation = true
data := []interface{}{1,"hello", true} // some data
db.Exec("INSERT INTO table (a, b, c) VALUES ($1, $2, $3)", data)
=> err: sql: converting Exec argument #0's type: unsupported type []interface {}, a slice

using driver github.com/jackc/pgx/stdlib, package gopkg.in/mgutz/dat.v2/dat
Maybe someone has had this error and can help me. It would also be cool if .Values() could handle a slice.

Outdated test instructions

I would have submitted #5 with a test case, except the "github.com/mgutz/godo/v2" repo is private so I cannot run the tests.

Panic in Exec() method

I use custom exception in my db (postgresql) like this

get diagnostics n = row_count;

if n = 0 then raise exception 'banjo_operations_delete(): operation [%] does not exist', id; end if;

and this is my backend call

if _, err := rcv.db.SQL(sql, id, c.Name, c.Title).Exec(); err != nil { return err }

and this raises a panic. Can you tell me what the problem is?
