augustnagro / magnum
A 'new look' for database access in Scala
License: Apache License 2.0
Hi. I have the following model:
@Table(PostgresDbType, SqlNameMapper.CamelToSnakeCase)
final case class Modules(
@Id id: Int,
title: String,
parentId: Int,
description: Option[String]
)
With the following values:
Modules(1, "Module title", 2, Some("Module Description"))
And the following table definition:
id INT
title VARCHAR(255)
description VARCHAR(255)
course_id INT
When I tried to add an entity using the insertReturning method, I received the error:
Incorrect value for int: Module Description
I checked the columns in my table definition and found that the problem is the ordering of the model's attributes: the case class fields must be declared in the same order as the database columns for inserts to work. Is this the intended behavior? IMHO, a developer shouldn't have to think about the table's column order when writing the DAO layer.
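For what it's worth, the mismatch can be reproduced without Magnum at all: a derived codec binds case-class fields positionally, so when the declaration order differs from the column order, values land under the wrong columns. A minimal dependency-free sketch (column names taken from the table definition above):

```scala
// Dependency-free sketch: positional binding of case-class fields.
final case class Modules(id: Int, title: String, parentId: Int, description: Option[String])

val columns = Vector("id", "title", "description", "course_id") // table order
val m = Modules(1, "Module title", 2, Some("Module Description"))

// Pair columns with fields positionally, as a derived codec would.
val bound = columns.zip(m.productIterator.toVector).toMap

// parentId (2) lands under "description", and the description string
// lands under the INT column "course_id" -> "Incorrect value for int".
println(bound)
```

Reordering the case class fields to match (id, title, description, course_id) makes the positional pairing line up; whether Magnum should instead bind by mapped column name is exactly the question here.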
For example, for SQL like "select * from table where id in (4,5,6,8)", I tried
sql"""select * from table where id in ($array) """
where the array parameter is
val array = conn.createArray.....
or
val array = List[Int].mkString(",")
but they all failed.
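One portable fallback (not Magnum-specific, just plain SQL building) is to generate one `?` placeholder per element and bind each value individually, instead of interpolating a comma-joined string, which binds the whole list as a single text parameter:

```scala
// Sketch: build an IN clause with one placeholder per element.
def inClause(column: String, n: Int): String =
  require(n > 0, "IN () is not valid SQL")
  column + " in (" + Vector.fill(n)("?").mkString(", ") + ")"

val ids = Vector(4, 5, 6, 8)
val sqlText = "select * from table_name where " + inClause("id", ids.size)
println(sqlText) // select * from table_name where id in (?, ?, ?, ?)
```

On Postgres, `where id = any(?)` with an array created via JDBC's `Connection.createArrayOf` is another common route for a dynamic list of ids.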
Hi @AugustNagro, thanks for this amazing library!
Is it possible to decode JSONB columns from Postgres directly? Or should I query it as string and then decode it separately?
In README.md, please add the following reference link where the "N+1 Query Problem" is mentioned. It's for those of us who haven't seen this particular problem described using this vernacular before:
https://stackoverflow.com/questions/97197/what-is-the-n1-selects-problem-in-orm-object-relational-mapping
Magnum findAll returns zero results from H2 when it should return one result.
The following project, https://github.com/objektwerks/magnum, reproduces this issue via sbt clean test. Read the test console output for details; the project code provides additional context.
I think it would be massively helpful to have a built-in way to extend DbCodec, so that one can provide an isomorphism between a type that already has a DbCodec instance (e.g. DbCodec[String]) and a type that doesn't have one. As always, code is better than words, so let me present the current workaround:
import java.sql.{PreparedStatement, ResultSet}

extension [E](codec: DbCodec[E])
  def bimap[E2](r: E => E2, w: E2 => E): DbCodec[E2] =
    new DbCodec[E2]:
      override def queryRepr: String = codec.queryRepr
      override def cols: IArray[Int] = codec.cols
      override def readSingle(resultSet: ResultSet, pos: Int): E2 =
        r(codec.readSingle(resultSet, pos))
      override def writeSingle(entity: E2, ps: PreparedStatement, pos: Int): Unit =
        codec.writeSingle(w(entity), ps, pos)
object opaques:
  opaque type TestOpaque = String
  object TestOpaque:
    def apply(value: String): TestOpaque = value
    extension (opaque: TestOpaque) def value: String = opaque

given DbCodec[opaques.TestOpaque] =
  summon[DbCodec[String]].bimap(opaques.TestOpaque(_), _.value)
The instance for TestOpaque can then be created simply by calling bimap on an existing instance and providing functions from String to TestOpaque and vice versa.
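To illustrate the shape of this combinator outside of JDBC, here is a dependency-free analogue (the Codec trait and all names below are invented for illustration; only the bimap structure mirrors the workaround above):

```scala
// Simplified, dependency-free sketch of the bimap pattern: given a
// codec for A and an isomorphism A <=> B, derive a codec for B.
trait Codec[A]:
  def read(s: String): A
  def write(a: A): String

extension [A](c: Codec[A])
  def bimap[B](r: A => B, w: B => A): Codec[B] = new Codec[B]:
    def read(s: String): B = r(c.read(s))
    def write(b: B): String = c.write(w(b))

val stringCodec: Codec[String] = new Codec[String]:
  def read(s: String): String = s
  def write(s: String): String = s

// A wrapper type with no codec of its own gets one via bimap.
final case class UserId(value: String)
val userIdCodec: Codec[UserId] = stringCodec.bimap(UserId(_), _.value)
```

The same two functions (wrap and unwrap) are all that a built-in DbCodec.bimap would need from the caller.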
Hi,
I'm looking into how one can leverage the work done by Magnum's reflection to help write future-proof queries in Repo subclasses. One thing that would be massively helpful would be if the selectable and insertable columns (eElemNamesSql and ecElemNamesSql in the RepoDefaults macros, respectively) were available on either RepoDefaults (one can always summon[RepoDefaults[...]]) or, even better, on ImmutableRepo (selectable columns) and Repo (insertable columns). I imagine you could return them in a custom collection type (a wrapper over Vector, maybe?) with an overridden toString() so that it formats them correctly, as in SqliteDbType:
val selectKeys = eElemNamesSql.mkString(", ")
val ecInsertKeys = ecElemNamesSql.mkString("(", ", ", ")")
You could expose that logic on the DbType subtypes and then use it by matching on the proper DbType in the macro, just as you do now in RepoDefaults, and passing it to the custom collection instance to format the columns on interpolation. This would allow users to filter out columns on demand while still getting the correct set of columns, driven by the structure of the actual Scala case classes rather than by flaky query strings.
What do you think? Would you accept a PR with such functionality?
Hi,
There's also an error in README.md: the insert family (not the insert*Returning family!) is documented with return type E, while in the codebase these methods return Unit.