
kysely's Introduction

Stand With Ukraine


Kysely (pronounced “Key-Seh-Lee”) is a type-safe and autocompletion-friendly TypeScript SQL query builder, inspired by Knex.js. It is mainly developed for Node.js but also runs on other JavaScript environments like Deno, Bun, Cloudflare Workers and web browsers.

Kysely makes sure you only refer to tables and columns that are visible to the part of the query you're writing. The result type only has the selected columns with correct types and aliases. As an added bonus you get autocompletion for all that stuff.

Through the pure magic of modern TypeScript, Kysely is even able to parse the alias given to pet.name and add the pet_name column to the result row type. Kysely can infer column names, aliases and types from selected subqueries, joined subqueries, with statements and pretty much anything you can think of.
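For example (a minimal sketch adapted from the queries further down this page; the Database interface here is an assumption):

import { Kysely } from 'kysely'

interface Database {
  person: { id: number; first_name: string }
  pet: { owner_id: number; name: string }
}

declare const db: Kysely<Database>

const person = await db
  .selectFrom('person')
  .innerJoin('pet', 'pet.owner_id', 'person.id')
  .select(['first_name', 'pet.name as pet_name'])
  .where('person.id', '=', 1)
  .executeTakeFirst()

// person is typed as { first_name: string; pet_name: string } | undefined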

Of course there are cases where things cannot be typed at compile time, and Kysely offers escape hatches for these situations. See the sql template tag and the DynamicModule for more info.

All API documentation is written in the typing files, so you can simply hover over the module, class or method you're using to see it in your IDE. The same documentation is also hosted here.

If you start using Kysely and can't find something you'd want to use, please open an issue or join our Discord server.

Getting started

Please visit our documentation site kysely.dev to get started. We also have comprehensive API documentation hosted here, but you can access the same documentation in your IDE by hovering over a class/method/property.

Contributors


Want to contribute? Check out our contribution guidelines.


kysely's Issues

AWS Data API

I work on ServerlessStack and I think this is the missing piece to make working with serverless relational databases a breeze. Let me explain a bit:

Prisma has gotten very popular and comes up a lot in our community. Typed query builders are definitely super helpful. However, their architecture involves shipping a Rust binary that does the connection management, and the Node library talks directly to it.

This is a pretty crappy model for serverless, since each function will be running that binary and you run into all kinds of connection pool issues. Prisma is launching their own proxy service that you can talk to over HTTP so the serverless functions don't have to spin up the binary themselves. But this is another service to manage/pay for that adds latency.

AWS has its own version of this called the Data API, built into their relational DB offering. Prisma will not integrate with this as it runs counter to their strategy and architecture. I'm planning on adding Data API support to this library, which means we'll finally have a nice typed client for AWS RDS without worrying about connection management.

Creating this issue so I can track my work.

Referentially transparent API

An expression is called referentially transparent if it can be replaced with its corresponding value (and vice-versa) without changing the program's behavior.

It would simplify Kysely usage and unlock further optimizations. And I believe it's a small change.
Take a look at how I'm using Kysely in React:

const { data, refresh } = useThingsDB(
  useCallback((db) => db.selectFrom('thing').selectAll().execute(), []),
);

useThingsDB is a React Hook (composed from other hooks, but that's irrelevant now) which takes a Kysely query as a parameter. Note the useCallback usage: it keeps the callback stable, because it must be stable.

What I am proposing is a toString overload that would serialize a Kysely query to a string, and a fromString that would deserialize it back. With such an API it would be easy to cache Kysely queries, among other things.

const { data, refresh } = useThingsDB((db) =>
  db.selectFrom('thing').selectAll(),
);
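For comparison, a reasonably stable cache key can already be approximated today via compile() (a sketch reusing the db from my example; the sql/parameters property names are my assumption about the compiled query's shape):

// Derive a cache key from the compiled query instead of relying on callback identity.
const compiled = db.selectFrom('thing').selectAll().compile()
const cacheKey = JSON.stringify({ sql: compiled.sql, parameters: compiled.parameters })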

What do you think?

Temporary Tables

Hello! I was curious how you'd advise implementing temporary tables with this package, or if there's support for it.

Add .ref() method to expressionbuilder

As previously discussed.

A .ref("column") method on the expression builder would allow us to use references to other columns in updates, on conflict do update clauses, and other areas where the expression builder is used, in a type-safe manner.

Examples:

db.insertInto('person')
  .values(person)
  .onConflict((oc) => oc
    .column('id')
    .doUpdateSet({
      name: (eb) => eb.ref('excluded.name')
    })
  )
db.updateTable('pet')
  .innerJoin('person', 'person.id', 'pet.owner')
  .set({
    ownerName: (eb) => eb.ref('person.name')
  });

Edit: Forgot to mention, the other benefit of this over .raw() is that we can pass in the column names we're using elsewhere, instead of having to remember to snake_case them first (if we're using the camelcase plugin).

How to filter is not null?

I have tried the following

.where('jtdPhoneLogCustomer.id', 'is not', 'null')
.where('jtdPhoneLogCustomer.id', 'is not', null)

But neither of them works.

...
.where('user.id', 'is not', 'null')

compiled to

// sql
select from
...
where "user"."id" is not $1

// binding
[ 'null' ]

It looks like the problem is that Kysely passes 'null' as a parameter.

Maybe we should add chain methods whereNull and whereNotNull, similar to Knex:

.whereNotNull('user.id')

Feature Request: nice to have `executeTakeFirstOrThrow`

async function demo() {
  const person = await db
    .selectFrom('person')
    .innerJoin('pet', 'pet.owner_id', 'person.id')
    .select(['first_name', 'pet.name as pet_name'])
    .where('person.id', '=', 1)
    .executeTakeFirst()

  if (person) {
    person.pet_name
  }
}
async function demo() {
  const person = await db
    .selectFrom('person')
    .innerJoin('pet', 'pet.owner_id', 'person.id')
    .select(['first_name', 'pet.name as pet_name'])
    .where('person.id', '=', 1)
    .executeTakeFirstOrThrow()

  person.pet_name
}

It can be cumbersome to check whether the return value is undefined, so it would be nice to have an executeTakeFirstOrThrow chain method.
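In the meantime, a small userland helper can emulate this on top of the existing executeTakeFirst (a sketch, not a Kysely API):

// Await a query's first row, or throw if the query returned nothing.
async function takeFirstOrThrow<T>(resultPromise: Promise<T | undefined>): Promise<T> {
  const row = await resultPromise
  if (row === undefined) {
    throw new Error('Query returned no rows')
  }
  return row
}

// Usage:
// const person = await takeFirstOrThrow(db.selectFrom('person').selectAll().executeTakeFirst())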

What do you think?

SSL connection

Hi koskimas.

I tried to migrate some personal scripts to Kysely and noticed that SSL is not supported yet.

I'm using Postgres on Heroku, and they require connecting over SSL.

It would be nice if we could pass more options to node-pg: https://node-postgres.com/features/ssl
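For context, here is a rough sketch of what I mean, assuming the dialect would forward its configuration to node-pg (the ssl field is node-pg's option, not something Kysely documents today):

import { Kysely, PostgresDialect } from 'kysely'

interface Database {
  person: { id: number }
}

// Hypothetical: pass node-pg's SSL options through the dialect config.
const db = new Kysely<Database>({
  dialect: new PostgresDialect({
    host: process.env.PGHOST,
    database: process.env.PGDATABASE,
    // node-pg SSL options: https://node-postgres.com/features/ssl
    ssl: { rejectUnauthorized: false },
  }),
})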

[Bug] camelcase plugin does not work inside transaction

db
  .insertInto('person')
  .values(
    persons.map<Partial<db.main.PersonInsertable>>((member) => ({
      source: '...',
      type: '...',
      createdAt: new Date()
    }))
  )
  .compile()

Current behavior

without transaction

insert into "person" ("source", "type", "created_at", ...

with transaction

insert into "person" ("source", "type", "createdAt", ...

Expected behavior

without transaction

insert into "person" ("source", "type", "created_at", ...

with transaction

insert into "person" ("source", "type", "created_at", ...

Migrator shouldn't sort migrations

Currently, the returned record will be sorted.
There are several problems:

  • One would need a proper naming scheme (e.g: ISO8601 prefix) and a process to enforce it.
    It is doable but that's more work.
  • This does not allow a custom MigrationProvider to provide its own ordering scheme (e.g: linked list).

In other tools such as alembic, a migration has a reference to a previous migration.
This allows the tool to sort migrations independent of file names.
This also provides another benefit: in a project with multiple developers, a situation where "forks" appear in the migration history can be easily detected: The history would no longer be a linear chain.

My proposal (a code sketch of the proposed types follows the list):

  1. For backward compatibility, the current behavior would still be supported.
  2. Extend the return type of MigrationProvider.getMigrations to Record<string, Migration> | Migration[], or, if a name is needed for internal state tracking: Record<string, Migration> | NamedMigration[] where NamedMigration = Migration & { name: string }.
  3. When Migrator gets a NamedMigration[], it wouldn't sort and immediately use the array as is.
  4. Change the compatibility code path of Migrator to generate NamedMigration[] from Record<string, Migration>.
  5. Make FileMigrationProvider do the same.
  6. Eventually remove Record<string, Migration> in the next major version.
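A rough sketch of the proposed types (illustrative only; it assumes Migration and MigrationProvider are importable from 'kysely'):

import { Migration, MigrationProvider } from 'kysely'

// Proposed: a migration that carries its own name, so ordering can come
// from the provider instead of from sorting record keys.
type NamedMigration = Migration & { name: string }

// Hypothetical provider that supplies migrations already in the desired order.
class OrderedMigrationProvider implements MigrationProvider {
  constructor(private readonly list: NamedMigration[]) {}

  async getMigrations(): Promise<Record<string, Migration>> {
    // Today's interface still requires a record; under the proposal the
    // array itself would be returned and used as-is, without sorting.
    return Object.fromEntries(this.list.map((m) => [m.name, m]))
  }
}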

MySQL transaction does not work yet?

Am I doing something wrong? I saw you have tests for MySQL too, so I'm not sure why this doesn't work.

SELECT VERSION();

10.2.26-MariaDB
{
  "name": "mysql",
  "dependencies": {
    "kysely": "0.7.5",
    "mysql2": "2.3.2"
  }
}
import { Kysely, MysqlDialect } from 'kysely';

(async function fn() {
  const db = new Kysely({
    dialect: new MysqlDialect({
      // credentials
    }),
  });

  await db.transaction().execute(async (tx) => {
    console.log('INSIDE TRANSACTION');
  });
})();

/Users/[email protected]/node_modules/kysely/dist/cjs/dialect/mysql/mysql-driver.js:40
return new Promise((resolve, reject) => {
^
TypeError: Cannot read properties of undefined (reading 'getConnection')
at /Users/..../[email protected]/node_modules/kysely/dist/cjs/dialect/mysql/mysql-driver.js:41:24
at new Promise ()
at MysqlDriver.#acquireConnection

Broken links

The following links from the README lead to a 'Page not found' page. They may have been moved elsewhere; where can I find the examples?

# Query examples

## Select queries

You can find examples of select queries in the documentation of the 
[select method](https://koskimas.github.io/kysely/classes/QueryBuilder.html#select) and
the [where method](https://koskimas.github.io/kysely/classes/QueryBuilder.html#where) 
among other places.

## Update queries

See the [set method](https://koskimas.github.io/kysely/classes/QueryBuilder.html#set) and the
[updateTable method](https://koskimas.github.io/kysely/classes/Kysely.html#updateTable)
documentation.

## Insert queries

See the [values method](https://koskimas.github.io/kysely/classes/QueryBuilder.html#values) and the
[insertInto method](https://koskimas.github.io/kysely/classes/Kysely.html#insertInto)
documentation.

Thank you!

Doing bulk queries with .raw() or something similar

Right now I'm doing a couple of bulk queries (user input) using the sqlite client directly.

I'd like to incorporate this into Kysely so I can hide my connection object entirely and take advantage of Kysely for locking/transactions. However, I have two issues:

  • Kysely only executes the first query I pass in using .raw().execute(). Standard behavior for sqlite with the methods the driver calls on the sqlite object.
  • Even if this did work (I could modify the driver) the return types would not match. Sqlite kicks back an object that looks something like this for bulk queries (using exec()):

{ columns: string[], values: any[][] }[]

This obviously has two issues. One is that it may contain more than one result set, whereas Kysely only expects one. The other is that if I run a query and need to know the column names, this shape will give them to me, whereas Kysely's standard result type is an array of objects, so I won't know the column names if the query has no results.

Is this something you'd consider incorporating? Maybe a separate bulk() method, similar to the raw method but with a different response type. As far as I can tell it could be done, but it would require modifying all of the drivers to either change the response type to
{ columns: string[], values: any[][] }[] and have Kysely convert it to object[] outside of the driver for regular queries, or to add a separate executeBulkQuery() method to each driver's DatabaseConnection. Not sure if it's worth it, but otherwise anyone who wants to do bulk queries has to move the connection/locking/transaction logic outside of the Kysely driver.
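To make the second option concrete, here is a rough sketch of the hypothetical driver extension (names like executeBulkQuery are mine, not Kysely API):

// Shape of a bulk result, mirroring what sqlite's exec() hands back.
interface BulkQueryResult {
  columns: string[]
  values: unknown[][]
}

// Hypothetical optional extension to a driver's DatabaseConnection.
interface BulkCapableConnection {
  executeBulkQuery(rawSql: string): Promise<BulkQueryResult[]>
}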

I'm also not sure what format postgres and mysql give back by default so that may be an issue as well.

Allow passing in db interfaces for insert operations

Currently Kysely relies on the user passing in db.generated when inserting not-null columns with default values, since it only knows that these columns are not null. If we could pass in separate interfaces for insert operations, this process would be more type-safe.

Here's an idea for the API:

interface Person {
  id: number
  first_name: string
  last_name: string
  gender: 'male' | 'female' | 'other'
}

interface Database {
  person: Person
}

interface InsertablePerson {
  // id is generated as identity
  id?: number
  first_name: string
  last_name: string
  gender: 'male' | 'female' | 'other'
}

interface InsertableDatabase {
  person: InsertablePerson
}

// Insertable interfaces are passed to the second type param.
const db = new Kysely<Database, InsertableDatabase>({
  dialect: new PostgresDialect({
    host: 'localhost',
    database: 'kysely_test',
  })
})

Some libraries that generate db table interfaces, for example kanel, also generate interfaces for insert operations, which account for generated values. It'd be very useful if we could pass those interfaces to Kysely.
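As a stopgap, a userland mapped type can derive an insertable interface from the selectable one by marking generated columns optional (plain TypeScript, not a Kysely feature):

// Make the listed (generated) keys optional, keep everything else required.
type AsInsertable<T, Generated extends keyof T> =
  Omit<T, Generated> & Partial<Pick<T, Generated>>

// Equivalent to the hand-written InsertablePerson above.
type DerivedInsertablePerson = AsInsertable<Person, 'id'>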

Is the compile step necessary?

Is there a design decision I'm not aware of that requires a chainable functions API?
It seems typed SQL should be possible with Template Literal Types only.

https://twitter.com/c_pick/status/1307433762914009090
https://github.com/codemix/ts-sql

There is nothing wrong with the current API, I am just curious, since I am using such "selectors". The problem with a "selector" is that it requires the React useCallback Hook because the callback is not referentially transparent. So I am asking myself whether a ts-sql style approach is possible given Kysely's requirements.

const { data } = useContactsDB(
  (db) => db.selectFrom('contact').selectAll().execute(),
);

Missing pg array operator (&&)

The && operator seems to be missing in where.

I can still use raw('&&'), so it's not a real problem.

Also, any plan to add andWhere and make it take an array for non-dynamic queries?
The where and query builder callback is great but kind of verbose for simple cases.

Thanks for the lib 👍

Node-less version

I know it's not Kysely's goal to be multi-platform, aka universal; still, I believe it would be a very nice addition with relatively minimal effort. With the advent of local-first software, and especially the awesome SQLite, it would be nice to be able to use Kysely in browsers or React Native without hacks.

I'm opening an issue because this can be helpful for someone else. Maybe it should be mentioned in the readme.

webpack: (config) => {
  // Remove kysely server deps.
  config.resolve.fallback = {
    ...config.resolve.fallback,
    mysql2: false,
    pg: false,
    crypto: false,
    path: false,
    fs: false,
  };
  return config;
},

From the architectural point of view, I don't see a reason why a typed string builder should depend on Node.js at all. The single-responsibility principle FTW.

Decouple migrations from storage

The current migration solution assumes a list of files exists on disk.

This can be problematic when bundling the project into a single file (e.g. using esbuild or webpack). The bundler follows import or require statements and combines the code into a single file. Since we are using file system reads, the migration files will not be bundled, as bundlers do not follow fs.read* operations, or even dynamic import or require statements.

The solution, I think, is to decouple migration loading behind a common interface (e.g. MigrationProvider) that returns an array of migrations.

interface MigrationProvider {
  migrations(): Promise<Migration[]>
}

Each concrete implementation would then implement it, and we could have interchangeable migration loaders.

The current solution would become FileSystemMigrationProvider:

class FileSystemMigrationProvider implements MigrationProvider {
  async migrations(): Promise<Migration[]> {
    // return fs.read()...
  }
}

For bundling use cases, there would be another one:

// maybe a bad name, but best I can think of at the moment
class InMemoryMigrationProvider implements MigrationProvider {
  constructor(private readonly migrationList: Migration[]) {}

  async migrations(): Promise<Migration[]> {
    return this.migrationList
  }
}

Application code would then simply import all migrations and supply them to the InMemoryMigrationProvider constructor, which forces the bundler to include those files in the bundle.
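A usage sketch under this proposal (the migration module paths are hypothetical; each module is assumed to export up/down):

// Statically imported, so the bundler picks the files up.
import * as migration001 from './migrations/001_create_person'
import * as migration002 from './migrations/002_create_pet'

const provider = new InMemoryMigrationProvider([migration001, migration002])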


TypeORM follows a similar pattern, and allows providing an array of migration classes as a config option.

`insert into A select * from A`?

It doesn't seem possible to chain db.insertInto("A").selectFrom("A").selectAll().

Is that possible in some other way? A workaround is to first fetch the rows from A and then insert them; however, that incurs an extra round trip to the database.

Missing foreign key constraints

Hey! Really impressive project.

I'm not able to fully convert my Knex migrations into Kysely because the following PostgreSQL foreign key constraints don't seem to be supported yet as of [email protected].

col
  .references('foo.id')
  .onDelete('restrict')
            ^^^^^^^^^^
  .onUpdate('cascade')
   ^^^^^^^^

Allow subQuery in `.values()`

Currently, it doesn't seem possible to do something like

db.with("A", db => {
  db.selectFrom("B").selectAll().where("id", "=", 1)
}).insertInto("C").values({
  foo: qb => qb.subQuery("A").select("id")
});

It should be possible to work around this using raw, but that's not type-safe. Perhaps the InsertValueExpression type could be modified to also accept subqueries?

Support Postgres generated column

I didn't find a way to create a Postgres generated column, or to alter a table to add one (there's no view/materialized view support either).

My use case is storing a tsvector representation of another column to allow faster searching with a GIN index.

Link to the doc just in case

Dedupe identical joins

Say you have two optional where statements you need to add to a query, both requiring the same join, but you don't want to add the join if it's not needed. It's doable like this:

let qb = ctx.db.selectFrom('person')
let filterA: string | undefined = 'a'
let filterB: string | undefined = 'b'

if (filterA) {
  qb = qb
    .innerJoin('pet', 'person.id', 'pet.owner_id')
    .where('pet.name', '=', filterA)
}

if (filterB) {
  qb = qb
    .innerJoin('pet', 'person.id', 'pet.owner_id')
    .where('pet.name', '=', filterB)
}

const res = await qb.selectAll('person').execute()

But it produces two identical joins if both filters are applied, which causes an error in the DB. You could do this instead:

let qb = ctx.db.selectFrom('person')
let filterA: string | undefined = 'a'
let filterB: string | undefined = 'b'

if (filterA || filterB) {
  let qb2 = qb.innerJoin('pet', 'person.id', 'pet.owner_id')

  if (filterA) {
    qb2 = qb2.where('pet.name', '=', filterA)
  }

  if (filterB) {
    qb2 = qb2.where('pet.name', '=', filterB)
  }

  qb = qb2
}

const res = await qb.selectAll('person').execute()

but is that enough?

Suggested solution

The first example would work if Kysely ignored duplicate identical joins (the types would still change, of course).

SQLite

This project is so awesome, thank you!

How can I help with SQLite?

Generics partially working on column names - but not values.

I've gotten generics partly working: TypeScript will infer column names. However, it does not infer the types of values.

For example, trying to do query.where("field", "=", "value"), the column name works correctly. It knows which columns are valid and which are not. However, for the value parameter, there is no data type that will work.

Here's some code which explains more clearly what works and what doesn't.

import { SelectQueryBuilder, UpdateQueryBuilder, UpdateResult } from "kysely";

type TestMeta = {
    isDirty: boolean
    hasBeenPersisted: boolean
    syncMessage: string
}
type SyncDef<DB extends { [key in K]: TestMeta }, K extends keyof DB> = {
    select: () => SelectQueryBuilder<DB, K, {}>
    update: () => UpdateQueryBuilder<DB, K, UpdateResult>
}

async function testSyncDef<DB extends { [key in K]: TestMeta }, K extends keyof DB>(def: SyncDef<DB, K>) {

    //test.isDirty is a boolean - GOOD!
    const test: DB[K] = undefined!;
    const dirty = test.isDirty;

    //these columns work - GOOD!
    def.select().whereRef("isDirty", "=", "hasBeenPersisted")

    //this gives an error on the non-existent column name - GOOD!
    def.select().where("doesNotExist", "=", "X");

    //using select
    //results are fine - GOOD!
    const isDirtyResults = await def.select().select(["isDirty", "hasBeenPersisted"]).execute();
    isDirtyResults[0].isDirty;
    isDirtyResults[0].hasBeenPersisted;

    //using selectAll
    //results are a promise of array of UnionToIntersection<DB[K]>
    //results does not have isdirty, even though DB[K] does? maybe because of UnionToIntersection type? - NOT GOOD
    const results = await def.select().selectAll().execute();
    results[0].isDirty;

    //Argument of type '{ isDirty: true; }' is not assignable to parameter of type 'MutationObject<DB, K>
    //column names are recognized, but it works when all values are undefined - no types work, even null (probably related to the below error)
    await def.update().set({
        isDirty: undefined,
        hasBeenPersisted: undefined,
        syncMessage: undefined,
        xxx: undefined,//gives an error for non-existent column - good!
    })

    //this gives an error on the "true" - last arg - not good - there is no data type that works here
    //Argument of type 'true' is not assignable to parameter of type 'FilterValueExpressionOrList<DB, K, "isDirty">'
    def.select().where("isDirty", "=", true);

}

A few ideas and thank you

Hi there, I have been trying this lib this weekend. By far, this is the most powerful and lightweight type-safe SQL query builder IMO (compared to Knex, Zapatos and Slonik).

I'm using prisma2 in production but would love to migrate some of the queries to Kysely once it becomes stable.

I love the amazing work you have done, and I think it could be improved further with stricter types.

Assuming I have the following DDL:

CREATE TABLE person
(
  id                  INTEGER GENERATED BY DEFAULT AS IDENTITY
    CONSTRAINT person_pk
      PRIMARY KEY,
  created_at          TIMESTAMP WITH TIME ZONE DEFAULT NOW()            NOT NULL,
  name                text                                              NOT NULL
)

I would expect the following code to raise a type error because name is non-nullable:

interface Person {
  id: number
  created_at: string
  name: string
}

interface Database {
  person: Person
}

const db = new Kysely<Database>({ /* dialect config */ })

await db.insertInto('person').values({})

Currently TypeScript does not raise an error, because the values method's parameter shape is Partial<Model>:

export type MutationObject<DB, TB extends keyof DB> = {
    [C in keyof DB[TB]]?: MutationValueExpression<DB[TB][C]>;
};

So, instead of passing a single database model definition to new Kysely<Models>, it would be nice to let the user pass both a selectable definition and an insertable definition in the future. Then we could get stricter types for both insert and update mutations.

interface PersonSelectable {
  id: number
  created_at: string
  name: string
}

interface PersonInsertable {
  id?: number
  created_at?: string
  name: string
}

interface DatabaseSelectable {
  person: PersonSelectable
}

interface DatabaseInsertable {
  person: PersonInsertable
}

const db = new Kysely<DatabaseSelectable, DatabaseInsertable>({ /* dialect config */ })

// it will throw error because name is required in PersonInsertable
await db.insertInto('person').values({})

Also, do you have any plan to build or integrate a database typing generator into this library? I'm using a patched version of sql-ts for now; it works great but has some limitations. I could help with that if you like.

Keep up the good work 👍

Is it possible to make RHS of `where` expression typesafe?

Not sure whether this is a feature request or a bug: currently, the RHS of a where expression is not type-aware.

interface Person {
  id: number;
  first_name: string;
  last_name: string;
  gender: 'male' | 'female' | 'other';
}

const db = new Kysely<{ person: Person }>({});


// this will throw in runtime because WHATEVER is not a valid enum value.
// is it possible to throw in compile time?

db.selectFrom('person')
  .selectAll()
  .where('gender', '=', 'WHATEVER'); 

related code

https://github.com/koskimas/kysely/blob/master/src/query-builder/query-builder.ts#L250-L254

I attempted to capture the LHS type in a type variable C, but without luck, since 'male' | 'female' | 'other' | string widens to string.

where<C extends ReferenceExpression<DB, TB>>(
  lhs: C,
  op: FilterOperatorArg,
  rhs: ValueExpressionOrList<DB, TB, C>
): QueryBuilder<DB, TB, O>;

export type ValueExpression<DB, TB extends keyof DB, C> =
  | DB[TB][C]
  | PrimitiveValue
  | AnyQueryBuilder
  | QueryBuilderFactory<DB, TB>
  | RawBuilder<any>
  | RawBuilderFactory<DB, TB>;

export type ValueExpressionOrList<DB, TB extends keyof DB, C> =
  | ValueExpression<DB, TB, C>
  | ValueExpression<DB, TB, C>[];
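For reference, here is a minimal standalone sketch (a hypothetical where signature of my own, not Kysely's actual one) showing that if the RHS type is derived purely from the captured LHS key, without a union with string or PrimitiveValue, the literal union is preserved:

interface Gendered {
  gender: 'male' | 'female' | 'other'
}

// Hypothetical signature: the value type is indexed by the column key only.
declare function where<C extends keyof Gendered>(
  column: C,
  op: '=',
  value: Gendered[C]
): void

where('gender', '=', 'male')     // OK
// @ts-expect-error 'WHATEVER' is not assignable to 'male' | 'female' | 'other'
where('gender', '=', 'WHATEVER')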

Update:

If we could get the exact type of the LHS, would we still need the PrimitiveValue type?

0.9.4 broke browser compatibility

I am running Kysely in the browser and everything seems to work, except that version 0.9.4 broke browser compatibility with this line:

const startTime = process.hrtime.bigint();

There is no process in browsers of course. There are two possible workarounds.

  1. Browser monkey-patching
  2. Replace process.hrtime with universal code.

Both can be done with https://github.com/kumavis/browser-process-hrtime.
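For example, a minimal sketch of option 2 using only standard globals (illustrative, not Kysely code):

// Millisecond timestamps that work in both Node.js and browsers.
function nowMs(): number {
  const proc = (globalThis as { process?: { hrtime?: { bigint(): bigint } } }).process
  if (proc?.hrtime?.bigint) {
    return Number(proc.hrtime.bigint()) / 1e6
  }
  return performance.now()
}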

The solution depends on your vision for Kysely. If it's aimed at Node.js only, then it's up to users to monkey-patch the environment. But I suppose Kysely can easily be universal, e.g. used in React Native as well.

Kysely + Objection.js

Is there any intent to switch Objection.js from Knex.js to Kysely in the future?

Nice work by the way.

RFC: Decorators support

Big Thanks to @koskimas for writing this amazing library.

Has Kysely considered supporting class-based type definitions? Unlike interfaces, classes support decorators, which would help us implement more powerful and flexible features.

For example:

TypeScript property decorators could attach metadata to the class. This means we could use KyselyPlugin to create plugins like auto-updating updatedAt, encrypted fields, hashed passwords, etc.
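A minimal sketch of the idea (a hypothetical decorator in plain TypeScript with experimentalDecorators enabled; not a Kysely API):

// Collect per-class column metadata that a plugin could later read.
const columnMeta = new Map<Function, Record<string, { autoUpdate?: boolean }>>()

function AutoUpdate(): PropertyDecorator {
  return (target, propertyKey) => {
    const ctor = target.constructor
    const meta = columnMeta.get(ctor) ?? {}
    meta[String(propertyKey)] = { autoUpdate: true }
    columnMeta.set(ctor, meta)
  }
}

class PersonTable {
  @AutoUpdate()
  updatedAt!: Date
}

// A plugin could consult columnMeta to set updatedAt automatically on update queries.
console.log(columnMeta.get(PersonTable)) // { updatedAt: { autoUpdate: true } }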

And this can also be used to solve the problem in #5

This could make Kysely more powerful than a SQL builder and more flexible than an ORM.

SQL Server support

I am really impressed by Kysely and your work. Any plan for a SQL Server dialect?

Using generics?

I'm trying to use generics to be able to pass around an entity definition, and use kysely to operate on it. Is this possible?

With this code, calling selectAll().execute() is typed as (Person & Pet)[]. I would expect it to be ItemSchema[K][].

type Pet = {  
    ownerName: string  
}  
type Person = {  
    name: string  
}  
type ItemSchema = {
    persons: Person
    pets: Pet
}
type EntityDef<K extends keyof ItemSchema> = {
    table: TableExpression<ItemSchema, K>
}
function doStuff<K extends keyof ItemSchema>(db: Kysely<ItemSchema>, def: EntityDef<K>) {
    const results = db.selectFrom(def.table).selectAll().execute();
}

Improving where's comparison syntax

const person = await db
    .selectFrom('person')
    .innerJoin('pet', 'pet.owner_id', 'person.id')
    .select(['first_name', 'pet.name as pet_name'])
    .where('person.id', '=', 1)
    .executeTakeFirst()

In this query, the user has to manually write '=' or another comparison operator as the middle argument of where. IMO it does not look that good, and improvements could be made here.

Taking inspiration from Prisma, which to be fair has a great query builder API:

It could be changed to something like this:

  const person = await db
    .selectFrom('person')
    .innerJoin('pet', 'pet.owner_id', 'person.id')
    .select(['first_name', 'pet.name as pet_name'])
    .where('person.id', { gt: 1 })
    .executeTakeFirst()

You can find more examples here: https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#get-all-post-records-where-likes-is-greater-than-9
 

Feature Request: `orderBy` clause allow raw builder

I'm in a situation where I need the Postgres-only ORDER BY ... NULLS FIRST / NULLS LAST. Right now, orderBy only allows the OrderByDirection enum.

It would be useful if both parameters allowed raw queries.

Example usages:

SELECT * FROM table ORDER BY abs(id)
SELECT * FROM table ORDER BY id + 1
SELECT * FROM table ORDER BY id NULLS LAST

Feature request - result of migration run

Would it be possible for the migration function to return some information about the migrations performed? Even just a count of migrations executed would be helpful (we use this to automatically drop and recreate some views that are auto-generated based on the schema, but we don't want to do this if the schema hasn't changed).

Improve addColumn for SQLite

SQLite's flexible typing requires special treatment.

We need a 'blob' data type and the ability to not specify a type at all.

A column with affinity BLOB does not prefer one storage class over another and no attempt is made to coerce data from one storage class into another.

If the declared type for a column contains the string "BLOB" or if no type is specified then the column has affinity BLOB.

Maybe adding a 'blob' type to the list of supported types is enough.

AWS Lambda Node 14 Support?

@koskimas I was wondering (due to our heavy use of AWS services, including Lambda) why this package is so heavily tied to Node.js version 16.

Sadly, even though 16.x is the LTS, AWS is behind and doesn't support anything past 14.x without jumping through hoops, which means we can't stay up to date with the new features being added to Kysely.

Add documentation regarding best way to specify a nullable column

I see two sensible ways to specify the type corresponding to a nullable SQL column:

interface Entity {
  attribute: string | null;
}
interface Entity {
  attribute?: string;
}

It's not clear from the documentation I was able to find which should be preferred when using Kysely. In practice I see an advantage to each:

  • The benefit of the first version is that the type of attribute for a select will be string | null, which keeps a nice mapping between a SQL NULL and a JS null. The second version results in string | undefined.
  • On the other hand, if you use the second version you can skip nullable columns when using insertInto. This is convenient. And for what it's worth, the second option was what I intuitively tried first.

I'm not clear whether there are other consequences of choosing one over the other. If either is fine, I think the pros/cons should be noted in the documentation. If one is preferred, that should definitely be noted! :)

(The two options could be combined, or a union with undefined could be thrown in the mix to result in string | null | undefined (yuck), but I think it's obvious that the user ought to avoid that and so I want to focus on these two options for now.)

camelcase plugin does not work when a column name includes `_${number}`.

Example

selectFrom('company').select(['amountSpend_12m']).limit(1).compile()

// log: select "amount_spend_12m" from "company" limit $1


selectFrom('company').select(['amountSpend_12m']).limit(1).execute()

// log: [ { amountSpend12m: 0 } ]

expected // log: [ { amountSpend_12m: 0 } ]
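My guess at what's happening: snake_case/camelCase conversion is not a round trip for names like this. A standalone sketch with naive conversion functions (not the plugin's actual code) shows the underscore getting lost on the way back:

// Naive conversions, for illustration only.
const toSnake = (s: string) => s.replace(/([A-Z])/g, (m) => '_' + m.toLowerCase())
const toCamel = (s: string) => s.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase())

toSnake('amountSpend_12m')          // 'amount_spend_12m'  -> what the compiled query uses
toCamel('amount_spend_12m')         // 'amountSpend12m'    -> what the result row key becomes
toCamel(toSnake('amountSpend_12m')) // 'amountSpend12m' !== 'amountSpend_12m'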

How could I set a foreign key on MySQL?

The following migration sample code doesn't seem to generate correct SQL for the foreign key reference on MySQL.
https://github.com/koskimas/kysely#migrations

Environment

  • Kysely version: 0.8.1
  • DB Engine: MySQL5.7.35
await db.schema
  .createTable('pet')
  .addColumn('id', 'integer', (col) => col.increments().primaryKey())
  .addColumn('name', 'varchar', (col) => col.notNull().unique())
  .addColumn('owner_id', 'integer', (col) =>
    col.references('person.id').onDelete('cascade')
  )
  .addColumn('species', 'varchar')
  .execute()

I changed the column type from integer to bigint for MySQL.

The generated SQL:

CREATE TABLE `pet` (
  `id` bigint NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` varchar(50) NOT NULL UNIQUE,
  `owner_id` bigint REFERENCES `person` (`id`) ON DELETE CASCADE,
  `species` varchar(50))	

One example of the expected SQL:

CREATE TABLE `pet` (
  `id` bigint NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` varchar(50) NOT NULL UNIQUE,
  `owner_id` bigint,
  `species` varchar(50),
  FOREIGN KEY (owner_id) REFERENCES person (id) ON DELETE CASCADE)

Log the query?

Hey, is there a way to log the generated query?

I tried this, but it did not print out the queries.

export const db = new Kysely<ModelTypeMap>({
  dialect: new PostgresDialect({
    host: "localhost",
    database: "postgres",
    password: "postgres",
    user: "postgres",
    log: (message) => {
      console.info(message);
    },
  }),
});

// Output

/*
checking client timeout
connecting new client
new client connected
pulse queue
no queued requests
[
  {
    id: 2,
    slug: 'a',
    name: 'a',
    last_updated: 2021-11-29T11:51:27.581Z,
    thumbnail_url: 'aa',
    default_source: 'a',
    status: 'ENABLED'
  }
]
remove idle client
*/
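One workaround: compile the query and log the SQL yourself before executing it. A sketch reusing the db instance above ('source' is a hypothetical table name in ModelTypeMap; sql/parameters are my assumption about the compiled query's shape):

const query = db.selectFrom('source').selectAll()

// compile() returns the SQL string and bound parameters without executing anything.
const compiled = query.compile()
console.info(compiled.sql, compiled.parameters)

const rows = await query.execute()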

`preventAwait` does not forward exception

await this.databaseService.transaction(async (tx) => {
  throw new Error("oops")
})
    err: {
      "type": "Error",
      "message": "don't await TransactionBuilder instances directly. To execute the transaction you need to call the `execute` method",
      "stack":
          Error: don't await TransactionBuilder instances directly. To execute the transaction you need to call the `execute` method
    }

MySQL Support

Starting to run benchmarks using Kysely against all the serverless options. Now that the Data API is working, next up is PlanetScale, which is just MySQL.

What are your thoughts on a MySQL dialect?
