
aiosql's Introduction

aiosql - Simple SQL in Python

SQL is code. Write it, version control it, comment it, and run it using files. Writing SQL as strings inside Python programs makes it hard to reuse in SQL GUIs or CLI tools like psql. With aiosql you can organize your SQL statements in .sql files and load them into your Python application as callable methods, without losing the ability to use the files as you would any other SQL.

This project supports standard PEP 249 and asyncio-based drivers for SQLite (sqlite3, aiosqlite, apsw), PostgreSQL (psycopg (3), psycopg2, pg8000, pygresql, asyncpg), MySQL (PyMySQL, mysqlclient, mysql-connector), MariaDB (mariadb) and DuckDB (duckdb) out of the box. Note that detailed feature support may vary with the actual capabilities of the underlying driver and database engine.

This module brings Kris Jenkins' yesql Clojure library to the Python ecosystem. Extensions to support other database drivers can be written by you! See: Database Driver Adapters. Feel free to open a pull request!

License: BSD 2-Clause.

Usage

Install from PyPI, for instance by running pip install aiosql.

Then write parametric SQL queries in a file and execute them from Python methods, e.g. this greetings.sql file:

-- name: get_all_greetings
-- Get all the greetings in the database
select greeting_id, greeting
  from greetings
 order by 1;

-- name: get_user_by_username^
-- Get a user from the database using a named parameter
select user_id, username, name
  from users
 where username = :username;

This example uses an imaginary SQLite database with greetings and users. It prints greetings in various languages to the user and showcases the basic feature: loading queries from a SQL file and calling them by name in Python code.

You can use aiosql to load the queries in this file for use in your Python application:

import aiosql
import sqlite3

queries = aiosql.from_path("greetings.sql", "sqlite3")

with sqlite3.connect("greetings.db") as conn:
    user = queries.get_user_by_username(conn, username="willvaughn")
    # user: (1, "willvaughn", "William")

    for _, greeting in queries.get_all_greetings(conn):
        # scan [(1, "Hi"), (2, "Aloha"), (3, "Hola"), …]
        print(f"{greeting}, {user[2]}!")
    # Hi, William!
    # Aloha, William!
    # …

Or even asynchronously, with two SQL queries running in parallel using aiosqlite and asyncio:

import asyncio
import aiosql
import aiosqlite

queries = aiosql.from_path("greetings.sql", "aiosqlite")

async def main():
    async with aiosqlite.connect("greetings.db") as conn:
        # Parallel queries!
        greetings, user = await asyncio.gather(
            queries.get_all_greetings(conn),
            queries.get_user_by_username(conn, username="willvaughn")
        )

        for _, greeting in greetings:
            print(f"{greeting}, {user[2]}!")

asyncio.run(main())

It may seem inconvenient to provide a connection on each call. You may have a look at the AnoDB DB class, which wraps both a database connection and query functions in one connection-like extended object, including automatic reconnection if needed.

Why you might want to use this

  • You think SQL is pretty good, and writing SQL is an important part of your applications.
  • You don't want to write your SQL in strings intermixed with your Python code.
  • You're not using an ORM like SQLAlchemy or Django, with their large code footprints (on the order of 100k lines versus about 800 for aiosql), and you don't need to.
  • You want to be able to reuse your SQL in other contexts, such as loading it into psql or other database tools.

Why you might NOT want to use this

  • You're looking for an ORM.
  • You aren't comfortable writing SQL code.
  • You don't have anything in your application that requires complicated SQL beyond basic CRUD operations.
  • Dynamically loaded objects built at runtime really bother you.


aiosql's Issues

Single character parameters cause parsing issues

Python 3.10
aiosql 3.4
asyncpg 0.25.0

-- name: insert_into_symbol_arima_models*!
INSERT INTO symbol_arima_models(symbol, "interval", p, d, q, "P", "D", "Q", s, k, n_exog) VALUES (:symbol, :interval, :p, :d, :q, :P, :D, :Q, :s, :k, :n_exog);
sql = 'INSERT INTO symbol_arima_models(symbol, "interval", p, d, q, "P", "D", "Q", s, k, n_exog) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10$11og);'

$10$11og ???

OFFSET keyword seems to be ignored

aiosql==3.4.1
asyncpg==0.25.0

Using aiosql, a query that uses OFFSET like

-- name: get-courses
SELECT unitid
    , year
    , semester
    , unitcode
    , descr
    , estenrol
    , homepage
    , area
    , start_date
FROM unit_view
ORDER BY year, semester, unitcode
LIMIT :limit
OFFSET :offset;

the offset seems to be completely ignored.
Using psql 14.2 and Python 3.9.

Type annotations not available to users

aiosql does not make its type annotations available to packages that import it. This causes mypy to complain about missing library stubs.

Here is a minimal way to reproduce:

$ mkdir reproduce && cd reproduce
$ python -m venv .env
$ source .env/bin/activate
$ cat > requirements.txt <<EOF
aiosql==3.2.0
mypy
EOF
$ pip install -r requirements.txt
$ cat > test.py <<EOF
import aiosql
EOF
$ mypy --strict test.py
test.py:1: error: Skipping analyzing 'aiosql': found module but no type hints or library stubs
test.py:1: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports
Found 1 error in 1 file (checked 1 source file)

This can be fixed by ensuring a py.typed file is available in the final environment:

# Continuing from above (replace with your Python version)
$ touch .env/lib/python3.9/site-packages/aiosql/py.typed
$ mypy --strict test.py
Success: no issues found in 1 source file

As a workaround, users can add a type: ignore comment on the import aiosql line. However, this doesn't seem ideal, since aiosql does seem to include type annotations for most things. It would be nice if users could benefit from them without needing to create their own stubs.

I am not too familiar with Poetry, so I'm not sure whether there is an easy way to include this in wheels, etc. I would be happy to investigate / send a PR though. Let me know!
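On the packaging side, a sketch of how the marker could be shipped (assuming the project still builds with Poetry; the pyproject.toml excerpt below is hypothetical and only shows the relevant option):

# pyproject.toml excerpt (hypothetical): ship an empty aiosql/py.typed marker
# so installed copies advertise their inline annotations per PEP 561.
[tool.poetry]
# ...existing metadata...
include = ["aiosql/py.typed"]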

aiosql psycopg2 adapter is not escaping '%' in queries for psycopg2 driver

SQL queries loaded for the psycopg2 driver do not escape the % character.

The psycopg2 documentation states that in order to use the % operator in SQL queries, it must be escaped by adding another percent sign:

%      <- not escaped
%%     <- correctly escaped


You can test this behavior using the following code:

import aiosql
import psycopg2

conn = psycopg2.connect("host=... port=5432 dbname=... user=... password=...")
stmts = """
-- name: get-module
-- This query obtains the module.
SELECT 5 % :num;
"""

queries = aiosql.from_str(stmts, "psycopg2")
module = queries.get_module(conn, num=2)
print(module)

This produces the following exception:

Traceback (most recent call last):
  File ".../qloader.py", line 74, in <module>
    module = queries.get_module(conn, num=2)
  File ".../venv/lib/python3.9/site-packages/aiosql/queries.py", line 63, in fn
    return self.driver_adapter.select(
  File ".../venv/lib/python3.9/site-packages/aiosql/adapters/psycopg2.py", line 24, in select
    cur.execute(sql, parameters)
TypeError: dict is not a sequence

That exception is the same as the one that occurs when executing a query with psycopg2 directly without escaping the % character:

import psycopg2

conn = psycopg2.connect("host=... port=5432 dbname=... user=... password=...")
cur = conn.cursor()
module = cur.execute("SELECT 5 % %(num)s", {"num": 2}).fetch_one()
print(module)
cur.close()
conn.close()

This produces the following exception:

Traceback (most recent call last):
  File ".../qloader.py", line 80, in <module>
    module = cur.execute("SELECT 5 % %(num)s", {"num": 2}).fetch_one()
TypeError: dict is not a sequence
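A possible workaround, assuming the statement text reaches psycopg2 essentially verbatim apart from the named-parameter rewrite: double the literal percent sign in the .sql file yourself.

-- name: get-module
-- Literal % doubled so psycopg2's paramstyle handling leaves a single % behind.
SELECT 5 %% :num;

Alternatively, PostgreSQL's mod(5, :num) avoids the operator entirely.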

Could I drop support for Python 3.6?

This came up in #29: dataclasses aren't usable in this library until we support 3.7+.

Worse is the aioctxlib.py hack I wrote to get around not having contextlib.asynccontextmanager in < 3.7.

I think this library being a target for users of asyncio and modern versions of Python gives me a pretty good reason to drop 3.6 if supporting it is becoming inconvenient. Asyncio itself went through major changes from 3.6 to 3.7, and people should update ASAP for that.

Driver adapter for pymssql

I wanted to implement a driver adapter for pymssql. I cloned the repository and did the following:

  1. Copied psycopg2.py and renamed it as pymssql.py
  2. Then added it to adapters dict in aiosql.py
_ADAPTERS: Dict[str, Callable[..., DriverAdapterProtocol]] = {
    "aiosqlite": AioSQLiteAdapter,
    "asyncpg": AsyncPGAdapter,
    "psycopg2": PsycoPG2Adapter,
    "sqlite3": SQLite3DriverAdapter,
    "pymssql": PymssqlAdapter,
}
  3. Built the project as a wheel
  4. Installed it in a project environment

It simply worked. What am I missing? Is it really that simple to build adapters?

MaybeAcquire changes asyncpg.Connection state (such as custom codecs)

Let's say I want asyncpg.Connection to automatically use the default Python JSON serializer/deserializer.

I create a connection then call:

# Add default codec for postgres json and jsonb types
await conn.set_type_codec(
    'json',
    encoder=json.dumps,
    decoder=json.loads,
    schema='pg_catalog'
)
await conn.set_type_codec(
    'jsonb',
    encoder=json.dumps,
    decoder=json.loads,
    schema='pg_catalog'
)

If I then use AsyncPGAdapter.select_one, json values are returned as str instead of Dict. Re-setting the codecs after async with MaybeAcquire(conn) as connection: seems to fix it.

    async def select_one(self, conn, query_name, sql, parameters, record_class=None):
        parameters = self.maybe_order_params(query_name, parameters)
        async with MaybeAcquire(conn) as connection:
            import json
            await connection.set_type_codec(
                'json',
                encoder=json.dumps,
                decoder=json.loads,
                schema='pg_catalog'
            )
            await connection.set_type_codec(
                'jsonb',
                encoder=json.dumps,
                decoder=json.loads,
                schema='pg_catalog'
            )
            # json values are now returned or passed to record_class() as Dict instead of str
            result = await connection.fetchrow(sql, *parameters)
            if result is not None and record_class is not None:
                # import ipdb; ipdb.set_trace()
                result = record_class(**dict(result))
        return result

Templating Identifiers into SQL

honza/anosql#45 and honza/anosql#47

Both are asking for support for identifiers in order to template other values into the SQL, like table names. I've never had a need for this, and actually wasn't aware of it as a feature in psycopg2, but it's worth some thought to see if it's something we can do in this project.

One potential problem in the way of doing this is that other driver libraries like sqlite or asyncpg may not have safe features for it. I just really don't know, since I've never had to do this kind of table name formatting.

Difference between aiosql and anosql?

Hi, I just noticed the deprecation of anosql and, having not used it or aiosql yet, I was wondering what the differences are between the two libraries, if any?

How should I use `::jsonb` in aiosql's `.sql` file?

Hello, I want to use PostgreSQL's jsonb type; how should my SQL file be written?
Below are the SQL statements I tried, none of which work:

-- name: add-oplog<!
INSERT INTO operation_log (op, path, new, old, owner, creator)
VALUES (:op, :path, ':new'::jsonb, ':old'::jsonb, :owner, :creator)
RETURNING id, created, updated;

-- or:
-- name: add-oplog<!
INSERT INTO operation_log (op, path, new, old, owner, creator)
VALUES (:op, :path, :new::jsonb, :old::jsonb, :owner, :creator)
RETURNING id, created, updated;

-- or:
-- name: add-oplog<!
INSERT INTO operation_log (op, path, new, old, owner, creator)
VALUES (:op, :path, :new\:\:jsonb, :old\:\:jsonb, :owner, :creator)
RETURNING id, created, updated;

thanks.
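One way to sidestep the clash between aiosql's :name parameters and PostgreSQL's :: cast syntax is to use the standard CAST form instead. A sketch reusing the columns from the question (whether the driver accepts a plain JSON string for the jsonb parameter may vary):

-- name: add-oplog<!
INSERT INTO operation_log (op, path, new, old, owner, creator)
VALUES (:op, :path, CAST(:new AS jsonb), CAST(:old AS jsonb), :owner, :creator)
RETURNING id, created, updated;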

column names?

Hi! Is there any way to get the column names from a query? I browsed the docs, but didn't seem to find anything...
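aiosql returns whatever rows the underlying driver produces, so one option is to ask the driver for named rows. A sketch for the sqlite3 driver, reusing the README's greetings example (other drivers have their own row or record types with similar facilities):

import sqlite3
import aiosql

queries = aiosql.from_path("greetings.sql", "sqlite3")

conn = sqlite3.connect("greetings.db")
conn.row_factory = sqlite3.Row          # rows now expose .keys() and dict(row)

rows = list(queries.get_all_greetings(conn))
if rows:
    print(rows[0].keys())               # column names of this query
    print(dict(rows[0]))                # column name -> value mapping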

Trailing lines in SQL causes a parse exception.

Python 3.8.5 (default, Jul 27 2020, 08:42:51)
[GCC 10.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import aiosql
>>> import psycopg2
>>> from psycopg2.extras import execute_values
>>> sql_str = """
... -- name: create_schema#
... create table test (id int primary key, v1 int, v2 int);
...
... -- name: insert!
... INSERT INTO test (id, v1, v2) VALUES %s;
...
... -- name: update!
... UPDATE test SET v1 = data.v1 FROM (VALUES %s) AS data (id, v1)
... WHERE test.id = data.id;
...
... -- name: getem
... select * from test order by id;
... """
>>>
>>> queries = aiosql.from_str(sql_str, "psycopg2")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nackjicholson/repos/aiosql/aiosql/aiosql.py", line 83, in from_str
    query_data = query_loader.load_query_data_from_sql(sql)
  File "/home/nackjicholson/repos/aiosql/aiosql/query_loader.py", line 74, in load_query_data_from_sql
    query_data.append(self._make_query_datum(query_sql_str))
  File "/home/nackjicholson/repos/aiosql/aiosql/query_loader.py", line 43, in _make_query_datum
    raise SQLParseException(
aiosql.exceptions.SQLParseException: name must convert to valid python variable, got "create_schema#                                                   ".

Kind of hard to tell from that, but the way I copy-pasted the lines in created trailing spaces at the end of the lines, which messes up our parsing. There probably needs to be a regex change, or maybe a .strip() call somewhere in query_loader.py.
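A minimal illustration of the kind of fix suggested above (hypothetical helper; the real change would live in query_loader.py's name handling):

def normalize_query_name(raw: str) -> str:
    """Strip whitespace copy-pasted after '-- name: foo#' lines before validation."""
    return raw.strip()

assert normalize_query_name("create_schema#   \t") == "create_schema#"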

Allow comments in SQL files

It would be really convenient to allow miscellaneous SQL comments in the loaded SQL file, with something like:

    -- SQL queries...

    -- name: one
    SELECT 1;

I got aiosql.exceptions.SQLParseException: name must convert to valid python variable, got "__ SQL queries".

return cursor.rowcount from dml methods

I'm missing a feature that would have a method return the cursor.rowcount of the DML executed inside it.

For instance

SQL

-- name: bulk_ins!
insert into big_t (a, b, c, d)
select a, b, c, d
from other_big_t;

Python

count = queries.bulk_ins(conn)

I can work around it using the .sql methods, but this would be much more elegant.

Thank you

Niels Jespersen
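A sketch of the workaround mentioned above, assuming the generated method exposes its raw statement through a .sql attribute and a psycopg2-style cursor context manager is available:

with conn.cursor() as cur:
    cur.execute(queries.bulk_ins.sql)   # run the loaded statement directly...
    count = cur.rowcount                # ...so the cursor's rowcount stays accessible
conn.commit()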

Off-by-one on AsyncPGDriverAdapter.var_replacements

aiosql version: 3.2.0
Python version: 3.8

Description

First of all, thanks for maintaining such an excellent library. I've been itching for a chance to use it for a while now and finally have the opportunity.

I've come across an issue where if there are multiple child query clients with similar parameters, the AsyncPGDriverAdapter will not properly count them when processing the SQL file.

The issue appears to be isolated to this block:

var_name = gd["var_name"]
if var_name in self.var_replacements[query_name]:
    replacement = f"${self.var_replacements[query_name][var_name]}"
else:
    count += 1
    replacement = f"${count}"
    self.var_replacements[query_name][var_name] = count

If there are multiple queries with the same name which share an argument with the same name, the count will not be incremented when that var name is replaced.

There are two issues with this:

  1. All queries with the same name must implement matching arguments in exactly the same order. (Easily worked around)
  2. Subsequent queries' additional arguments will all be off-by-one. (Breaks this use-case)

Proposed solution

I think the simplest solution would be to namespace the keys in var_replacements with the query source as well as name. This would prevent argument collision from occurring at all.

Current Workaround

My current workaround is to load each query directory individually. This works for my use-case, but is still a bit of a gotcha.

How to connect to MySQL or MariaDB?

Hello,
What is the way to connect to MySQL? I tried this way but I got an error.

import aiosql
import pymysql

connection = pymysql.connect(
    host="data-mysql-rds.example.com",
    user="root",
    password="******",
    db="db",
    charset="utf8",
)


queries = aiosql.from_path("./sql", "pymysql")
# ValueError: Encountered unregistered driver_adapter: pymysql
queries = aiosql.from_path("./sql", connection)
# TypeError: 'Connection' object is not callable
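For what it's worth, the driver list at the top of this page includes PyMySQL, so with a recent aiosql release the driver name should be accepted directly. A sketch (connection parameters elided; some_query stands in for whatever the .sql files define):

import aiosql
import pymysql

queries = aiosql.from_path("./sql", "pymysql")   # driver is chosen by name

conn = pymysql.connect(host="...", user="...", password="...", db="db", charset="utf8")
# rows = queries.some_query(conn, ...)           # the connection is passed per call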

KeyError if I pass parameters that are not needed

  File "/mnt/data/Projects/2021/proj/proj/web.py", line 59, in phone
    called = await request.app.queries.called_get(request.app.db, **session)
  File "/home/eri/.local/lib/python3.9/site-packages/aiosql/queries.py", line 77, in afn
    return await fn(self, conn, *args, **kwargs)
  File "/home/eri/.local/lib/python3.9/site-packages/aiosql/adapters/asyncpg.py", line 82, in select_one
    parameters = self.maybe_order_params(query_name, parameters)
  File "/home/eri/.local/lib/python3.9/site-packages/aiosql/adapters/asyncpg.py", line 65, in maybe_order_params
    xs = [(self.var_replacements[query_name][k], v) for k, v in parameters.items()]
  File "/home/eri/.local/lib/python3.9/site-packages/aiosql/adapters/asyncpg.py", line 65, in <listcomp>
    xs = [(self.var_replacements[query_name][k], v) for k, v in parameters.items()]
KeyError: 'mac'

sql

-- name: called_get^
select * from calleds where called=:called;

Is this sort needed?

does asyncpg support https://legacy.python.org/dev/peps/pep-0249/#paramstyle ?
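A workaround sketch, following the reporter's own call: pass only the parameter the query declares instead of unpacking the whole session dict.

called = await request.app.queries.called_get(
    request.app.db,
    called=session["called"],   # 'called' is the only name the SQL refers to
)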

[QUESTION] SQL Statements with Dynamic Filtering

First off, sorry if this is covered somewhere. I checked the docs and looked through the source, and couldn't find anything that stood out.

How are you handling a SQL query where you can filter on any, all, or no columns in the table? Are you writing multiple SQL queries for every variation?

For example you have a Users table with id, username, first_name, and last_name. Now you want to query that table by filtering on any, none or all of the other columns so you end up with:

Select * from users;
select * from users where id = :id
select * from users where id = id or first_name like :first_name
select * from users where id = :id or first_name like :first_name
select * from users where id = :id or first_name like :first_name or last_name like :last_name 

How do you achieve this without writing endless queries?

I am trying to implement a grid that allows me to filter the data.
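One common single-query pattern is to let a NULL parameter disable its clause; a sketch along the lines of the example above (every parameter must still be supplied, possibly as None, and some drivers may need explicit casts):

-- name: find-users
select *
  from users
 where (:id is null or id = :id)
   and (:first_name is null or first_name like :first_name)
   and (:last_name is null or last_name like :last_name);

users = queries.find_users(conn, id=None, first_name="Wil%", last_name=None)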

Can't run test suite in dev mode without a local postgres instance

I found it annoying to have to install Postgres on my dev machine just to run the tests; most of the time I'm using a throwaway Docker container for that.
The following PR #15 is an attempt at solving it.
For those who use a local instance it basically changes nothing, they just need to run POSTGRES_DSN=postgres://.... pytest instead of pytest; those who don't have one can choose to point the new variable to wherever the database is.

Feature Ideas

Convenient way to declare that a query should select a single row.

I have to do this quite often.

with queries.get_user_cursor(conn, user_id=1) as cur:
     user = cur.fetchone()
     if user is None:
         raise NotFound()

Maybe use ^ as an aiosql operator to indicate a single-row query.

Some lightweight integration with python dataclasses.

At the risk of turning this into a bit of an ORM. It would also introduce a dependency on a strictly Python 3.7+ feature.

@dataclass
class User:
    user_id: int
    email: str
    name: str

-- name: get_user^
-- dataclass: User
select user_id,
       email,
       name
  from users
 where user_id = :user_id;

user = queries.get_user(conn, user_id=1)
assert user == User(user_id=1, email="[email protected]", name="Nack")

EDIT: I ended up doing this in a way that isn't strictly python3.7 only. It would let you build a named tuple, or any other class that takes keyword params.

Something like that would be cool. I don't exactly know how to make aiosql automatically build those. Maybe an aiosql.register_dataclass function?

aiosql.register_dataclass("User", User)
# Or should that be on the Queries class? I'm not sure right now.
queries.register_dataclass(User)

EDIT: Entirely removed register concept for database drivers. This would be passed in as a mapping to the from_str or from_path function.

Be able to provide a type hint for the resulting query class.

The queries object is dynamic and has no chance of being autocompleted.

One way users can do this now, without any aiosql change:

class QInterface:
    def get_user(self, conn, user_id: int) -> User:
        pass

queries: QInterface = aiosql.from_dir("...")

I wonder if there is a way to build this fake-o class automatically while we're dynamically building the real Queries object, and then use that to type hint the returned object. EDIT: I don't think so, but maybe adding some documentation that shows an example of how to do the above would be nice.

Creating database and tables ?

I have been meaning to give aiosql a try; the use of SQL files, along with the mapping to dataclasses as domain objects, makes it attractive for my use case.

However, not having much of a SQL background and just wanting to "store my Python dataclasses into a db for recovery later on", I miss the "database/tables creation" part in aiosql...

Maybe it is just a matter of documentation more than actual missing features? Any hint about that?

Thanks a lot !
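A sketch of the usual approach: put the DDL in its own named query using the # (script) operator that appears elsewhere in these issues, and run it once at startup. Reusing the README's greetings example:

-- schema.sql
-- name: create_schema#
create table if not exists greetings (
    greeting_id integer primary key,
    greeting    text
);

import aiosql
import sqlite3

queries = aiosql.from_path("schema.sql", "sqlite3")
with sqlite3.connect("greetings.db") as conn:
    queries.create_schema(conn)   # creates the tables if they do not exist yet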

Incomplete documentation

Hi!

Tag: Question

This is precisely the way I would like to work with databases from my Python code, and I would love to use this library! However, is it in any way being maintained by the author or anyone else? There is more than one "To do" section in the documentation, and the last commit was almost a year ago. Thanks!

how to pass tuple for WHERE IN Clause

Hi,
I am implementing a FastAPI app and we have to use a WHERE IN clause, so I want to pass a tuple as a parameter to the .sql file, but it is not accepting a tuple.

I have a .sql file and am using:

--name: count_participated_tickets
select count(ticket_name) from TABLE where ticket_namein :ticket_name;

queries = aiosql.from_path(pathlib.Path(__file__).parent / "sql", "asyncpg")

Please suggest.

Quicker named parameters

For named parameter queries like this:

-- name: bulk-publish*!
-- Insert many blogs at once
insert into blogs (
    userid,
    title,
    content,
    published
)
values (
    :userid,
    :title,
    :content,
    :published
)

It seems redundant to me to have to type the columns out twice. Perhaps there's a way to automatically fill the values to the default column names when loading the queries?

Postgres -> INSERT INTO

I've run into an interesting issue. When I attempt to do an INSERT INTO using Postgres, it does not save the actual record; however, it does increase the sequence by one. When I do a RETURNING on the id, I see the new id, and that matches the sequence on the actual database. However, the record is never saved.

UPDATE and SELECT work fine, so I know the connection to the database is working.

Any idea what might be the issue? Is there a way I could see any errors occurring in aiosql that are not displayed automatically?

Other than this issue (which is obviously important) this is an amazing addition to my project, so thank you for all the work you've done on it.

UPDATE....
I've done some more testing and something even stranger is occurring. Does the code create a temporary DB in memory? What is happening is that once I create the new record, I am able to do a SELECT in my web application and the record displays. (Using PGAdmin, I cannot see the record in the DB.) However, when I stop the app and start it again, the record is gone. The above still happens: while the record is not saved, the sequence number does increment by one in the DB.
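If the driver is psycopg2 (or another PEP 249 driver that opens a transaction implicitly), this is the classic symptom of a missing commit: in PostgreSQL, sequence increments survive a rollback, so the id advances even though the row is discarded. A sketch with a hypothetical add_record query and an elided DSN:

import psycopg2

conn = psycopg2.connect("host=... dbname=... user=... password=...")
queries.add_record(conn, name="x")   # hypothetical insert query
conn.commit()                        # without this, the row is rolled back

# or let the connection context manager commit on success:
with psycopg2.connect("host=... dbname=... user=... password=...") as conn:
    queries.add_record(conn, name="x")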

pip vs poetry

ISTM that it would be convenient to make pip installations work instead of requiring Poetry, which suggests providing a basic setup.py file, which is probably needed anyway for PyPI.

Is there a special reason not to provide that file in the repo?

Alternatives to `record_classes`?

After reading some of @nackjicholson's comments in a few issues about how using record_classes was a bad idea to begin with (and is now deprecated), I'm curious what people are doing instead for converting raw records into other objects. For reference, my use cases mainly involve converting records into Pydantic models or Pandas dataframes/series.
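One lightweight pattern, sketched here with hypothetical get_users/User names and assuming the driver returns mapping-like rows (asyncpg.Record, sqlite3.Row, a dict cursor, ...): convert right after the call.

import pandas as pd
from pydantic import BaseModel

class User(BaseModel):
    user_id: int
    username: str

rows = queries.get_users(conn)                  # hypothetical query; queries/conn set up as usual
users = [User(**dict(r)) for r in rows]         # Pydantic models
df = pd.DataFrame([dict(r) for r in rows])      # or a DataFrame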

`ValueError: duplicate parameter name` when the same parameter name appears twice

aiosql 3.4.0 cannot load SQL queries when the same parameter :name appears multiple times. It did work with aiosql 3.3.1.

This can be reproduced with this example:

# aiosql_test.py
import aiosql

sql_str = "-- name: get^\n" "select * from test where foo=:foo and foo=:foo"
aiosql.from_str(sql_str, "aiosqlite")
print("OK")

Working with aiosql 3.3.1:

$ pip install "aiosql<3.4.0"
$ python3 aiosql_test.py
OK

Failing with aiosql 3.4.0:

$ pip install "aiosql==3.4.0"
$ python3 aiosql_test.py
Traceback (most recent call last):
  File "aiosql_test.py", line 4, in <module>
    aiosql.from_str(sql_str, "aiosqlite")
  File "lib/python3.9/site-packages/aiosql/aiosql.py", line 84, in from_str
    query_data = query_loader.load_query_data_from_sql(sql)
  File "lib/python3.9/site-packages/aiosql/query_loader.py", line 117, in load_query_data_from_sql
    query_data.append(self._make_query_datum(query_sql_str, ns_parts))
  File "lib/python3.9/site-packages/aiosql/query_loader.py", line 27, in _make_query_datum
    signature = self._extract_signature(sql)
  File "lib/python3.9/site-packages/aiosql/query_loader.py", line 103, in _extract_signature
    return inspect.Signature(parameters=[self] + params) if params else None
  File "lib/python3.9/inspect.py", line 2826, in __init__
    raise ValueError(msg)
ValueError: duplicate parameter name: 'foo'

asyncpg execute return is not passed

Greetings, amazing library I'm discovering, so bear with me; maybe I'm misusing it.

If I use asyncpg directly this way, the response is CREATE DATABASE:

dbpool = await get_db_pool(database_subscriber_dsn)
# response = await create_queries.create_db_test(dbpool)
response = await dbpool.execute("create database postgres_test;")

But if I try it with aiosql and the following SQL file, it returns None:

-- name: create_db_test#
create database postgres_test;

happy to send a little PR

Add support for 'fetchval' to return a single value.

Asyncpg has a fetchval method that can be used to return a column from a single row. This is useful for queries such as getting the count.

e.g.

-- name: count_rows^
SELECT COUNT(*) FROM rows WHERE param = :value

requires one of the following Python calls to get the value:

count, = queries.count_rows(value=1)
count = queries.count_rows(value=1)[0]

Whereas, if a notation for fetchval were supported, only the following would be required:

count = queries.count_rows(value=1)
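Note that aiosql's $ operator (it also shows up in the aiosqlite issue further down, backed by the adapters' select_value path) is meant for exactly this; a sketch:

-- name: count_rows$
SELECT COUNT(*) FROM rows WHERE param = :value;

count = queries.count_rows(conn, value=1)   # returns the bare value, not a row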

table remains empty after successful insert

Please, I have been trying to insert data into my database, but after it successfully inserts, the select returns zero rows. It works fine when I do the same thing with my VS Code SQL tool.

Support IN

Is it possible to write a query that contains WHERE some_label IN (:labels) and pass a list of strings as labels?

Another question:
Is it possible to have an optional parameter?
For example,
SELECT * FROM cars WHERE brand = :brand AND year=:year;
If the "year" variable is not passed, then it should become
SELECT * FROM cars WHERE brand = :brand;
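For PostgreSQL drivers, a sketch of two patterns that keep this to one query each (column names follow the question; the things table is made up; other databases need different idioms):

-- name: find-by-labels
-- :labels is bound to a Python list; ANY avoids expanding the IN list by hand.
select * from things where some_label = any(:labels);

-- name: find-cars
-- Optional parameter: pass year=None and the second clause is skipped.
select * from cars where brand = :brand and (:year is null or year = :year);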

Multiple statements don't work

-- name: up#
CREATE TABLE foo
(
	id VARCHAR(255) PRIMARY KEY
);

CREATE TABLE bar
(
	id VARCHAR(255) PRIMARY KEY
);

pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CREATE TABLE bar\n(\nid VARCHAR(255) PRIMARY KEY\n)' at line 6")

This only works if I make the second CREATE TABLE its own method.

The only caveat is that I'm using a custom DB adapter for (synchronous) MySQL which is pretty much a copy-paste of the psycopg2 adapter you have here with conn.commit() added to the insert methods.
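A sketch of the split described above: most drivers' execute() runs a single statement, so give each CREATE TABLE its own name and call them in sequence.

-- name: create_foo#
CREATE TABLE foo (id VARCHAR(255) PRIMARY KEY);

-- name: create_bar#
CREATE TABLE bar (id VARCHAR(255) PRIMARY KEY);

queries.create_foo(conn)
queries.create_bar(conn)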

Typing Extensions >= 4?

It seems that there is a requirement for typing_extensions<4, which breaks pkg_resources if typing_extensions>=4 is installed for some other package. The current version is 4.0.1.

Any plan to remove this constraint or switch to the newer one?

Python is a moving target, and core packages do not aim at backward compatibility 😢

$ not working on aiosqlite

Query

-- name: check_auth$
SELECT id from auth WHERE username=:username and password=:password;

Exception

  File "/home/eri/.local/lib/python3.9/site-packages/aiosql/adapters/aiosqlite.py", line 44, in select_value
    return result[0] if result else None

result is a dict because the row factory is set up as:

    def dict_factory(cursor, row):
        d = {}
        for idx, col in enumerate(cursor.description):
            d[col[0]] = row[idx]
        return d

Add support for writing from dataclasses.

Currently, there is support for reading database rows into dataclasses, which significantly reduces the amount of boilerplate required. But the inverse is not the case.

e.g.

@dataclass
class Article:
  title: str
  body: str
  created: datetime

  def create(self, conn):
    return queries.insert_article(conn, title=self.title, body=self.body, created=self.created)
  
  def update(self, conn):
    return queries.update_article(conn, title=self.title, body=self.body, created=self.created)

If the parameters supported dot notation, this could be simplified to

-- name: update_article
UPDATE article SET title=:article.title body=:article.body ...
def update(self, conn):
  return queries.update_article(conn, article=self)
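A workaround available today is dataclasses.asdict, sketched below; the field names must match the query's named parameters exactly (extra fields can trip the unused-parameter KeyError reported in an earlier issue):

from dataclasses import asdict

def update(self, conn):
    return queries.update_article(conn, **asdict(self))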

aiosql-mysql: sync and async mysql driver adaptor library

Hello,

I think this library is very neat and I wanted to provide async and sync MySQL adaptors.

After reading through issues #24, #44 I have created: aiosql-mysql.

This is still pretty rough: the sync version with PyMySQL is working, but the async version is not. Once I have a working async version I will publish this to PyPI and add installation instructions.

I am interested in any feedback.

How to pass parameters to SQL in

How would one go about passing parameters to a SQL in operator?

--name: select_max_datetime_by_symbols_interval
SELECT
    symbol
    ,MAX("datetime")
FROM
    prices
WHERE
    symbol in (:symbols) AND "interval" = :interval
GROUP BY symbol;

Like this?

symbol_str = ",".join(f"'{s}'" for s in symbols)  # tuple(symbols)
query_output = await db.queries.select_max_datetime_by_symbols_interval(db.pool, symbols=symbol_str, interval=interval)

Doesn't seem to work...
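Related to the IN discussion above: with asyncpg the list usually shouldn't be flattened into a quoted string at all; a sketch passing the Python list directly and matching it with ANY:

-- name: select_max_datetime_by_symbols_interval
SELECT symbol, MAX("datetime")
FROM prices
WHERE symbol = any(:symbols) AND "interval" = :interval
GROUP BY symbol;

query_output = await db.queries.select_max_datetime_by_symbols_interval(
    db.pool, symbols=list(symbols), interval=interval
)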
