tiangolo / pydantic-sqlalchemy
Tools to convert SQLAlchemy models to Pydantic models
License: MIT License
Using sqlalchemy_to_pydantic helped me a lot, but now I need the other way around. Is it supposed to be part of this library?
Using SQLModel is not a proper alternative for me, because it makes things harder, not easier, right now.
While testing your project I ran into this issue. I'm not sure where column.type.python_type comes from, because when I checked the SQLAlchemy table declaration there was no reference to it.
Traceback (most recent call last):
File "app/db/tables.py", line 31, in <module>
PydanticTest = sqlalchemy_to_pydantic(Test)
File "/home/vagrant/venv/project/lib/python3.8/site-packages/pydantic_sqlalchemy/main.py", line 21, in sqlalchemy_to_pydantic
python_type = column.type.python_type
File "/home/vagrant/venv/project/lib/python3.8/site-packages/sqlalchemy/sql/type_api.py", line 409, in python_type
raise NotImplementedError()
NotImplementedError
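The failing attribute is defined on SQLAlchemy's base TypeEngine, where python_type is a property that raises NotImplementedError unless a concrete type overrides it, which is why it never appears in the table declaration. A minimal stdlib-only sketch of the pattern (the classes here are stand-ins, not the real SQLAlchemy ones):

```python
# Stand-ins for SQLAlchemy's classes: on the base type, python_type is
# a property that raises NotImplementedError, so any column type that
# never overrides it breaks the conversion at lookup time.
class FakeTypeEngine:
    @property
    def python_type(self):
        raise NotImplementedError()

class FakeCustomType(FakeTypeEngine):
    pass  # does not override python_type

def safe_python_type(column_type):
    """Return the mapped Python type, or None when it is undefined."""
    try:
        return column_type.python_type
    except NotImplementedError:
        return None

print(safe_python_type(FakeCustomType()))  # None
```

A workaround along these lines (catching the exception per column, or defining python_type on the custom type) avoids the hard failure.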
Hi there,
thanks for all your efforts in getting this model translation to work! Really appreciated.
Upon a quick search (there really seems to be an entire zoo of possible ORMs + pydantic + graphene models, etc.) I came upon this:
https://github.com/kolypto/py-sa2schema
From the descriptions and listed examples it seems they are trying to achieve the same thing.
Now the hard question: which one should I use?
Any opinions on pros & cons?
Thanks,
Carsten
Hello, thanks for the wonderful open source work!
Closing this, as I made a mistake.
For the create method of a SQL model I need to exclude the id of a model, so I'd like to call the sqlalchemy_to_pydantic function twice on the same model, for example like this:
PyUser = sqlalchemy_to_pydantic(User)
PyUserCreate = sqlalchemy_to_pydantic(User, exclude=['id'])
Now the server and everything starts just fine, but when I try to fetch the openapi.json, I get a KeyError because the first model is not in the model map.
I think line 36 in main.py is at fault:
I think line 36 in main.py is at fault
pydantic_model = create_model(
    db_model.__name__, __config__=config, **fields  # type: ignore
)
The db_model.__name__ is the same both times, so it must be overwriting some key (disclaimer: I have no idea about the inner workings of pydantic, but it seems to build some kind of global map of all models?). I suggest adding an optional name parameter to the function and using it in that line.
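The collision can be reproduced without FastAPI or pydantic: both calls produce classes named after db_model.__name__, and anything keyed by class name keeps only one of them. A stdlib sketch of the problem and of the proposed name parameter (the names here are illustrative, not the library's API):

```python
# Two dynamically created classes share the name "User"; a registry
# keyed by __name__ silently drops one of them.
def make_model(name, fields):
    return type(name, (), fields)

UserRead = make_model("User", {"id": 0, "name": ""})
UserCreate = make_model("User", {"name": ""})
registry = {m.__name__: m for m in (UserRead, UserCreate)}
print(len(registry))  # 1 -- the second model shadows the first

# With an explicit name parameter, as suggested, both survive:
UserCreate = make_model("UserCreate", {"name": ""})
registry = {m.__name__: m for m in (UserRead, UserCreate)}
print(len(registry))  # 2
```

This mirrors why FastAPI's model_name_map lookup fails: two distinct model classes, one name.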
MWE (main.py):
from fastapi import FastAPI
from pydantic_sqlalchemy import sqlalchemy_to_pydantic
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, sessionmaker
Base = declarative_base()
engine = create_engine("sqlite://", echo=True)
class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
PydanticUser1 = sqlalchemy_to_pydantic(User)
PydanticUser2 = sqlalchemy_to_pydantic(User, exclude=["id"])
Base.metadata.create_all(engine)
LocalSession = sessionmaker(bind=engine)
db: Session = LocalSession()
app = FastAPI()
@app.get("/user1", response_model=PydanticUser1)
def get_user1():
    return {"Hello": "World"}

@app.get("/user2", response_model=PydanticUser2)
def get_user2():
    return {"Hello": "World"}
Output (uvicorn main:app --reload):
INFO: Started server process [149476]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:43332 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:43332 - "GET /openapi.json HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "./venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 394, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "./venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "./venv/lib/python3.9/site-packages/fastapi/applications.py", line 199, in __call__
await super().__call__(scope, receive, send)
File "./venv/lib/python3.9/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "./venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "./venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "./venv/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "./venv/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "./venv/lib/python3.9/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "./venv/lib/python3.9/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "./venv/lib/python3.9/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "./venv/lib/python3.9/site-packages/fastapi/applications.py", line 152, in openapi
return JSONResponse(self.openapi())
File "./venv/lib/python3.9/site-packages/fastapi/applications.py", line 130, in openapi
self.openapi_schema = get_openapi(
File "./venv/lib/python3.9/site-packages/fastapi/openapi/utils.py", line 354, in get_openapi
definitions = get_model_definitions(
File "./venv/lib/python3.9/site-packages/fastapi/utils.py", line 28, in get_model_definitions
model_name = model_name_map[model]
KeyError: <class 'pydantic.main.User'>
Currently the conversion inspects the impl attribute of the column's type to derive the field type for Pydantic, which may be a different Python type from what the SQLAlchemy model actually uses (via process_bind_param, process_result_value, etc.).
For example, I have this custom, BLOB-backed UUID type, which I use with SQLite:
import uuid
from sqlalchemy.types import BLOB, TypeDecorator
class UUID(TypeDecorator):
    impl = BLOB

    def load_dialect_impl(self, dialect):
        return dialect.type_descriptor(BLOB(16))

    def process_bind_param(self, value, dialect):
        if value is None:
            return value
        if not isinstance(value, uuid.UUID):
            value = uuid.UUID(value)
        return value.bytes

    def process_result_value(self, value, dialect):
        if value is None:
            return value
        if isinstance(value, bytes):
            return uuid.UUID(bytes=value)
        if isinstance(value, uuid.UUID):
            return value
        raise TypeError(type(value))
This correctly gets me a uuid.UUID instance in and out of the DB, but the corresponding Pydantic model uses a bytes type.
It seems SQLAlchemy doesn't have a mechanism to directly specify the mapped Python type, otherwise we would be able to write something like this:
...
class UUID(TypeDecorator):
    impl = BLOB
    python_type = uuid.UUID
...
And then adapt the logic in sqlalchemy_to_pydantic
:
if hasattr(column.type, "python_type"):
    python_type = column.type.python_type
elif hasattr(column.type, "impl"):
    if hasattr(column.type.impl, "python_type"):
        python_type = column.type.impl.python_type
    elif hasattr(column.type, "python_type"):
        python_type = column.type.python_type
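One caveat with hasattr-based checks like the ones above (an observation, not part of the original proposal): in Python 3, hasattr() only swallows AttributeError, so probing a python_type property that raises NotImplementedError propagates the exception instead of returning False. A plain class attribute, as in the UUID example, sidesteps this:

```python
# hasattr() treats only AttributeError as "attribute missing"; other
# exceptions raised by a property getter propagate (Python 3 behavior).
class Raising:
    @property
    def python_type(self):
        raise NotImplementedError()

class ClassAttr(Raising):
    python_type = bytes  # class attribute shadows the inherited property

def probe(obj):
    try:
        return hasattr(obj, "python_type")
    except NotImplementedError:
        return "raised"

print(probe(Raising()))    # prints: raised
print(probe(ClassAttr()))  # prints: True
```

So the first hasattr branch works for types that declare python_type as a class attribute, but would need a try/except to be safe against the base property.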
It would also allow monkey-patching any existing or third-party column types (also see #6).
(Or perhaps the more correct approach would be to create a custom impl
? I'm not sure, seems that's why we have process_bind_param
, process_result_value
, copy
, etc)
I used a TypeDecorator SQLAlchemy class (see uuid_type.py below) to create a handler of Python uuid.UUID for SQLAlchemy, inspired by sqlalchemy_utils.types.uuid.UUIDType and providing a dialect-agnostic implementation using load_dialect_impl() associated with impl = sqlalchemy.types.TypeEngine.
Extract from the documentation of TypeDecorator:
The class-level impl attribute is required, and can reference any TypeEngine class. Alternatively, the load_dialect_impl() method can be used to provide different type classes based on the dialect given; in this case, the impl variable can reference TypeEngine as a placeholder.
Then, I called a SQLAlchemy base model using the type decorator.
uuid_type.py
import uuid
from sqlalchemy import types, util
from sqlalchemy.dialects import mssql, postgresql
class UUIDType(types.TypeDecorator):
    """
    Stores a UUID in the database natively when it can and falls back to
    a BINARY(16) or a CHAR(32) when it can't.
    """
    impl = types.TypeEngine
    python_type = uuid.UUID
    cache_ok = True

    def __init__(self, binary=True, native=True):
        """
        :param binary: Whether to use a BINARY(16) or CHAR(32) fallback.
        """
        self.binary = binary
        self.native = native

    def __repr__(self):
        return util.generic_repr(self)

    def load_dialect_impl(self, dialect):
        if self.native and dialect.name in ('postgresql', 'cockroachdb'):
            # Use the native UUID type.
            return dialect.type_descriptor(postgresql.UUID())
        if dialect.name == 'mssql' and self.native:
            # Use the native UNIQUEIDENTIFIER type.
            return dialect.type_descriptor(mssql.UNIQUEIDENTIFIER())
        else:
            # Fallback to either a BINARY or a CHAR.
            kind = types.BINARY(16) if self.binary else types.CHAR(32)
            return dialect.type_descriptor(kind)

    @staticmethod
    def _coerce(value):
        if value and not isinstance(value, uuid.UUID):
            try:
                value = uuid.UUID(value)
            except (TypeError, ValueError):
                value = uuid.UUID(bytes=value)
        return value

    def process_literal_param(self, value, dialect):
        return "'{}'".format(value) if value else value

    def process_bind_param(self, value, dialect):
        if value is None:
            return value
        if not isinstance(value, uuid.UUID):
            value = self._coerce(value)
        if self.native and dialect.name in (
            'postgresql',
            'mssql',
            'cockroachdb'
        ):
            return str(value)
        return value.bytes if self.binary else value.hex

    def process_result_value(self, value, dialect):
        if value is None:
            return value
        if self.native and dialect.name in (
            'postgresql',
            'mssql',
            'cockroachdb'
        ):
            if isinstance(value, uuid.UUID):
                # Some drivers convert PostgreSQL's uuid values to
                # Python's uuid.UUID objects by themselves
                return value
            return uuid.UUID(value)
        return uuid.UUID(bytes=value) if self.binary else uuid.UUID(value)
model.py
from sqlalchemy import Column
from sqlalchemy.orm import declarative_base
from pydantic_sqlalchemy import sqlalchemy_to_pydantic
from .uuid_type import UUIDType
Base = declarative_base()
class ExampleMapper(Base):
    __tablename__ = 'example'  # required by declarative mapping; name assumed
    uuid = Column(UUIDType, primary_key=True)

ExampleModel = sqlalchemy_to_pydantic(ExampleMapper)
Running model.py raises a RuntimeError, because sqlalchemy_to_pydantic attempts to infer the type of the decorator from impl (whose value is TypeEngine, and whose associated impl is a property) instead of python_type (which is correctly set).
The following patch proposal handles this specific case and falls back to the python_type branch when TypeEngine is used as impl.
5a6
> from sqlalchemy import types
25c26,28
< if hasattr(column.type, "impl"):
---
> # TypeEngine is a placeholder when impl is abstract
> if (hasattr(column.type, "impl")
> and column.type.impl is not types.TypeEngine):
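The patched guard can be illustrated with stand-in classes: when impl is the abstract placeholder class itself, the condition skips the impl branch so the lookup can fall through to python_type (these classes are stand-ins, not SQLAlchemy's real ones):

```python
class FakeTypeEngine:        # placeholder base, like types.TypeEngine
    pass

class FakeBLOB(FakeTypeEngine):
    pass

class ConcreteDecorator:     # impl is a real type: inspect impl
    impl = FakeBLOB

class PlaceholderDecorator:  # impl is the placeholder: use python_type
    impl = FakeTypeEngine
    python_type = bytes

def should_inspect_impl(column_type):
    # Mirrors the patched condition: only follow impl when it is not
    # the abstract TypeEngine placeholder.
    return (hasattr(column_type, "impl")
            and column_type.impl is not FakeTypeEngine)

print(should_inspect_impl(ConcreteDecorator))     # True
print(should_inspect_impl(PlaceholderDecorator))  # False
```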
Looking at the script, I would expect orm_mode to be turned on by default, but when using the model I am getting the usual "AttributeError: 'dict' object has no attribute 'email'"?
class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True, index=True, autoincrement=True, unique=True)
    email = Column(String, index=True, unique=True)
    organization_id = Column(Integer, ForeignKey("organizations.id"), index=True, nullable=True)
    role = Column(ENUM(Role), nullable=False, default=Role.ADMIN)

UserPD = sqlalchemy_to_pydantic(User)

[...]

@app.get("/users")
def getUsers(db: Session = Depends(get_db)) -> List[UserPD]:
    users = db.query(User).all()
    users = [UserPD.from_orm(u) for u in users]
    # this one would fail
    print(users[0].email)
    # this one would work
    print(users[0]["email"])
I think that depth would be a useful feature when serializing a model that has many relationships, like the DRF serializer's depth option or similar options in other libraries built on Pydantic.
sqlacodegen is a tool which reads the structure of an existing database and generates the appropriate SQLAlchemy model code:
https://github.com/agronholm/sqlacodegen
So if pydantic-sqlalchemy could work with this package, it could be really AWESOME, as it would reduce the need to predefine the SQLAlchemy models: they would be auto-generated by sqlacodegen.
I tried playing with it and somehow combining it with pydantic-sqlalchemy and ... it didn't work :(
I call the script and generate the models below:
/usr/local/bin/sqlacodegen --outfile models.py postgresql://admin:[email protected]:5432/mydata
# coding: utf-8
from sqlalchemy import Boolean, Column, Date, DateTime, ForeignKey, Integer, Numeric, SmallInteger, String, text
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
metadata = Base.metadata
class Addres(Base):
    __tablename__ = 'address'

    address_id = Column(Integer, primary_key=True, server_default=text("nextval('address_address_id_seq'::regclass)"))
    address = Column(String(50), nullable=False)
    address2 = Column(String(50))
    district = Column(String(20), nullable=False)
    city_id = Column(SmallInteger, nullable=False, index=True)
    postal_code = Column(String(10))
    phone = Column(String(20), nullable=False)
    last_update = Column(DateTime, nullable=False, server_default=text("now()"))

class Customer(Base):
    __tablename__ = 'customer'

    customer_id = Column(Integer, primary_key=True, server_default=text("nextval('customer_customer_id_seq'::regclass)"))
    store_id = Column(SmallInteger, nullable=False, index=True)
    first_name = Column(String(45), nullable=False)
    last_name = Column(String(45), nullable=False, index=True)
    email = Column(String(50))
    address_id = Column(ForeignKey('address.address_id', ondelete='RESTRICT', onupdate='CASCADE'), nullable=False, index=True)
    activebool = Column(Boolean, nullable=False, server_default=text("true"))
    create_date = Column(Date, nullable=False, server_default=text("('now'::text)::date"))
    last_update = Column(DateTime, server_default=text("now()"))
    active = Column(Integer)

    address = relationship('Addres')

class Payment(Base):
    __tablename__ = 'payment'

    payment_id = Column(Integer, primary_key=True, server_default=text("nextval('payment_payment_id_seq'::regclass)"))
    customer_id = Column(ForeignKey('customer.customer_id', ondelete='RESTRICT', onupdate='CASCADE'), nullable=False, index=True)
    staff_id = Column(SmallInteger, nullable=False, index=True)
    rental_id = Column(Integer, nullable=False, index=True)
    amount = Column(Numeric(5, 2), nullable=False)
    payment_date = Column(DateTime, nullable=False)

    customer = relationship('Customer')
I played around with the above models and was able to generate the Pydantic classes and read the data from the DB by adding the SQLAlchemy-to-Pydantic conversion:
PydanticCustomer = sqlalchemy_to_pydantic(Customer)
PydanticPayment = sqlalchemy_to_pydantic(Payment)
PydanticAddress = sqlalchemy_to_pydantic(Addres)

class PydanticCustomerAll(PydanticCustomer):
    payments: List[PydanticPayment] = []
    addresses: List[PydanticAddress] = []
#metadata = Base.metadata
Base.metadata.create_all(postgress_dvd_engine)
Fetching the data:
def demo():
    customer = db.query(Customer).first()
    pydanticcustomerall = PydanticCustomerAll.from_orm(customer)
    data = pydanticcustomerall.dict()
    return data
The JSON is created, but without the data of the nested objects (payments, addresses):
{
"customer_id": 524,
"store_id": 1,
"first_name": "Jared",
"last_name": "Ely",
"email": "[email protected]",
"address_id": 530,
"activebool": true,
"create_date": "2006-02-14",
"last_update": "2013-05-26T14:49:45.738000",
"active": 1,
"payments": [],
"addresses": []
}
What is missing here? It seems sqlacodegen generates an accurate structure and relations.
It would be really interesting to know whether it is possible to combine it with pydantic-sqlalchemy:
first, to make it work;
second, to create the models on the fly (without the need to prepare a models.py in advance).
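A likely cause of the empty nested fields (an assumption based on the snippets above, not verified against the database): from_orm reads attributes by field name, and the generated mappers expose address on Customer and customer on Payment, but no payments or addresses attributes on Customer, so pydantic falls back to the declared defaults ([]). Adding back-references with matching names (e.g. backref or back_populates) should populate them. A stdlib stand-in of the lookup:

```python
# from_orm-style lookup: read the attribute named after the field,
# fall back to the field default when the ORM object lacks it.
class FakeCustomer:
    address = "an Addres row"   # relationship present under this name

def orm_field(obj, name, default):
    return getattr(obj, name, default)

customer = FakeCustomer()
print(orm_field(customer, "payments", []))   # [] -- attribute missing
print(orm_field(customer, "addresses", []))  # [] -- attribute missing
print(orm_field(customer, "address", None))  # found under its real name
```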
Hi there!
I just wanted to know if there's interest in adding support for additional properties of Pydantic fields from SQLAlchemy's Column. I would want that!
There's issue #7, which is related. Additionally, there's Column.doc and Column.comment, which could populate Field.description.
One example use case is to improve the automatic documentation when using FastAPI, since the description is used to describe schemas in OpenAPI spec.
Here's an example, taken from the default content in https://editor.swagger.io/: "pet status in the store" is the description of the status field.
We could decide what should be supported and how. I think I could come up with a PR if that's the case.
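One possible shape for the mapping, sketched with stand-in classes (the real change would pass the value into pydantic's Field(description=...); preferring doc over comment is my assumption, not a decided design):

```python
class FakeColumn:
    def __init__(self, doc=None, comment=None):
        self.doc = doc
        self.comment = comment

def field_description(column):
    # doc is Python-side documentation; comment is stored in the DB
    # schema. Either could feed Field.description.
    return column.doc if column.doc is not None else column.comment

print(field_description(FakeColumn(doc="pet status in the store")))
print(field_description(FakeColumn(comment="stored schema comment")))
print(field_description(FakeColumn()))  # None
```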
Hi, I had the same problem with maintaining two sets of models. I took inspiration from encode/orm (not finished) and ormantic (not maintained) and started ormar, which uses encode/databases and SQLAlchemy Core under the hood and can be used in async mode. @tiangolo feel free to check it out and let me know if you find it useful!
I have created my models from a pre-existing MySQL database and everything works fine, but when I add a DateTime column, str gives an error.
Error msg: str type expected (type=type_error.str)
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.ext.asyncio import create_async_engine
from pydantic_sqlalchemy import sqlalchemy_to_pydantic
engine = create_async_engine('postgresql+asyncpg://postgres:[email protected]/postgres')
session = AsyncSession(engine)
stmt = select(Subjob).where(Subjob.id.in_([1, 2]))
result = await session.execute(stmt)
sj_model = sqlalchemy_to_pydantic(Subjob)
data = [sj_model.from_orm(orm) for orm in result.scalars()]
ValidationError Traceback (most recent call last)
<ipython-input-47-b50afa27801d> in <module>
----> 1 data = [sj_model.from_orm(orm) for orm in result.scalars()]
<ipython-input-47-b50afa27801d> in <listcomp>(.0)
----> 1 data = [sj_model.from_orm(orm) for orm in result.scalars()]
/usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.from_orm()
ValidationError: 1 validation error for Subjob
hosts
value is not a valid dict (type=type_error.dict)
my sqlalchemy model code of Subjob
...
class Subjob(PK, Job):
    ...
    hosts = Column(JSONB, nullable=False)
    ...
The hosts column is a JSONB type column which will contain Python data in a format like below:
[
{
"ip": "172.17.0.2",
"port": 5432,
"user": "postgres",
"password": "postgres"
},
{
"ip": "172.17.0.3",
"port": 5432,
"user": "postgres",
"password": "postgres"
}
]
The sj_model translates the JSONB as a dict type in Python, which may cause this error.
In [49]: sj_model.__fields__['hosts']
Out[49]: ModelField(name='hosts', type=dict, required=True)
Enabling List and Dict types may fix this.
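The mismatch is easy to see in isolation: JSONB maps to dict, but the stored document here is a list, so validation against dict fails. Mapping JSON columns to typing.Any, or to a Union of dict and list, would accept both shapes (a sketch of the idea, not the library's code):

```python
import json
from typing import Any, Dict, List, Union

# What a JSON/JSONB column can actually hold at the top level.
JsonValue = Union[Dict[str, Any], List[Any]]

payload = json.loads('[{"ip": "172.17.0.2", "port": 5432}]')
print(isinstance(payload, dict))          # False -- why type=dict fails
print(isinstance(payload, (dict, list)))  # True  -- a Union-style check
```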
import peewee as pw

class Order(pw.Model):
    project_id = pw.CharField(null=True, max_length=255, verbose_name="")
    order_id = pw.CharField(null=True, max_length=255, index=True, verbose_name="")

pydanticA = sqlalchemy_to_pydantic(Order)
Hi Tiangolo
I was wondering if there was a way to generate a new .py file from the pydantic-sqlalchemy converter. That would be a great feature to add to your script, but I may not have figured it out yet.
The required input from the user is a nested JSON:
{'customer_no': 2, 'subscriber': [{'subscriber_no': 2, 'is_active': False}, {'subscriber_no': 1, 'is_active': False}]}
Expected:
The SQLAlchemy ORM will break this JSON apart and insert it into the customers and subscriber tables, where the relation is one-to-many.
Issues:
How can I define the Base class so it will not expect the id column, but exactly as:
{'customer_no': 2, 'subscriber': [{'subscriber_no': 2, 'is_active': False}, {'subscriber_no': 1, 'is_active': False}]}
I face an issue defining the class when I remove fields such as "id" (which does not exist in the expected JSON); it seems those fields are required, but removing them also changes the expected data to:
{
"id": 0,
"customer_no": 0,
"subscriber": [
{
"customer_no": 0,
"subscriber_no": 0
}
]
}
Second, what am I doing wrong?
I expect to get 2 Pydantic classes plus one class which I need to define myself, PydanticCustomerWithSubscriberes, which describes the one-to-many relation between the customer and subscriber blocks/tables.
Then, inside FastAPI, I assign the input (JSON) to this class, then generate the ORM model which I use to push to the DB.
See below: I added pydantic-sqlalchemy as part of FastAPI.
Models:
class CustomerModel(Base):
    __tablename__ = 'customer'
    id = Column(Integer, primary_key=True, index=True)
    customer_no = Column(Integer)
    subscriber = relationship("SubscriberModel", back_populates="owner")

class SubscriberModel(Base):
    __tablename__ = 'subscriber'
    id = Column(Integer, primary_key=True, index=True)
    customer_no = Column(Integer, ForeignKey("customer.id"))
    subscriber_no = Column(Integer)
    owner = relationship("CustomerModel", back_populates="subscriber")
added the Pydantic models and route:
PydanticCustomer = sqlalchemy_to_pydantic(CustomerModel)
PydanticSubscriber = sqlalchemy_to_pydantic(SubscriberModel)

class PydanticCustomerWithSubscriberes(PydanticCustomer):
    subscriber: List[PydanticSubscriber] = None

@customer_router.post("/customer/")
def overloaded_create_customer(customer: PydanticCustomerWithSubscriberes, db: Session = Depends(get_db)):
    db_customer = CustomerModel(**dict(customer))
    db.add(db_customer)
    db.commit()
    db.refresh(db_customer)
    return db_customer
Getting error:
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 1675, in emit_backref_from_collection_append_event
child_state, child_dict = instance_state(child), instance_dict(child)
AttributeError: 'SubscriberModel' object has no attribute '_sa_instance_state'
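A likely cause of this traceback (my reading of the snippet, not a verified fix): CustomerModel(**dict(customer)) hands the subscriber relationship the raw list of dicts from the Pydantic model, but SQLAlchemy relationships expect mapped instances (objects carrying _sa_instance_state). The nested dicts need to be converted to SubscriberModel objects first; sketched with the snippet's names as plain stand-in classes so it runs standalone:

```python
# Split the payload and rebuild the nested objects before constructing
# the ORM instance (stand-in classes, not the real SQLAlchemy models).
class SubscriberModel:
    def __init__(self, **kw):
        self.__dict__.update(kw)

class CustomerModel:
    def __init__(self, **kw):
        self.__dict__.update(kw)

payload = {"customer_no": 2,
           "subscriber": [{"subscriber_no": 2, "is_active": False},
                          {"subscriber_no": 1, "is_active": False}]}

data = dict(payload)
subs = [SubscriberModel(**s) for s in data.pop("subscriber")]
db_customer = CustomerModel(**data, subscriber=subs)

print(all(isinstance(s, SubscriberModel) for s in db_customer.subscriber))
```

With the real models, the same pop-and-rebuild step before db.add() should avoid assigning plain dicts to the relationship.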
hi there :)
PydanticUser = sqlalchemy_to_pydantic(User)

class PydanticUser2(sqlalchemy_to_pydantic(User)):
    pass

print(PydanticUser, PydanticUser2, User)
the output is:
<class 'User'> <class '__main__.PydanticUser2'> <class '__main__.User'>
If we directly use sqlalchemy_to_pydantic, we actually create another User class in the namespace, which may be confusing.
Currently this package is pinned to pydantic<2.0.0, but Pydantic 2.0 has been out for a few months.
Similar to the exclude parameter of sqlalchemy_to_pydantic(), it would be useful to be alternately able to specify an include list of columns. This would be convenient when we have a model with say 30 columns of which only 10 need to be specified when creating a new item. This would be similar to the Marshmallow "only" option.
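The filtering itself is straightforward; here is a sketch of how an include parameter could sit next to exclude (a hypothetical signature, not the library's current API):

```python
def select_columns(columns, include=None, exclude=()):
    """Keep only `include` when given, otherwise drop `exclude`."""
    if include is not None:
        return [c for c in columns if c in include]
    return [c for c in columns if c not in exclude]

cols = ["id", "name", "email", "created_at"]
print(select_columns(cols, include=["name", "email"]))
print(select_columns(cols, exclude=["id"]))
```

In sqlalchemy_to_pydantic, the same branch would decide which mapper columns become Pydantic fields.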
Hi, thanks for this function just as I started putting together one of my own. Regarding this code:
pydantic-sqlalchemy/pydantic_sqlalchemy/main.py
Lines 24 to 30 in 8667e21
Have you looked at the typing.get_type_hints() function? Perhaps
else:
    python_types = typing.get_type_hints(db_model)
    python_type = python_types[column.name]
works for you as well?
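For reference, this is what typing.get_type_hints() returns on a class with annotated attributes (a plain class here; whether a SQLAlchemy model carries usable annotations depends on how it is declared, and classic Column-assignment models may have none):

```python
import typing

class AnnotatedUser:
    id: int
    name: str

hints = typing.get_type_hints(AnnotatedUser)
print(hints)  # {'id': <class 'int'>, 'name': <class 'str'>}
```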
class User(db.Model):
    __tablename__ = 'user'
    id = db.Column(db.Integer, autoincrement=True, primary_key=True)
    first_name = db.Column(db.String(40), nullable=False)
    last_name = db.Column(db.String(40), nullable=False)

UserSchema = sqlalchemy_to_pydantic(User, exclude=['id'])

UserSchema doesn't enforce the maximum length on the first_name and last_name fields.
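The generated model only receives str, so the length in String(40) is dropped. Carrying it into a constrained type (pydantic's constr(max_length=...) would be the natural target) is one option; the check itself, sketched with a stand-in type:

```python
# Stand-in for sqlalchemy's String: only the length attribute matters here.
class FakeString:
    def __init__(self, length=None):
        self.length = length

def within_max_length(column_type, value):
    """True when the value fits the column's declared length (if any)."""
    return column_type.length is None or len(value) <= column_type.length

print(within_max_length(FakeString(40), "x" * 40))  # True
print(within_max_length(FakeString(40), "x" * 41))  # False
```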
How can I generate the Pydantic class definition as source code, so we can copy it and then manually edit the generated Pydantic class?
from pprint import pprint as pp
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import column_property
from sqlalchemy import select, func, literal_column
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
app.config['SQLALCHEMY_ECHO'] = False
db = SQLAlchemy(app)
from sqlalchemy import Column, Integer, String
class Book(db.Model):
    __tablename__ = 'books'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    author_id = Column(Integer)

class Author(db.Model):
    __tablename__ = 'authors'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    books_count = column_property(select([func.count()]).where(Book.author_id == id))
db.create_all()
author = Author(name='Andrea')
db.session.add(author)
db.session.commit()
pp(author)
book = Book(name='Random book name', author_id=author.id)
db.session.add(book)
db.session.commit()
pp(book)
pp(("Andrea's books_count", author.books_count))
from pydantic_sqlalchemy import sqlalchemy_to_pydantic
pp(sqlalchemy_to_pydantic(Book))
pp(sqlalchemy_to_pydantic(Author))
Calling sqlalchemy_to_pydantic(Author) raises an error:
Traceback (most recent call last):
File "/home/andreossido/.local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 747, in __getattr__
return getattr(self.comparator, key)
AttributeError: 'Comparator' object has no attribute 'default'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "script.py", line 49, in <module>
pp(sqlalchemy_to_pydantic(Author))
File "/home/andreossido/.local/lib/python3.8/site-packages/pydantic_sqlalchemy/main.py", line 32, in sqlalchemy_to_pydantic
if column.default is None and not column.nullable:
File "/home/andreossido/.local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 749, in __getattr__
util.raise_(
File "/home/andreossido/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
AttributeError: Neither 'Label' object nor 'Comparator' object has an attribute 'default'
I also tried with the code below:
column_property(db.Column(db.Integer), select([func.count()]).where(Book.author_id == id))
With the Integer column, sqlalchemy_to_pydantic doesn't throw exceptions, but author.books_count is wrong, because the query selects the field authors.books_count (which does not exist).
class TestSchema(Base):
    __tablename__ = 'test'
    id = Column(INTEGER(10, unsigned=True), primary_key=True)
    active = Column(Boolean, default=True, nullable=False)

Test = sqlalchemy_to_pydantic(TestSchema)
Test(id=1)
# Test(id=1)
Test(id=1).dict()
# {'id': 1, 'active': None}
active should be True.
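What seems to happen (my reading, not verified against the library's internals): Column(default=True) is a client-side default that SQLAlchemy applies at INSERT time, while the conversion only looks at nullability, so a non-nullable column with a default becomes an optional field defaulting to None. Surfacing column.default.arg as the field default would yield active=True; sketched with stand-ins:

```python
# Stand-ins for sqlalchemy's Column/ColumnDefault, just enough to show
# the proposed default resolution.
class FakeColumnDefault:
    def __init__(self, arg):
        self.arg = arg

class FakeColumn:
    def __init__(self, default=None, nullable=True):
        self.default = default
        self.nullable = nullable

def field_default(column):
    # Proposed: surface the scalar default to the Pydantic field;
    # otherwise keep the current required/optional behavior
    # (... stands for "required" in pydantic terms).
    if column.default is not None:
        return column.default.arg
    return ... if not column.nullable else None

active = FakeColumn(default=FakeColumnDefault(True), nullable=False)
print(field_default(active))  # True
```

Callable and server-side defaults would need extra handling; this covers only plain scalar defaults.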
Hi,
I like this package, and personally, I used it twice in my little projects.
There are some pull requests (including mine) that are not replied to.
Is this project no longer maintained?
Thanks.