vwkd / denokv-graphql
GraphQL bindings for Deno KV
Home Page: https://deno.land/x/denokv_graphql
License: MIT License
Support pagination for lists. Probably Relay's pagination spec, i.e. `first` / `last` and `cursor`. But nodes / connections add two nesting levels to the schema, which increases complexity.
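A minimal sketch of what Relay-style cursors could look like, assuming a cursor is just the base64-encoded row id. `encodeCursor`, `decodeCursor`, and `paginate` are hypothetical names, not part of the library:

```typescript
// Opaque cursor helpers for Relay-style pagination (hypothetical sketch).
// A cursor here is simply the base64-encoded row id, so `after` arguments
// can be mapped back to a position in the list.

function encodeCursor(id: string): string {
  return btoa(id);
}

function decodeCursor(cursor: string): string {
  return atob(cursor);
}

// Slice a list of rows according to Relay's `first` / `after` arguments.
function paginate<T extends { id: string }>(
  rows: T[],
  first: number,
  after?: string,
): { edges: { node: T; cursor: string }[]; hasNextPage: boolean } {
  const start = after
    ? rows.findIndex((r) => r.id === decodeCursor(after)) + 1
    : 0;
  const slice = rows.slice(start, start + first);
  return {
    edges: slice.map((node) => ({ node, cursor: encodeCursor(node.id) })),
    hasNextPage: start + first < rows.length,
  };
}
```

In a real resolver the slicing would happen against KV reads rather than an in-memory array, but the cursor encoding would look the same.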
Make the `versionstamp` field in the delete input optional? Would allow deleting inconsistently; the result can't be null then, but the result type can't be declared non-null in general...
Currently references aren't updated upon deletion and are left dangling.
How to handle this? How to find all references to and from the deleted row? What to do with circular dependencies?
Or leave finding the ids to the user: instead accept multiple ids and versionstamps and make one atomic transaction, like in #5
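The batched variant could be sketched like this, with hypothetical helper names; with an actual `Deno.Kv` handle the output would feed `atomic().check(...)` and `.delete(...)` before a single `commit()`:

```typescript
// Hypothetical sketch: turn user-supplied (id, versionstamp) pairs into the
// check and delete lists for a single Deno KV atomic transaction, so the
// whole batch fails if any row changed in the meantime.

interface DeleteInput {
  id: string;
  versionstamp: string;
}

function buildAtomicDelete(table: string, rows: DeleteInput[]) {
  return {
    // every row's versionstamp must still match, or the whole commit fails
    checks: rows.map((r) => ({
      key: [table, r.id],
      versionstamp: r.versionstamp,
    })),
    deletes: rows.map((r) => ({ key: [table, r.id] })),
  };
}
```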
Improve versionstamp logic
Should an option for eventually consistent queries be exposed? Would it be for nested queries?
What would eventually consistent mean for assembled rows? Can only the root be read eventually consistently?
Should update an existing record and fail if it doesn't yet exist. Allow partial updates, i.e. in the schema all columns are optional, but at query time at least one must be provided or it throws an invalid input error.
How should it update reference columns? Replace the entry/ies or add to them? Should references be kept in a separate table?
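The at-least-one-column rule could be validated like this (hypothetical helper, not the library's API):

```typescript
// Hypothetical validation for a partial update: all columns are optional in
// the schema, but at query time at least one must actually be set.
// Returns only the columns that were provided, ready to merge into the row.

function validatePartialUpdate(
  input: Record<string, unknown>,
): Record<string, unknown> {
  const columns = Object.entries(input).filter(([, v]) => v !== undefined);
  if (columns.length === 0) {
    throw new Error("invalid input: update must set at least one column");
  }
  return Object.fromEntries(columns);
}
```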
Maybe allow `@defer` and `@stream`? Probably disallow.
The insert mutation should validate that reference ids either are set in the same transaction or exist already. Currently it allows inserting reference ids to non-existent rows, which would then throw a database corruption error when queried.
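A sketch of the first step, collecting the reference ids to check. `referenceColumns` is assumed to come from the schema, and the names are hypothetical:

```typescript
// Hypothetical sketch: collect the reference ids in an insert input so each
// can be verified to either exist already or be inserted in the same atomic
// transaction. `referenceColumns` would be derived from the schema.

function collectReferenceIds(
  input: Record<string, unknown>,
  referenceColumns: string[],
): string[] {
  return referenceColumns
    .map((col) => input[col])
    .filter((v): v is string => typeof v === "string");
}
```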
Store each column individually at `[tableName, rowId, columnName]` instead of the whole row as an object at `[tableName, rowId]`? Would allow `get`ing only the columns that are actually needed, but would require multiple `get`s for even the simplest queries. `[tableName, rowId, columnName, 1n]`? Could use Deno KV's `list` to do pagination natively, e.g. start, end, limit. But it would need more reads, and separate reads make the whole query not atomically consistent within itself.
Storing references in a separate table allows them to be updated. With the update mutation one could otherwise only replace the whole entry, not change references. How would the reference table be indicated in the schema? With a directive?
type Book {
  id: ID!
  title: String
  author: Author @reference(table: "BookAuthor", source: "bookId", target: "authorId")
}

type BookAuthor {
  id: ID!
  bookId: ID!
  authorId: ID!
}
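Resolving `Book.author` through such a reference table could then be sketched as building the keys of the referenced rows. All names mirror the directive's arguments and are hypothetical, not an actual API:

```typescript
// Hypothetical sketch of resolving a reference through a separate table:
// list the BookAuthor rows whose source column matches the book, then build
// the KV keys of the referenced Author rows to read next.

interface ReferenceDirective {
  table: string; // e.g. "BookAuthor"
  source: string; // e.g. "bookId"
  target: string; // e.g. "authorId"
}

// Given the reference rows already fetched from `ref.table`, build the keys
// of the rows they point at in the target table.
function referenceKeys(
  ref: ReferenceDirective,
  referenceRows: Record<string, string>[],
  targetTable: string,
): string[][] {
  return referenceRows.map((row) => [targetTable, row[ref.target]]);
}
```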
Support querying multiple entries in a single query? Should it take multiple ids instead of a single id? How to stay atomically consistent? Should search / filter without knowing the ids be supported?
An alternative to `id`. Would require maintaining the mapping and updating it on each mutation, which increases complexity. How to specify it in the schema? Using a directive? How to specify in the query which index (primary id, or secondary) is passed as the argument?
Updates an existing row or inserts a new one if it doesn't exist.
Implement atomic sum/min/max mutations from Deno KV?
This could be expanded to allow arbitrary mutations on existing data by letting the user provide a transformer function. Would need to check that it is a function, wrap it in a try/catch block, and call and await it in case it returns a promise. Would also need to verify the output is what's expected.
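The transformer checks could be sketched as follows (hypothetical helper, not the library's API):

```typescript
// Hypothetical sketch: safely run a user-provided transformer over the
// current value, awaiting in case it returns a promise and verifying the
// output is a plain object before writing it back.

async function applyTransformer(
  current: Record<string, unknown>,
  transformer: unknown,
): Promise<Record<string, unknown>> {
  if (typeof transformer !== "function") {
    throw new Error("transformer must be a function");
  }
  let next: unknown;
  try {
    next = await transformer(current);
  } catch (e) {
    throw new Error(`transformer threw: ${e}`);
  }
  if (typeof next !== "object" || next === null || Array.isArray(next)) {
    throw new Error("transformer must return an object");
  }
  return next as Record<string, unknown>;
}
```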
bookById: Book!
`extend schema`?
`atomic().check()` to guarantee that the previous value didn't change (the next value might have), or could the previous value's versionstamp be used to also guarantee that the next wasn't updated in the meantime? How to retry the resolver chain if it fails? Would need to backtrack to the root again.
`check` option in mutations that accepts a versionstamp
`concurrency` option in queries
`delete` should accept versionstamps
`list` for a large number of entries (>500) with multiple batches could be inconsistent...
Insert across tables in one atomically consistent set. Receives a deep object as input, walks from the deepest leaves to the root, and inserts each object into its corresponding table.
But how should it report the ids and versionstamps of the nested rows? As an array of Result with an additional table name entry?
Deep insert is complex to implement: it needs to pick apart the deep input object, generate the ids and match them up, just to stitch everything back together at query time later. It largely defeats the purpose of relational data storage, since new data can't be linked to existing data and must always be inserted together; one could almost store the whole data as JSON if not for the ability to query only part of it. It's also cumbersome for the user to define a duplicate type hierarchy for input types and output types.
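The leaves-to-root walk could be sketched like this. Everything here is hypothetical; `tableOf` maps reference columns to their tables and is assumed to come from the schema:

```typescript
// Hypothetical sketch of the leaves-to-root walk for a deep insert: nested
// objects are flattened first, then replaced in the parent by their id, so
// each level becomes a flat row for its table. Leaves end up first in the
// output, so the rows can be inserted in order within one transaction.

function flattenDeepInsert(
  table: string,
  input: Record<string, unknown>,
  tableOf: Record<string, string>, // reference column -> referenced table
  out: { table: string; row: Record<string, unknown> }[] = [],
): { table: string; row: Record<string, unknown> }[] {
  const row: Record<string, unknown> = {};
  for (const [col, value] of Object.entries(input)) {
    if (value !== null && typeof value === "object" && col in tableOf) {
      const nested = flattenDeepInsert(
        tableOf[col],
        value as Record<string, unknown>,
        tableOf,
        out,
      );
      // replace the nested object by the id of the row just flattened
      row[col] = nested[nested.length - 1].row["id"];
    } else {
      row[col] = value;
    }
  }
  out.push({ table, row });
  return out;
}
```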
Better to let the user build the relations and just somehow ensure consistency... Would need to give up generating ids for the user: instead let the user generate random UUIDs in advance, allow inserting multiple rows at the same time, and throw if any id already exists.
Could accept multiple rows in a mutation, and multiple mutations in a request. The mutations must all be committed in one atomic transaction. Order doesn't matter since there are no foreign key constraints in Deno KV, and atomicity makes either all succeed or all fail. Inserting valid references is up to the user?
mutation {
  createBooks(data: [{ id: "xxx", ..., author: "bbb" }, { id: "yyy", ..., author: "aaa" }]): Result @insert(table: "Book")
  createAuthors(data: [{ id: "aaa", ... }, { id: "bbb", ... }]): Result @insert(table: "Author")
}
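Before committing such a batch, duplicate ids within it could be rejected up front (hypothetical helper); non-existence in the store itself could then be enforced with atomic checks whose versionstamp is `null`:

```typescript
// Hypothetical sketch: reject duplicate ids within a batch of rows before
// committing. Ids already present in the store would be caught separately
// by atomic checks against a null versionstamp.

function assertUniqueIds(rows: { id: string }[]): void {
  const seen = new Set<string>();
  for (const { id } of rows) {
    if (seen.has(id)) {
      throw new Error(`duplicate id: ${id}`);
    }
    seen.add(id);
  }
}
```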
`isReference` should only check that the name ends in `Connection`, then validate that it is a non-null object type:
isType(type, type => type.name.endsWith("Connection"))
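A sketch of that order of checks, with a simplified stand-in type shape; the real implementation would use graphql-js's type predicates rather than this hypothetical `TypeLike`:

```typescript
// Hypothetical sketch: check the name first, then validate the shape, so a
// type named "...Connection" that isn't a non-null object type becomes a
// schema error instead of being silently skipped.

interface TypeLike {
  name: string;
  kind: "OBJECT" | "SCALAR" | "NON_NULL";
  ofType?: TypeLike;
}

function isReference(type: TypeLike): boolean {
  const named = type.kind === "NON_NULL" ? type.ofType : type;
  if (!named || !named.name.endsWith("Connection")) {
    return false;
  }
  if (type.kind !== "NON_NULL" || named.kind !== "OBJECT") {
    throw new Error(
      `type '${named.name}' ends in 'Connection' but is not a non-null object type`,
    );
  }
  return true;
}
```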