
Exograph



Exograph is a declarative way to create flexible, secure, and performant backends that provide GraphQL query and mutation APIs. Exograph lets you focus on your domain model and business logic, freeing you to pursue more creative work on your application.

Installation

Get started by following the Getting Started guide.

Documentation

For more information, see the Exograph documentation.

Examples

Check out the Exograph examples repository for sample projects to explore how Exograph fits into technologies such as Next.js, Apollo, Urql, Auth0, Clerk, Tailwind, and more.

Development

If you would like to build Exograph locally, please see DEVELOPMENT.md.


Issues

Centralize and improve name generation

Currently we generate names for queries, tables, foreign keys, etc. (see, for example, pk_query_name and collection_query_name). We need to centralize name generation so that the names are consistent.

Support annotation parameters

We need to support multiple parameters for use cases such as @access:

@access(query=AuthContext.role == "ROLE_ADMIN" || self.public, mutation=AuthContext.role == "ROLE_ADMIN")

Make annotations a part of the type system

Currently, we don't typecheck annotations (and there is no type corresponding to them).

One way we could implement this: populate the known annotation types the same way we populate primitives (perhaps putting annotations in a different namespace). Eventually, the "known annotations" should be replaced by each plugin introducing its own set of annotations.

Support array and object database columns

In addition to the standard primitives we support, we need to support array and object database columns. These would be treated as "primitive" from certain perspectives (specifically, no relation is implied just because we find an array-type field), but not from others (it should be possible to query based on a subfield of a JSON object).

Supporting object columns will require:

  • A way to declare a type
  • Convert values received in queries and variables to the destination JSON type
  • Query based on subfields
  • Support for opaque types (in case users want to store arbitrary data interpreted by other tools)
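
A hedged sketch of what such a model declaration might look like (the Json type name and the exact syntax are assumptions, not a settled design):

model Venue {
   id: Int @pk @autoincrement
   name: String
   tags: [String]  // array column (e.g., TEXT[] in Postgres)
   metadata: Json  // opaque object column (e.g., JSONB in Postgres)
}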

Support mutations to set the ids for the many side of the relation

Consider a model with Team -> Player*. We need the create and update mutations to support setting the players (helpful when updating a team from the UI):

mutation {
  createTeam(data: {name: "Foo Team", players: [{id: 1}, {id: 2}]}) {
    id
    name
    players {
      id
      name
    }
  }
}

Currently, we allow co-creating nested objects; in the above example, we enable the creation of new players while creating a new team. This issue, then, is about using existing players when creating a new team.

Fix clippy warnings related to borrowed_box

We have the following warnings. The fix in the shown code is easy (basically, do what clippy says), but it makes the tests (specifically, the uses of the various assert_binding macros) fail to compile.

warning: you seem to be trying to use `&Box<T>`. Consider using just `&T`
  --> payas-sql/src/sql/mod.rs:66:21
   |
66 |     pub params: Vec<&'a Box<dyn SQLParam>>,
   |                     ^^^^^^^^^^^^^^^^^^^^^ help: try: `&'a dyn SQLParam`
   |
   = note: `#[warn(clippy::borrowed_box)]` on by default
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#borrowed_box

warning: you seem to be trying to use `&Box<T>`. Consider using just `&T`
  --> payas-sql/src/sql/mod.rs:70:38
   |
70 |     fn new(stmt: String, params: Vec<&'a Box<dyn SQLParam>>) -> Self {
   |                                      ^^^^^^^^^^^^^^^^^^^^^ help: try: `&'a dyn SQLParam`
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#borrowed_box

warning: you seem to be trying to use `&Box<T>`. Consider using just `&T`
  --> payas-sql/src/sql/mod.rs:74:37
   |
74 |     fn tupled(self) -> (String, Vec<&'a Box<dyn SQLParam>>) {
   |                                     ^^^^^^^^^^^^^^^^^^^^^ help: try: `&'a dyn SQLParam`
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#borrowed_box

warning: 3 warnings emitted
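
A minimal sketch of the clippy-suggested change (the struct name is hypothetical; the field is taken from the warning output):

// Before (triggers clippy::borrowed_box):
pub struct Binding<'a> {
    pub stmt: String,
    pub params: Vec<&'a Box<dyn SQLParam>>,
}

// After: borrow the trait object directly.
pub struct Binding<'a> {
    pub stmt: String,
    pub params: Vec<&'a dyn SQLParam>,
}

Call sites that hold a Box<dyn SQLParam> then pass b.as_ref() (or &**b) instead of &b, which is what breaks the assert_binding macros.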

Improve error handling in payas-test

Once we establish a general error-handling pattern in the rest of the codebase, we should take another look at how we handle errors in payas-test.
Currently, if something goes wrong while initializing, it's possible to panic and leave things like temporary databases without cleaning them up.

Handle errors in payas-test runner instead of throwing them

Currently, we propagate errors that happen during the init or setup sections (with the ? operator). If we propagate these errors, we skip the cleanup that needs to occur (database deletion, killing the server process, etc.). We should handle them the way we handle the test Result types.
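
A hedged sketch of the pattern (function and type names are hypothetical): capture the result instead of propagating it with ?, so cleanup always runs.

fn run_test(ctx: &TestContext) -> Result<(), TestError> {
    // Capture the result rather than using `?`, so the cleanup below always runs.
    let result = init_and_setup(ctx).and_then(|()| execute_tests(ctx));

    // Cleanup runs regardless of success or failure.
    drop_temporary_database(ctx);
    kill_server_process(ctx);

    result
}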

Make all fields in the update mutation optional

Currently, we use the same input type for both creation and update. As a result, we force the user to supply all fields in an update. Instead, we should make all fields of the update input type optional so users can specify just the subset of fields to be updated.
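
A hedged GraphQL sketch of the intent (type names are illustrative):

input VenueCreationInput {
  name: String!
}

input VenueUpdateInput {
  name: String  # optional: omit to leave the field unchanged
}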

Implement LSP

Since our language is growing, we should implement the Language Server Protocol so that we can offer code completion, jump-to-definition, error reporting, linting, etc.

We may also offer connecting it to a live database to enable:

  • helpful suggestions (auto-completing field names/types based on database column info)
  • live migration (updating the development database to match the current model)
  • sample query execution

Introduce the "yolo" mode

The command would be (see #17, which this depends on):

> clay yolo sports.payas

This will create a database (through Docker or an existing Postgres server) according to the provided model and start a server on a free port. It will then act like "clay serve", except it will also apply migrations automatically (even destructive changes).

This will simplify demos and help people try out payas quickly.

Create docker image for payas

Users will then be able to run payas with "docker run ...", which allows running in any cloud environment that needs/accepts a docker image (Google Cloud Run, DigitalOcean, Fly.io).
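
A minimal Dockerfile sketch, assuming a prebuilt clay binary and a model file copied into the image (the binary name, paths, and entry point are assumptions):

FROM debian:bullseye-slim
COPY clay /usr/local/bin/clay
COPY index.payas /app/index.payas
WORKDIR /app
CMD ["clay", "serve", "index.payas"]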

Support authentication through payas-test module

  • Introduce a mechanism to specify the requesting user. Perhaps something like:
operation: |
    query($id:Int!) {
        venue(id:$id) {
            name
        }
    } 
variable: |
    {
        "id": 1
    }
user: |
    {
       "sub": 20,
       "role": "ROLE_ADMIN"
    }
  • Use a random string as the PAYAS_JWT_SECRET env variable and use it both to start the server and to compute the JWT token after adding the additional data needed to the specified "user". For example,

JWT header:

{
  "alg": "HS256",
  "typ": "JWT"
}

Payload:

{
  "sub":<user.sub>,
  "iat": <now in seconds since epoch>,
  "exp": <now+one hour in seconds since epoch>,
  "role": <user.role>
}
  • Make requests with the header Authorization: Bearer <the computed JWT token>
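
A hedged sketch of computing the token, using the jsonwebtoken crate (whether payas-test would use this crate is an assumption):

use jsonwebtoken::{encode, EncodingKey, Header};
use serde::Serialize;
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Serialize)]
struct Claims {
    sub: i64,
    iat: u64,
    exp: u64,
    role: String,
}

fn compute_token(sub: i64, role: &str, secret: &str) -> jsonwebtoken::errors::Result<String> {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let claims = Claims {
        sub,
        iat: now,
        exp: now + 3600, // one hour, matching the payload above
        role: role.to_string(),
    };
    // Header::default() uses HS256, matching the JWT header above.
    encode(&Header::default(), &claims, &EncodingKey::from_secret(secret.as_bytes()))
}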

Support database migration

We need to support database migration based on:

  • Model changes
  • Model deviation from database

The first form could be:

> payas migrate [model.payas@]<old-git-hash/tag> model.payas[@main]

Here we will compute the schema for each of the two versions of the model and create a migration script.

The second form could be:

> PAYAS_DATABASE_URL=postgres://.. payas migrate model.payas

Here we will load the schema from the database, compute the schema for the current model, and create a migration script.

Important: in no case will we drop any table or column. We should, however, emit deletion statements as commented-out code so that users can examine them and uncomment or adjust (for example, rename instead of delete).
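
A hedged sketch of what a generated migration script might look like (table and column names are illustrative):

ALTER TABLE venues ADD COLUMN capacity INT;

-- Destructive changes are emitted commented out for review:
-- ALTER TABLE venues DROP COLUMN street_address;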

Support remote REST and GraphQL APIs

Not all data can come from just the db. Therefore, we need to support integration with REST and other GraphQL services. This requires:

  • Support for declaring remote types (see #6 for a similar requirement)
  • Support for declaring remote services, including their attributes such as read-onlyness
  • Executing remote services with appropriate authentication information

Note that if we implement this right, supporting multiple databases (through Payas) will be quite easy.

Accept column attributes in any order

There is nothing inherently ordered about column attributes, so we should accept both @pk @column("foo") and @column("foo") @pk. We do use the permutation combinator, but it doesn't seem to work as expected.

Support nesting type modifiers

Currently, we cannot represent, for example, a ModelField of the List[Int]! (non-optional list) type.

As a result, we cannot support a non-optional list in the update mutation (see compute_input_fields).

We need to rework how we represent types so that we can allow for arbitrary nesting.
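
A hedged sketch of one possible reworked representation (names are hypothetical): a recursive type whose modifiers wrap other types, so any nesting is expressible.

enum FieldType {
    Plain(String),            // e.g., Int
    Optional(Box<FieldType>), // T?
    List(Box<FieldType>),     // [T]
}

// List[Int]! (a non-optional list of ints) is then simply a List that is not
// wrapped in Optional:
// FieldType::List(Box::new(FieldType::Plain("Int".to_string())))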

Create a testing infrastructure

We need a simple way to write end-to-end tests. Core requirements:

  • Support reuse of any part of code (for example, we should be able to write multiple tests per setup)
  • Support parallel execution (we expect the tests to grow into the thousands, so running them in parallel is critical)

A possible format (each file reference may also be inlined):

setup:
    schema.sql (optional; to emulate a pre-existing database)
    model.payas (if the schema is not provided, we will use the schema suggested by the model)

init:
    commands.sql (a way to directly update the database; useful to simulate a shared database updated by other tools, as well as to avoid possible bugs in our graphql mutations)
    mutations.gql (a graphql way of updating database)

tests:
   test:
      - action.gql
      - expected.json

Each gql file will emulate issuing a graphql command:

operation: (inlined code or a reference to a file with the graphql queries/mutations)
  query ($id: Int!) {
    pet(id: $id) {
      name
    }
  }
variable: (inlined code or a reference to a file with the json for variables)
   {
      "id": 1
   }
user:
    JWT token or info to create a JWT token (which we can do only in the "test" mode)

When we run each test, we need to:

  • Create a fresh database (to allow us to run tests in parallel)
  • Set up per the setup section
  • Initialize per the init section
  • Execute queries in parallel, but mutations sequentially
  • Report failures

Make schema quote the table and column names

In Postgres, if you CREATE TABLE myTable ..., it will create the table with the name mytable. So we need to quote table names: CREATE TABLE "myTable" .... The same goes for column names.
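
To illustrate the Postgres behavior:

CREATE TABLE myTable (id SERIAL PRIMARY KEY);   -- creates a table named "mytable"
CREATE TABLE "myTable" (id SERIAL PRIMARY KEY); -- preserves the case: "myTable"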

Improve support for primitive types

Currently we support only the Int and String types (and the operators we support for String are not quite valid).

  • Expand support for all common database types
  • #90
  • Emit appropriate type when creating the schema from the cli tool
  • Support proper operators for each type (for example, startsWith, endsWith, and like for String; (implicitly) equals for boolean)
  • #101
  • Support parsing for each of these changes

Implement a zero-copy mechanism to push data received from the database to http

See executor.rs. We create a new string based on the one received from the database in order to wrap it in {"data": and }. We should be able to serialize {"data": directly, then the response from the db, and then finally a }.

Note that although we could have the db itself include {"data": and }, that approach doesn't work when a single payload contains multiple operations.
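
A hedged sketch of the idea (function and parameter names are hypothetical): stream the envelope and the database's bytes straight into the output instead of allocating a combined string.

use std::io::Write;

fn write_response<W: Write>(out: &mut W, db_payload: &[u8]) -> std::io::Result<()> {
    out.write_all(b"{\"data\":")?;
    out.write_all(db_payload)?; // bytes straight from the database, no intermediate String
    out.write_all(b"}")
}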

Reject multiple same-named or same-aliased queries

Currently, we accept and incorrectly process queries such as:

{
  venue(id: 1) {
    id
    name
  }
  venue(id: 2) {
    id
    name
  }
}

and

{
  venue(id: 1) {
    id
    name
  }
  venue: venue(id: 2) {
    id
    name
  }
}

In each case, we incorrectly return invalid JSON:

{"data": {"venue":{"id" : 1, "name" : "venue1"}, "venue":{"id" : 2, "name" : "venue2"}}}

Populate request context

Populate the context based on request data (cookies, headers, the requesting IP address, etc.). This can then be coupled with the rest of the system as a source of data, to implement access control, and so on.

Format:

context <name> {
   <field_name>: <field_type> [<annotations>],
   ...
}

Specific use cases:

  1. Populate the auth context based solely on the JWT claims (requires all the info needed for authorization to be available as part of the JWT token).
context AuthUser {
   id: Int @jwt("sub") // Populate the id field from the "sub" JWT claim (apply any conversions to sub to make it an Int)
   roles: [String] @jwt // Populate the roles field from the "roles" JWT claim
}
  2. Populate the auth context based on an opaque token (say, an encoded user id).
context AuthUser {
   user: User { id: @cookie("session_id") } // User such that its id == session_id cookie value
}

model User {
   id: String @pk
   email :String,
   roles: [String]
   ...
}

We will need some custom logic to translate between the encoded session_id and user.id.

  3. Populate the auth context partially based on the JWT token (with the JWT claims used as a cache, except for some primary or alternate key that allows extracting the remaining info from the database).
context AuthUser {
   user: User { id: @jwt("sub"), roles: @jwt } // "roles: @jwt" ==> "roles: @jwt("roles")"
}

model User {
   id: String @pk
   email :String,
   roles: [String]
   ...
}

Now if some auth rule relies only on the user id and/or user role, no database operation is needed to extract other user info such as the email. But if a rule does rely on it (for example, @access(query=AuthUser.email.endsWith("payalabs.com"))), then we will query the database (say, through a join with the main query) to extract the email and enforce the rule.

  4. Populate tracking info.
context Tracking {
   id: String @cookie("tracking_id")
   ip: String @request("peer_addr") // to be fully specified how to access such request attributes
}
  5. Support an anonymous shopping cart.
    Let anonymous users add to the shopping cart and then associate that cart with the authenticated user upon login. We may need to plug in some custom logic to parse the cookie and translate it into model fields.
context AnonShoppingCart {
   cart: ShoppingCart @cookie("cart")
}

context ShoppingCart {
   items: ...
}

Needed for #16

Update mutation fails when variable includes an object

The following mutation fails:

mutation UpdateVenue($id: Int!, $data: VenueInput!) {
  updateVenue(id: $id, data: $data) {
    id
    name
  }
}
with the variables:

{
  "id": 1,
  "data": {
    "name": "v1"
  }
}

But an equivalent mutation, without data being a variable, succeeds:

mutation UpdateVenue($id: Int!) {
  updateVenue(id: $id, data: {name: "Jain Temple1"}) {
    id
    name
  }
}

Simplify and generalize the cli interface

Currently, the cli does only one thing: generate the schema. Soon we will need it to do more, so we need to generalize the interface to support subcommands, like:

> clay schema create [sports.clay]
> clay serve [sports.payas] --no-watch
> clay migrate [sports.payas] postgres://...
> clay migrate [sports.payas] git@abcd
> clay yolo
> clay verify [sports.payas] postgres://... <-- verify that the schema is compatible with the model
> clay model import postgres://... <-- create index.clay based on the database schema
> clay build  --no-introspection --persisted-queries concerts.cache.gql --lock-persisted-queries
> clay test integration-tests
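
A hedged sketch of the subcommand structure using the clap crate (whether the cli uses clap, and the exact flags, are assumptions):

use clap::{Arg, ArgAction, Command};

fn cli() -> Command {
    Command::new("clay")
        .subcommand(Command::new("schema").subcommand(Command::new("create").arg(Arg::new("model"))))
        .subcommand(
            Command::new("serve")
                .arg(Arg::new("model"))
                .arg(Arg::new("no-watch").long("no-watch").action(ArgAction::SetTrue)),
        )
        .subcommand(Command::new("yolo"))
}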

Defer reference constraint to after creating tables

Depending on how we order table creation, we may end up referencing a table that is yet to be created. For example:

CREATE TABLE concerts (
        id SERIAL PRIMARY KEY,
        title TEXT,
        venueid INT REFERENCES venues
);

CREATE TABLE venues (
        id SERIAL PRIMARY KEY,
        name TEXT
);

Here concerts REFERENCES venues, which is yet to be created.

While we could reorder table creation for this specific case, that won't help with circular references. Instead, we should not add REFERENCES ... in CREATE TABLE and should add the constraints after all tables are created.
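
A sketch of the proposed output: create the tables without REFERENCES, then add the constraints once all tables exist.

CREATE TABLE concerts (
        id SERIAL PRIMARY KEY,
        title TEXT,
        venueid INT
);

CREATE TABLE venues (
        id SERIAL PRIMARY KEY,
        name TEXT
);

-- The constraint name is illustrative.
ALTER TABLE concerts
        ADD CONSTRAINT concerts_venueid_fkey FOREIGN KEY (venueid) REFERENCES venues;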

Support subscription

This will need to be broken into multiple issues.

One way to implement subscriptions (high-level summary):

  • Use pg_notify.
  • Upon receiving a subscription request:
    • set a trigger on the relevant tables for that subscription to execute a pg_notify
    • set up a listener on the Claytip side to write to the client's websocket
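
A hedged sketch of the pg_notify piece (channel, table, and function names are illustrative):

CREATE OR REPLACE FUNCTION notify_concerts_change() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('concerts_changes', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER concerts_change_trigger
AFTER INSERT OR UPDATE ON concerts
FOR EACH ROW EXECUTE FUNCTION notify_concerts_change();

The Claytip side would then LISTEN on the same channel and forward payloads to the client's websocket.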

Implement authorization

  • Implement an AST model
  • Parse authorization rules
  • Use the rules to check the role and affect the generated SQL

Run multiple operations in parallel

Consider:

{
  concerts {
    id
    venue {
      id
    }
  }
  venue(id: 1) {
    id
    name
  }
}

We should run the concerts and venue queries in parallel.

However, we need to be aware of a caveat: what if an interceptor for a query performs a mutation?
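
A hedged sketch of running independent operations concurrently (the types and execute_one are stand-ins for the engine's real ones):

struct Query;
struct QueryResult;
struct ExecutionError;

async fn execute_one(_q: Query) -> Result<QueryResult, ExecutionError> {
    unimplemented!() // the engine's actual resolution logic
}

async fn execute_all(queries: Vec<Query>) -> Result<Vec<QueryResult>, ExecutionError> {
    // Resolve independent top-level queries concurrently; mutations would still
    // run sequentially, and mutating interceptors need special handling.
    futures::future::try_join_all(queries.into_iter().map(execute_one)).await
}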

Generalize column attributes and constraints

Currently, we have fields such as is_pk and is_autoincrement, which are overly specific forms of constraints and attributes. We need to generalize them so we can support a uniqueness constraint, a default-value attribute, etc.
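
A hedged sketch of the generalization (names are hypothetical): replace the boolean fields with a list of constraints/attributes.

enum ColumnAttribute {
    PrimaryKey,
    AutoIncrement,
    Unique,
    Default(String), // default-value expression
}

struct PhysicalColumn {
    name: String,
    attributes: Vec<ColumnAttribute>, // replaces is_pk, is_autoincrement, ...
}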

Replace the argument to @jwt with an expression instead of a string

Currently, AuthContext must be expressed as follows:

context AuthContext {
  id: Int @jwt("sub")
  role: String @jwt("role")
}

It would be better if we could express "sub" and "role" as expressions. Then we could express nested fields properly:

context AuthContext {
  id: Int @jwt(sub.id)
  role: String @jwt(role)
}

This might require introducing another type to describe the JWT payload.

Support field-level auth control

A specific use case:

@access(query = AuthContext.role == "ADMIN" || AuthContext.id == self.id, mutation = AuthContext.role == "ADMIN" || AuthContext.id == self.id)
model User {
  id: Int @pk @autoincrement
  name: String
  membership: Membership?
}

@access(query = AuthContext.role == "ADMIN" || AuthContext.id == self.user.id, mutation = AuthContext.role == "ADMIN" || AuthContext.id == self.user.id)
model Membership {
  id: Int @pk @autoincrement
  kind: String
  user: User
  spouseInfo: String // In a real app, more detailed
}

Here we would like users to be able to edit their membership only to the extent of updating spouseInfo. In other words, users should not be able to assign their membership to another user or change the kind (those changes must be made by an admin).

Another example:

@access(self.published || AuthContext.role == "admin")
type Concert {
   @pk id ...
   notes: String @auth(AuthContext.role == "admin")
}

Here, notes should be accessible only to "admin"s, regardless of whether the concert is published.
