
urql-exchange-graphcache's Issues

Ensure idempotent and commutative updates

Now, this is a tough one, but we may want to think about it to make offline support reliable. A lot of this is well summarised in this talk: https://youtu.be/DEcwa68f-jY

We already have a concept of commutative updates. When we apply optimistic updates, they’re only applied to separate layers in a consistent order. This order can never break down. Once an optimistic layer is created and updated, this update is commutative and idempotent.

That means, the order of these changes doesn’t matter after an optimistic update is applied and it can be applied repeatedly without changes.

This doesn't quite apply to all of our other updates. Non-optimistic updates are made to our source of truth, the main store.

When they’re applied again in a different order and have overlapping data, you may get a different result.

This will eventually be an issue on offline connections, when updates and queries are delayed and change order. It’s also an issue when we reapply queries and updates as we come back online.

Ideally we'd like to apply queries in a set order (their execution order), or out of order with some ordering guarantee. The same goes for updates.

Two ordering cases that we could address are:

  • Keep a clock counter on the initial operations that will lead to network requests (non-optimistic, non-local)
  • When results come back, we should never be able to override any field with data that has a lower clock than the last writer to that field

These two guarantees would be enough to ensure consistency as far as I can tell; this is called a "Last Write Wins" (LWW) map.
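To illustrate the idea, here is a minimal last-write-wins sketch (hypothetical, not the store's actual data structure): every write carries the clock of the operation that produced it, and a field is only overwritten when the incoming clock is not lower than the last writer's.

const fields = new Map(); // fieldKey -> { value, clock }

const writeField = (fieldKey, value, clock) => {
  const existing = fields.get(fieldKey);
  if (existing && existing.clock > clock) return; // stale write, ignore it
  fields.set(fieldKey, { value, clock });
};

// Applying the same writes repeatedly or in a different order yields the same result:
writeField('Post:1.title', 'old title', 1);
writeField('Post:1.title', 'new title', 2);
writeField('Post:1.title', 'old title', 1); // ignored, clock 1 < 2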

I’ll try to keep this thread up-to-date with some ideas as we go ✌️

Support simplePagination cases as automatic resolver with cache misses supported

I just discovered urql and it's pure gold. This library was really needed. Apollo is great but monopolies always have their downsides.

I can't wait to switch to urql in my main big professional project. I have carefully read the docs to see how I would convert the Apollo features I use to urql. Everything is pretty much there (smart graph cache, updating the cache on mutation, custom cache keys, etc.) except maybe two things:

  • Something that would do the same job as fetchMore,
  • Support for the @connection directive.

For fetchMore, I can't handle my own custom state as I need the cache as my single source of truth, and I can't use the graphcache because from what I read there's currently no way to customize the merging of query results.

In my project, I have around 30 list queries with @connection directives. They don't follow the Relay spec and look more like this:

query Comments($projectId: ID!, $parent: ID, $search: String, $sort: String, $sortOrder: Int, $limit: Int!, $page: Int!) {
  comments(projectId: $projectId, parent: $parent, search: $search, sort: $sort, sortOrder: $sortOrder, limit: $limit, page: $page)
    @connection(key: "comments", filter: ["projectId", "parent"]) {
    total
    items {
      id
      ...
    }
  }
}

To give you an idea, here is what my current fetchMore does:

fetchMoreImpl({
  variables: {
    ...variables,
    page: Math.ceil(comments.items.length / variables.limit) + 1
  },
  updateQuery(prev, { fetchMoreResult }) {
    if(!fetchMoreResult) return prev
    return {
      comments: {
        ...prev.comments,
        items: [
          ...prev.comments.items,
          ...fetchMoreResult.comments.items
        ]
      }
    }
  }
})

It would be really nice to have something for this! I would switch to urql in the blink of an eye.
Thank you for your amazing work.
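As a rough sketch of the kind of configuration being asked for, assuming the built-in simplePagination helper from @urql/exchange-graphcache/extras could be pointed at page-based arguments (it targets offset/limit-style arguments, so it may not cover page numbers or the nested items field out of the box):

import { cacheExchange } from '@urql/exchange-graphcache';
import { simplePagination } from '@urql/exchange-graphcache/extras';

cacheExchange({
  resolvers: {
    Query: {
      // assumption: offsetArgument/limitArgument are repurposed for the
      // page/limit arguments of the comments field shown above
      comments: simplePagination({
        offsetArgument: 'page',
        limitArgument: 'limit',
      }),
    },
  },
});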

Crash on subsequent run of multiple 'concurrent' queries

I have a page where two components run a query each, say post(postID) and comments(postID).
On a fresh page load it works fine, but on navigation with React Router (when there is some previous state) the page crashes (on the second query) with:

client.ts:205 Uncaught (in promise) TypeError: Cannot read property 'key' of undefined
    at Client.reexecuteOperation (client.ts:205)
    at exchange.ts:134
    at Set.forEach (<anonymous>)
    at processDependencies (exchange.ts:130)
    at updateCacheWithResult (exchange.ts:218)
    at wonka.es.js:920
    at wonka.es.js:377
    at wonka.es.js:377
    at wonka.es.js:1296
    at wonka.es.js:665

At first I thought it had something to do with the list<->item relationship (a post list and an individual post), but then I checked that it also happens on a direct post->post transition.

ID type mismatch in `updates` if `id` is a number

Condensed example:

type Post {
  id: ID  # defined and used as a number (int)
}

const updatePostQuery = (data: Data, _, cache: Store) => {
  const post = data.addPost as Post; // cast to be able to leverage real types

  // postID is not parsed to an int in the response yet (it is a string),
  // so we need to forcibly parse it
  const postID = parseInt((post.id as unknown) as string, 10);

  // while postID here is expected to be an int, querying with a string does not return anything
  cache.updateQuery({ query: getPostQuery, variables: { postID } }, data => {
    // updater body omitted in the original issue
    return data;
  });
};

To be fair, I did not test what happens if I define id: number.

Returning `null` from `KeyGenerator` messes up the data in array query

Synthetic example schema

type Child {
  value: String!
}

type Parent {
  id: ID!
  child: Child!
}

type Query {
  getParents: [Parent!]
}

My intention is for all Children to always be embedded in the Parents. By default it works that way (at least it looks like that), but I get warnings: ... Entities without keys will be embedded directly on the parent entity. If this is intentional, create a `keys` config for `PostTextContent` that always returns null.
If I do what the warning suggests and add this to the config

cacheExchange({
  keys: {
    Child: () => null,
  }
})

then the query data gets overwritten.

Real network request data

{
    "getParents": [
        {
            "id": 1,
            "child": {
                "value": "AAA"
            }
        },
        {
            "id": 2,
            "child": {
                "value": "BBB"
            }
        },
        {
            "id": 3,
            "child": {
                "value": "CCC"
            }
        }
    ]
}

while the data returned from useQuery to the component becomes

{
    "getParents": [
        {
            "id": 1,
            "child": {
                "value": "CCC"
            }
        },
        {
            "id": 2,
            "child": {
                "value": "CCC"
            }
        },
        {
            "id": 3,
            "child": {
                "value": "CCC"
            }
        }
    ]
}

Add generic types for updates

I have the following code (in updates.Mutation) to update a list count after creating an item:

createItem: (result, args, cache, info) => {
  cache.updateQuery({ query: ListDocument, variables: { parentId: args.input.parent } }, data => {
    data.parent.items.totalCount = data.parent.items.totalCount + 1;
    return data;
  });
}

There's currently no validation / hints on variables or data, which should be possible by passing the respective types as generics to updateQuery (similar to how useQuery/useMutation hooks work).

I also get a typescript error on data.parent.items:

Property items does not exist on type DataField


It would also be great to have the possibility to type the whole function createItem, so we could have validation / hints on args and result as well.

Right now I get another typescript error on args.input.parent:

Property parent does not exist on type string | number | boolean | Variables | ScalarObject | Scalar[] | Variables[]
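A hypothetical sketch of what the requested typings could look like (the generic parameters, the generic UpdateResolver, and type names such as ListQuery or CreateItemMutation are assumptions for illustration, not the current API):

// Hypothetical generics on updateQuery: Data and Variables types supplied by the caller
cache.updateQuery<ListQuery, ListQueryVariables>(
  { query: ListDocument, variables: { parentId: args.input.parent } },
  data => {
    if (data) data.parent.items.totalCount += 1; // data is typed as ListQuery | null
    return data;
  }
);

// Hypothetical generic UpdateResolver, so result and args are typed as well
const createItem: UpdateResolver<CreateItemMutation, CreateItemMutationVariables> = (
  result,
  args,
  cache,
  info
) => {
  // args.input.parent and result.createItem would now be statically typed
};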

Cache Control and TTL on entries

We'd like to verify whether it makes sense to store a TTL per entity or Query field.

As part of this it should be possible to evict entities when they reach a maximum age.

Does this make sense? Is there a valid use-case for this? Can this be achieved in a more elegant manner?

Pass custom variables to `update` and `optimistic`

I am dealing with querying, from the store, a query whose variables differ from (and can vary independently of) the mutation being performed. I am migrating from Apollo, where optimistic and update can be passed as a second parameter when executing a mutation, so you can pass along different variables that are available in the same context. That doesn't seem to be an option in urql, so we have to define all the updates and optimistic handlers when setting up the client, and those only receive 3 parameters: the current variables of the mutation, the cache, and the info object. Let's look at an example.

This is the query:

import gql from "graphql-tag";

export const CalendarQuery = gql`
  query calendar($sessionId: String, $from: Long!, $to: Long!, $timezone: String!, $focusedGroup: String) {
    calendar(sessionId: $sessionId, from: $from, to: $to, timezone: $timezone, focusedGroup: $focusedGroup) {
      practices {
        id
        durationMinutes
        label
        location
        teamName
        rgb
        start
        attendance {
          athleteName
          athleteFirstName
          athleteLastName
          _id: athleteGuid
          attendance
        }
      }
      events {
        id
        durationMinutes
        label
        location
        teamName
        rgb
        start
        attendance {
          _id: athleteGuid
          attendance
        }
      }
      games {
        id
        durationMinutes
        label
        location
        teamName
        rgb
        start
        attendance {
          _id: athleteGuid
          attendance
        }
      }
      workouts {
        id
        durationMinutes
        label
        location
        teamName
        rgb
        start
      }
    }
  }
`;

Notice that to and from can vary, as the user wants to fetch sessions in a certain timespan.

Now, I want to update or delete a session:

const UPDATE_SESSION = gql`
  mutation($sessionId: ID!, $timezone: String!, $language: String, $input: SessionInput!) {
    updateSession(sessionId: $sessionId, timezone: $timezone, language: $language, input: $input) {
      ... on Practice {
        id
        start
        durationMinutes
      }
      ... on Event {
        id
        start
        durationMinutes
      }
      ... on Game {
        id
        start
        durationMinutes
      }
      ... on Workout {
        id
        start
        durationMinutes
      }
    }
  }
`;

I execute my mutation:

await updateSession({
  sessionId: user.sessionId,
  timezone: "GMT",
  language: "en",
  input: { id: session.id, start: date.valueOf() }
});

Now I am going to set up my optimistic response callback:

updateSession: ({ input, ...variables }, cache, info) => {
  cache.updateQuery({ query: CalendarQuery, variables }, (data: any) => {
    // update logic elided; the problem is that `variables` here are the
    // mutation's variables, not the calendar query's variables
    return data;
  });
}

First issue: I am not able to query the calendar from the store, as I don't have the correct variables.

Second issue: I couldn't find any simple way to pass variables to that callback. I tried adding some extra variables when executing the mutation:

await updateSession({
  sessionId: user.sessionId,
  timezone: "GMT",
  language: "en",
  calendar,
  __typename: session.__typename,
  input: { id: session.id, start: date.valueOf() }
});

updateSession: ({ input, calendar, __typename, ...variables }, cache, info) => {
  return { ...input, __typename };
}

(see calendar as a custom variable which is not used in the mutation itself)

It also seems urql cleans the variables object and only passes along the variables that are actually used in the mutation/query.

So, I am just looking for a solution, workaround, or method where I can pass variables to the update/optimistic callbacks, so it can work for queries that have dynamic variables (as above) and not just fixed ones that can be hardcoded.

Thanks in advance.
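One possible workaround sketch, assuming the cache's inspectFields helper can be used in an updates handler to discover every cached calendar field together with the arguments it was originally queried with, so the updater doesn't need to know the query's variables up front:

updateSession: (result, args, cache, info) => {
  cache
    .inspectFields('Query')
    .filter(field => field.fieldName === 'calendar')
    .forEach(field => {
      cache.updateQuery(
        { query: CalendarQuery, variables: field.arguments },
        data => {
          // merge result.updateSession into the matching session here
          return data;
        }
      );
    });
}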

Package name prefixing

For consistency's sake, let's exclude the exchange prefix (devtools is just @urql/devtools).

Most libraries are going to be exchanges anyway, so the prefix doesn't add much value.

cache-and-network clears previous data when refetching

Using cache-and-network, after the initial fetch I get data with stale: true, followed by undefined data, and then data again with stale: false.

Wasn't cache-and-network supposed to never return undefined for data unless it's an initial fetch?

Should updateQuery always return partial results?

I'm still learning the details of Graphcache, so forgive me if some of my assumptions are incorrect.

Currently, I believe the updateQuery function uses the same cache result policy as normal queries, which from the docs is: When the cache has enough information so that only optional fields in a given query are missing, then it delivers a partial result from the cached data. My question is: when we are updating the cache after a mutation, would it make sense to always return partial results, even if some of the missing fields are required? Here's my use case:

A mutation causes a side effect that we need to manually update in the cache. Let's say something like this:

const QueryOfAllDataEffected = gql`
  query _($listId: ID!) {
    __typename
    list(id: $listId) {
      __typename
      id
      fieldEffectedByNewTodo {
        __typename
        id
        ...
      }
      todos {
        __typename
        id
      }
    }
  }
`
createClient({
  ...,
  updates: {
    Mutation: {
      addTodo: (result, args, cache) => {
        cache.updateQuery(
          {
            query: QueryOfAllDataEffected,
            variables: { listId: args.listId }
          },
          data => {
            ...
          }
        )
      }
    }
  }
})

After the mutation is done, two things need to be updated: 1) add the new todo item, and 2) update the "fieldEffectedByNewTodo". However, in the actual application there might be several different queries referencing these same fields (this is the benefit of using Graphcache), but we have no guarantee of which ones have been accessed. So even though both are required, it's possible that we only have one of them in the cache so far. What I'd like to do is check whether either "fieldEffectedByNewTodo" or "todos" is already in the cache, and manually update whichever ones are. If either is not already in the cache, just leave it as null so that it will be fetched whenever a query tries to access it. The problem is that since both todos and fieldEffectedByNewTodo are required, data will always be null unless both have already been queried and are in the cache.

Sorry I know that's probably not the best example, but hopefully I got my point across. Maybe there's a better way to be updating the cache that I'm just missing? Thanks!
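A hedged sketch of how the "check what's already cached, update only that" idea might be expressed with cache.resolve, assuming the list entity is keyed by its id (entity and field names follow the example above):

addTodo: (result, args, cache) => {
  const list = { __typename: 'List', id: args.listId };

  // cache.resolve returns nothing when a field has never been written,
  // so it can be used to decide which fields are safe to update manually
  const cachedTodos = cache.resolve(list, 'todos');
  if (cachedTodos) {
    // e.g. link the new todo into the existing list of todos
  }

  const cachedField = cache.resolve(list, 'fieldEffectedByNewTodo');
  if (cachedField) {
    // update the side-effected field only if it is already cached
  }
}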

Offline and persistent cache support

Offline

We all think about this in the modern PWA era, but there's a lot to it. We'll have to keep track of which requests the user needs to send when the connection is restored; after these requests are sent there will most likely be several optimistic entries to clear.

Operations

As for knowing which operations to cache, it should be sufficient to only cache mutation operations. These will then be kept in a map<key, operation> and persisted to IndexedDB/localStorage when the application is killed while they haven't been dispatched yet.

The hard part about this is that we would have to restore the optimisticKeys in the exchange; this makes me think about moving these to our Store instance instead, since the serialisation of entities, links and optimisticKeys could then happen from one place. This brings the additional advantage that it can be done with one restore method.

One concern would be the read/write speed of killing/rebooting the cache in this state. The HAMT structure is quite hard to serialise, taking into account that it will contain optimistic values mixed with normal ones.

Connection checking

This should be easily doable by means of navigator.onLine; we could buffer all requests until we come online and then send them in the correct order, one by one, to avoid concurrency problems. The difficult part here would be that we buffer up until all operations are dispatched; this means that if the user performs another action while we are emptying the queue, it could take a while to get a response (though this is mitigated when optimistic responses are used).

Ideally, when we see we are offline, we filter all queries and just keep them incomplete. When we see we are going offline, all subscriptions should receive an active teardown.

Exchange

When reasoning about this, my thoughts always wander to a separate exchange managing the operation buffering, with the restoring/serialising incorporated inside the graphcache. There is a bit of overlap, but I think there's sufficient reason to keep them separate.

Persistence

Here I'm having trouble seeing how we could effectively solve this. We have the schema now, so we could potentially just iterate over the whole schema and write it out that way, but that won't cover the case where people just want a persisted cache without the whole schema effort.

What scares me the most is that localStorage isn't the ideal candidate for a persisted cache, but by using IndexedDB we exclude about 5% of the browser population.
IndexedDB seems to ask for permission on Firefox if a blob is >50MB; beyond that there are no explicit size limitations, even for a single data field.

The max size for localStorage is 10MB, so I don't really think it is sufficient for big applications, since the initial cost of the data structure is also there. We could strip everything down, but how do we rebuild it then, maybe by bucket size?

This is a brain dump of what I've been thinking about and is by no means a final solution, but I think it could serve as an entry point to finding a solution for what feels like a really awesome feature.

Other relevant solution: https://github.com/redux-offline/redux-offline/tree/v1.1.0#persistence-is-key

This uses redux-persist, which in turn relies on IndexedDB. Since this is a reliable and widespread solution, I think it's safe to resort to IndexedDB and fall back to localStorage when needed.

For React Native we can easily resort to the AsyncStorage module. It seems that AsyncStorage isn't 100% safe either, since on Android it errors out when you exceed a 6MB write.

Introducing some way of leaving certain fields/queries out seems essential to me, since in the test described below we see that we hit the limits of localStorage pretty quickly.

Test

I did a small test with our current benchmarking where I serialised 50k entities and just wrote them to a JSON file to look at the size:

Entities: 14,260,659 bytes (14.26 MB)
Links: 664,618 bytes (0.66 MB)

This already exceeds the limits of localStorage and would cause a prompt in IndexedDB asking for permission to save this amount of data.

Code used:

// Assumes Node's fs module and the cache's Store/write helpers are in scope;
// the *Query documents and tenThousand* fixtures come from our benchmark data.
const fs = require('fs');

const urqlStore = new Store();
write(urqlStore, { query: BooksQuery }, { books: tenThousandBooks });
write(
  urqlStore,
  { query: EmployeesQuery },
  { employees: tenThousandEmployees }
);
write(urqlStore, { query: StoresQuery }, { stores: tenThousandStores });
write(urqlStore, { query: WritersQuery }, { writers: tenThousandWriters });
write(urqlStore, { query: TodosQuery }, { todos: tenThousandEntries });

const entities = JSON.stringify(urqlStore.records);
const links = JSON.stringify(urqlStore.links);

fs.writeFileSync('./entities.json', entities);
fs.writeFileSync('./links.json', links);

const { size: entityFileSize } = fs.statSync('./entities.json');
const { size: linkFileSize } = fs.statSync('./links.json');
console.log('ENTITIES', entityFileSize, entityFileSize / 1000000.0)
console.log('Links', linkFileSize, linkFileSize / 1000000.0)

Wild thoughts

I've been thinking about maybe making a distinction between a storage.native and a storage file. This way we could leverage web workers and application cache to write our results at runtime instead of just when we close the application.

Requirements

To implement persistent data we would have to implement an adapter with an API surface for getting, setting, and deleting. People can in turn pass in whatever storage they would like; this way, people who use something like PouchDB can write an adapter and just use that.
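A hypothetical sketch of such an adapter surface (the method names are assumptions for illustration, not a settled API):

// Hypothetical storage adapter: any backing store (IndexedDB, localStorage,
// AsyncStorage, PouchDB, ...) would implement these three methods.
interface StorageAdapter {
  read(key: string): Promise<string | undefined>;
  write(key: string, value: string): Promise<void>;
  delete(key: string): Promise<void>;
}

// Example: a trivial localStorage-backed adapter
const localStorageAdapter: StorageAdapter = {
  read: async key => localStorage.getItem(key) ?? undefined,
  write: async (key, value) => localStorage.setItem(key, value),
  delete: async key => localStorage.removeItem(key),
};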

We should decide on an approach for when to write. After every query? That would make us write after every optimistic write as well, which makes everything a tad harder, certainly since it's going to be hard to incrementally write changes from our HAMT structure. I think it's better to work with a hydrate-and-exit approach. This could make writes take more time, but in the end it would require a whole lot less logic.

We would need an approach that can keep certain portions of the state from being persisted, for example an exclude/include pattern. When we include something, that will be the only thing being cached; when we exclude something, everything but that exclusion will be cached. These options should be mutually exclusive.

When not supplied with a schema, how would we arrange for excluding data?

I drew up a diagram of how I expect this to happen; the code for the offline part was easy to write and is done.

[Diagram of the planned offline flow attached in the original issue]

Null value inconsistency

Request Query

query RoomQuery($roomId: ID!) {
  room(id: $roomId) {
   id
   name
  }
}

Server response

{
   "data":{
      "room":null
   }
}

Without graphcache

  1. { fetching:true, data:undefined }

  2. { fetching:false, data:{ __typename:"Query", room:null } }

With graphcache and a resolver of the form { Query: { room: (parent, args) => ({ __typename: "Room", id: args.id }) } }

  1. { fetching:true, data:undefined }

  2. { fetching:false, data:null }

Implement benchmark comparing our operations to apollo-cache-inmemory

We need some kind of reference point and performance budget to be able to estimate the impact of upcoming changes, especially before we address #4.

Some simple write and query benchmarks where we write bulk data and read it back would do the trick. Such a data sample should at least:

  • contain 10,000+ entities
  • cover 5 different types with 5 fields each
  • include nested lists with a depth of at least 3
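A rough sketch of such a benchmark run, assuming the cache's Store and its write/query operations can be driven directly (fixture generation is elided; generateBooks is a hypothetical helper):

const { performance } = require('perf_hooks');

// Hedged benchmark sketch: time bulk writes and reads against the store.
const writeStart = performance.now();
const store = new Store();
write(store, { query: BooksQuery }, { books: generateBooks(10000) });
console.log('bulk write (ms):', performance.now() - writeStart);

const readStart = performance.now();
query(store, { query: BooksQuery });
console.log('bulk read (ms):', performance.now() - readStart);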

Verify whether resolvers cover all use-cases

We'd like to collect more use-cases for resolvers. Right now we have some out of the box patterns, relayPagination and simplePagination, but we haven't explored all possible cases in which resolvers may make sense.

As part of this task we'd like to explore more use-cases and ensure that all GraphQL cache resolvers that can be implemented with other clients can also be implemented with Graphcache.

Implement a (new) mark & sweep GC

We want to add invalidation as a last resort to our normalised cache. It should be possible to do the following for instance:

updates: {
  Mutation: {
    deleteAllPosts: (_, __, cache) => {
      // Example of invalidating the user, a single entity on Query
      cache.invalidate([cache.resolve('Query', 'me')]);
      // Example of invalidating all posts matching a pattern (in this case just a typename)
      cache.invalidate(cache.resolveAll('Post'));
    }
  }
}

In such a case it'd become vital to have the GC back in place. We had a simple GC, which was not sufficiently flexible to work with our newer pessimism store.

A new plan that is efficient but needs a couple of changes is the following:

  1. we make sure that our key separator is encoded in all keys, so : is always the separator
  2. we create a mark & sweep GC that only deals with links (so we don't have to walk entities potentially?)
  3. every entity that doesn't have a path of connecting links from Query will be deleted
  4. to delete entities we match them by part of their key (hence the change in 1.)

This will also need a new iterable method on pessimism, possibly toStream.

Edit: A consideration that'll be harder to think about is optimistic entries. How do we ensure that optimistic entries will not affect the GC? Do we differentiate between iterating over optimistic and non-optimistic values in pessimism? Do we iterate over all of them and make each iteratee [key, value, isOptimistic]?
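A minimal sketch of the mark & sweep idea from the plan above, assuming links are available as a flat Map from "entityKey:fieldKey" to the key(s) they point to (illustrative only, not the pessimism API):

// Mark: walk links starting from Query, collecting every reachable entity key.
// Sweep: delete every entity whose key was never marked.
const collectGarbage = (links, entities) => {
  const reachable = new Set(['Query']);
  const stack = ['Query'];

  while (stack.length) {
    const key = stack.pop();
    for (const [linkKey, targets] of links) {
      // linkKey is "<entityKey>:<fieldKey>", hence the encoded separator from step 1
      if (!linkKey.startsWith(key + ':')) continue;
      for (const target of [].concat(targets)) {
        if (target && !reachable.has(target)) {
          reachable.add(target);
          stack.push(target);
        }
      }
    }
  }

  for (const entityKey of entities.keys()) {
    if (!reachable.has(entityKey)) entities.delete(entityKey);
  }
};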

Optimistic updates lead to weird partial results (`null` nodes)

Trying to use optimistic updates currently produces weird results for me. I'll try to add a reproduction later, but not sure when I'll get to that 🙈

Query:

query Tree($rootId: ID!) {
    root(id: $rootId) {
        id
        items {
            edges {
                node {
                    id
                    name
                }
            }
        }
    }
}

Mutation:

mutation UpdateItem($id: ID!, $name: String) {
    updateItem(input: {id: $id, name: $name}) {
        item {
            id
            name
        }
    }
}

Optimistic:

updateItem: (variables, cache) => {
    const {input} = variables

    const optimistic = {
        item: {
            ...input,
            __typename: 'Item'
        },
        __typename: 'updateItemPayload'
    };

    return optimistic;
}

The initial optimistic update correctly updates the Query data. The issue seems to arise after that, and manifests in a partial result, with only the optimistic Item being set, and all other node fields being set to null.

I don't really understand why that's the case, maybe I'm just using it incorrectly?

Populate exchange + support for queries with variables

Looking to explore how we can extract nodes in a Query document which contain variables (in the populate exchange).

Options:

  • Ignore nodes with variables
  • Auto populate variables with last used values
  • Add a decorator to allow the user to choose which variables to include/exclude, etc.

CC @imranolas

Passing introspection schema breaks the client

The client breaks with: Uncaught (in promise) Invariant Violation: Invalid type: The type `undefined` is not an object in the defined schema, but the GraphQL document is traversing it

Reproducible with the simplest schema:

type Post {
  id: Int!
  author: String!
}

type Query {
  post(id: Int!): Post!
}

The cache is set up like this:

exchanges: [
  dedupExchange,
  cacheExchange({
    schema: introspectionSchema, // see console.log below
  }),
  fetchExchange,
],

introspectionSchema looks like this (console.log screenshot attached in the original issue).
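For reference, a hedged sketch of how an introspection result is typically produced for the schema option, assuming it expects the data of a standard introspection query (the endpoint URL is a placeholder):

import { getIntrospectionQuery } from 'graphql';

// Fetch the standard introspection result from the API. The cacheExchange's
// `schema` option is assumed to expect this result's `data` (an object with a
// top-level __schema key); passing a differently shaped object could explain
// the "type `undefined` is not an object in the defined schema" invariant.
const introspectionSchema = await fetch('https://example.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: getIntrospectionQuery() }),
})
  .then(res => res.json())
  .then(result => result.data);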

not recognizing id

For my whole app it is working great, but for this one page it keeps telling me that it needs an id to cache the request and keeps re-sending the query. I am querying for an id on every subfield and have also verified that my backend is sending those. I thought it might be connected to the arrays that are requested. This is the query:

const GET_PLACE = gql`
  query GET_PLACE($id: ID!) {
    place(where: { id: $id }) {
      id
      title
      description
      createdAt
      updatedAt
      isTemporary
      startTime
      endTime
      categories {
        id
        urban
        remote
        activity
        viewpoint
        concert
        fleaMarket
        foodMarket
        artwork
        streetFestival
        artsy
        charity
      }
      accessibility
      cleanliness
      crowdiness
      view
      relaxing
      visits
      accessibilityCount
      cleanlinessCount
      crowdinessCount
      viewCount
      relaxingCount
      location {
        id
        lat
        lng
      }
      images {
        id
        original
      }
      comments {
        id
        text
        createdAt
        author {
          id
          name
        }
      }
      author {
        id
      }
    }
  }
`

and this is the schema:

type Place {
  id: ID!
  createdAt: DateTime!
  updatedAt: DateTime!
  title: String!
  description: String
  location: GeoJson!
  author: User
  comments(
    where: CommentWhereInput
    orderBy: CommentOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): [Comment!]
  categories: Categories
  ratings(
    where: RatingWhereInput
    orderBy: RatingOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): [Rating!]
  images(
    where: ImageWhereInput
    orderBy: ImageOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): [Image!]
  isTemporary: Boolean!
  startTime: DateTime
  endTime: DateTime
  visits: Int
  accessibility: Float
  view: Float
  cleanliness: Float
  crowdiness: Float
  accessibilityCount: Int
  viewCount: Int
  cleanlinessCount: Int
  crowdinessCount: Int
  relaxing: Float
  relaxingCount: Int
  source: String!
  posted: Boolean
  interaction: Float
}

Directive composition

Idea

Related to #146, I'd like to extend our traverser to support a certain set of directives, like the @extra one proposed in that issue. When passed to the cacheExchange, it would automatically stop at these directives, strip them out, and execute a callback allowing you to return an altered operation, ...

This would, for example, allow you to add more variables, or allow the populateExchange to become a HOC passed into this object.

Rough example

cacheExchange({
  directives: {
    extra: (op, directiveArgs) => ({ ...op, variables: { ...op.variables, ...directiveArgs } })
  }
})

This is very rough and I'll most likely alter this while hacking on it, but I would like to send it in to get some more ideas.

Handle errors in updates

I think there should be a clean way to handle mutation errors inside the update functions. Right now we only have access to result: Data. We should either provide the errors as well, or just not call the update if there's an error?

Dynamic key generation / ignore Relay pagination helpers

Using a Relay-based (spec-compliant) API with pagination results in lots of warnings, because all the Connection and Edge objects do not have an id. Maybe we could add a way to dynamically generate keys with a function that gets passed the type, or just add an option to ignore all these pagination helper types?
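As a stopgap, the warnings can likely be silenced with a keys config that embeds those helper types directly on their parents (the typenames below are hypothetical Relay-style names; substitute the schema's actual Connection/Edge types):

cacheExchange({
  keys: {
    // returning null embeds these objects on their parent instead of normalising them
    ItemConnection: () => null,
    ItemEdge: () => null,
    PageInfo: () => null,
  },
});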

Cache not being hit for single items, that were previously requested in a list

I was under the impression that the following queries should result in the cache being hit, but it just sends two requests. (I load the ItemList in the main screen and load Item inside a subview after clicking on a list item.) The __typenames are correctly requested and match.

query ItemList {
    items {
        edges {
            node {
                id
                name
            }
        }
    }
}
query Item($id: ID!) {
    item(id: $id) {
        id
        name
    }
}
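This looks like a cache miss on Query.item: without help, the cache can't know that item(id: ...) points at the Item entity it already normalised from the list. A hedged sketch of the usual fix, a resolver that links the field to the existing entity:

cacheExchange({
  resolvers: {
    Query: {
      // Tell the cache that `item(id)` resolves to the Item entity with that id,
      // so data fetched via ItemList can satisfy the Item query from the cache.
      item: (parent, args) => ({ __typename: 'Item', id: args.id }),
    },
  },
});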

Forward is not a function when used with react-ssr-prepass

I am using Next.js together with react-ssr-prepass.
I wanted to use this cache in my project. I only replaced the standard cache with this one and did not apply any config. When starting my site, I got the following error:

forward is not a function

TypeError: forward is not a function
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/@urql/exchange-graphcache/dist/urql-exchange-graphcache.js:1038:32
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:323:14
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:1072:7
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at captureTalkback (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:168:10)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:893:14
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:1273:7
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:973:14
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:933:14
    at _1 (/Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:46:12)
    at /Users/bjoern/projects/Plezzles/frontend/node_modules/wonka/dist/wonka.js:323:14

I am using the latest versions of urql-exchange-graphcache and urql.
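One possible cause worth checking (an assumption, not a confirmed diagnosis): unlike urql's default document cache, the graphcache cacheExchange is a factory and has to be called, even with no config, before being placed in the exchanges array:

import { createClient, dedupExchange, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

const client = createClient({
  url: '/graphql',
  exchanges: [
    dedupExchange,
    cacheExchange({}), // note: called with (empty) options, not passed as a bare reference
    fetchExchange,
  ],
});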
