graph-gophers / dataloader
Implementation of Facebook's DataLoader in Golang
License: MIT License
I'm using dataloader v7 with generics and I have a simple test func:
func TestBatchByIdDataloader(t *testing.T) {
    requests := []string{"100", "200", "200", "300", "100", "200", "200", "200", "200", "200", "200"}
    loader := dl2.NewBatchedLoader(func(ctx context.Context, ids []string) []*dl2.Result[string] {
        t.Logf("%+v, %v", ids, len(ids))
        results := make([]*dl2.Result[string], len(ids))
        for i := 0; i < len(ids); i++ {
            results[i] = &dl2.Result[string]{Data: "v"}
        }
        return results
    })
    for _, v := range requests {
        thunk := loader.Load(context.TODO(), v)
        _, _ = thunk()
    }
}
The test output is:
=== RUN TestBatchByIdDataloader
dataloader_test.go:42: [100], 1
dataloader_test.go:42: [200], 1
dataloader_test.go:42: [300], 1
--- PASS: TestBatchByIdDataloader (0.05s)
It removes the duplicate requests for id 200, but why don't I get the batched slice including all 3 IDs?
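For illustration, here is a toy loader (not the library's implementation) that mimics the batching contract: Load only queues a key, and the whole pending batch is dispatched the first time an unresolved thunk is called. Resolving each thunk inside the loop therefore produces batches of one, while collecting the thunks first produces a single batch:

```go
package main

import "fmt"

// toyLoader is a hypothetical stand-in for the real loader: Load only queues
// the key, and everything queued so far is dispatched as one batch the first
// time an unresolved thunk is called.
type toyLoader struct {
	pending []string
	results map[string]string
}

func newToyLoader() *toyLoader { return &toyLoader{} }

func (l *toyLoader) Load(key string) func() string {
	l.pending = append(l.pending, key)
	return func() string {
		if v, ok := l.results[key]; ok {
			return v // already fetched in an earlier batch
		}
		fmt.Printf("batch: %v\n", l.pending) // dispatch everything queued so far
		if l.results == nil {
			l.results = map[string]string{}
		}
		for _, k := range l.pending {
			l.results[k] = "v" // fake fetch
		}
		l.pending = nil
		return l.results[key]
	}
}

func main() {
	// Resolving immediately inside the loop: each batch holds a single key.
	eager := newToyLoader()
	for _, k := range []string{"100", "200", "300"} {
		_ = eager.Load(k)()
	}

	// Collecting thunks first, resolving afterwards: one batch with all keys.
	lazy := newToyLoader()
	var thunks []func() string
	for _, k := range []string{"100", "200", "300"} {
		thunks = append(thunks, lazy.Load(k))
	}
	for _, t := range thunks {
		_ = t()
	}
}
```

Calling thunk() inside the Load loop blocks until that key's batch resolves, so each iteration closes its own batch before the next key is even queued.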
In ThunkMany, should the returned error array have the same length as the data array? How would we know which errors relate to which interface{} otherwise?
The cache in dataloader alone is an incredibly useful tool for backing up requests, but one of the things I'm interested in, given the added complexity of a data loader, is whether the cache is actually being useful. I'd like to track the hit rate of my cache, but the current tracer does not provide enough information to inject metrics.
Would there be any way to add metrics tracking of some form? (I use prometheus metrics so direct support of those would be nice, but an adjustable interface like used for tracing would likely be better.)
Load() may be called multiple times before batchFn is called. If Load() is called with various contexts, which one is ultimately passed to batchFn?
What do you think about improving the panic handling? I was thinking of a custom handler or passing an error to all results in case of a panic? Or do you believe one should do this in the batchFunc itself?
In v1-v4 of this library, LoadMany() does not return results in deterministic order. That is, a call to LoadMany("A", "B") can give results [ResultB, ResultA].
To reproduce, run the unit tests multiple times:
$ go test -count=1000 .
--- FAIL: TestLoader (0.00s)
--- FAIL: TestLoader/test_LoadMany_returns_len(errors)_==_len(keys) (0.02s)
dataloader_test.go:103: Expected an error on the first item loaded
dataloader_test.go:107: Expected second and third errors to be nil
It seems like the cause may be that the thunk creation happens concurrently in a goroutine, so the batch of keys passed to the batch function may end up out of order.
Hello all,
I receive a panic when using the loader with the NoCache option. The same code works fine with InMemoryCache passed or by default.
Has anyone managed to make NoCache work in v6.0.0?
ctx = context.WithValue(ctx, k, dataloader.NewBatchedLoader(helloBatchFunction, dataloader.WithCache(&dataloader.NoCache{})))
Every time I invoke the Load(ctx, key) function, I receive an internal panic from the dataloader package.
panic({0x18b7660, 0x277de20})
/usr/local/go/src/runtime/panic.go:844 +0x264
github.com/myorg/myrepo/internal/loader.(*pairsTournamentLoader).loadBatch(0x40006b3380, {0x1c5e9c8, 0x4000daed20}, {0x4000dac9c0, 0x3, 0x4})
/app/internal/loader/hello_loader.go:73 +0x600
github.com/graph-gophers/dataloader.(*batcher).batch.func1(0x4000de8df0, 0x4000daeea0, {0x1c5e9c8, 0x4000daed20}, {0x4000dac9c0, 0x3, 0x4}, 0x4000f29e80)
/go/pkg/mod/github.com/graph-gophers/[email protected]+incompatible/dataloader.go:432 +0xb4
github.com/graph-gophers/dataloader.(*batcher).batch(0x4000daeea0, {0x1c5e9c8, 0x4000daed20})
/go/pkg/mod/github.com/graph-gophers/[email protected]+incompatible/dataloader.go:433 +0x320
created by github.com/graph-gophers/dataloader.(*Loader).Load
/go/pkg/mod/github.com/graph-gophers/[email protected]+incompatible/dataloader.go:241 +0x568
I would like to understand the reasoning behind this BatchFunc type, especially why it returns a slice of pointers, as that could potentially lead to memory leaks; the opposite could even perform better*. I think it would be better for it to be just a slice of values.
*some reference:
https://groups.google.com/g/golang-nuts/c/C2Ir0GI2gEk/m/fO3Zte1sAgAJ
@mjq discovered an issue with a unit test while fixing an incorrect CI configuration in #73:
I can reproduce this failure, but only rarely - on my laptop, the test failed only 3 out of 10000 runs.
$ go test -v -race -coverprofile=coverage.txt -covermode=atomic -count 10000 | grep FAIL
--- FAIL: TestLoader (0.00s)
--- FAIL: TestLoader/allows_clearAll_values_in_cache (0.02s)
--- FAIL: TestLoader (0.00s)
--- FAIL: TestLoader/allows_clearAll_values_in_cache (0.02s)
--- FAIL: TestLoader (0.00s)
--- FAIL: TestLoader/allows_clearAll_values_in_cache (0.02s)
FAIL
FAIL github.com/graph-gophers/dataloader/v6 390.973s
We should dig into what causes this test to fail non-deterministically and then fix the issue.
Hello, I've been tinkering with your library, trying to do what the Facebook JS version of DataLoader does:
var UserType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    name: { type: GraphQLString },
    bestFriend: {
      type: UserType,
      resolve: user => userLoader.load(user.bestFriendID)
    },
    friends: {
      args: {
        first: { type: GraphQLInt }
      },
      type: new GraphQLList(UserType),
      resolve: (user, { first }) => queryLoader.load([
        'SELECT toID FROM friends WHERE fromID=? LIMIT ?', user.id, first
      ]).then(rows => rows.map(row => userLoader.load(row.toID)))
    }
  })
})
Is it possible to resolve friends the way the Facebook sample does, using your library? If not, what is the alternative?
Union query cannot be merged in graphql
var personType = graphql.NewObject(graphql.ObjectConfig{
    Name: "Person",
    Fields: graphql.Fields{
        "id": &graphql.Field{
            Type: graphql.Int,
        },
        "name": &graphql.Field{
            Type: graphql.String,
        },
        "age": &graphql.Field{
            Type: graphql.Int,
        },
        "sex": &graphql.Field{
            Type: graphql.String,
        },
        "friends": &graphql.Field{
            Type: graphql.NewList(friendType),
            Resolve: func(params graphql.ResolveParams) (interface{}, error) {
                idQuery := params.Source.(Person).ID
                i := strconv.Itoa(int(idQuery))
                thunk := friendLoader.Load(context.TODO(), dataloader.StringKey(i))
                return thunk()
            },
        },
    },
})
Query string:
{personList(ids: [1,2]){id,name,age,sex,friends{name}}}
After querying multiple Persons, the friends of each Person will be queried. The thunk is called in the Resolve function of friends, so batchFn is called multiple times, each time passing a single id.
When applying the no-cache policy, I found that duplicate keys are passed to batchFn.
After I query the keys from the database and return the results, an error is thrown telling you that the sizes of the results and keys do not match.
Please tag an actual semver release.
The correct version would be v5.0.0; the current tag v5 is not valid for Go modules.
I've found the need to pass composite keys to certain batch load functions.
So far, I've been concatenating key parts together into a string before passing the keys to the load function, and then separating the key parts in the load function.
As you can imagine, this is pretty fragile.
I'm wondering if it would be plausible to change to a keys interface{} argument, to allow passing structured composite keys.
I'm on the fence about whether this change would be worth the added complexity for 90% of use-cases.
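For what it's worth, later versions of this library replace plain string keys with a Key interface along the lines below, which makes structured composite keys possible without string concatenation. The interface is redeclared locally here so the sketch is self-contained:

```go
package main

import "fmt"

// Key approximates the interface used by dataloader v5+: String() feeds the
// cache, Raw() lets the batch function recover the structured key.
type Key interface {
	String() string
	Raw() interface{}
}

// UserPostKey is a hypothetical composite key carrying two ID parts.
type UserPostKey struct {
	UserID int
	PostID int
}

func (k UserPostKey) String() string   { return fmt.Sprintf("%d:%d", k.UserID, k.PostID) }
func (k UserPostKey) Raw() interface{} { return k }

func main() {
	keys := []Key{UserPostKey{UserID: 1, PostID: 10}, UserPostKey{UserID: 2, PostID: 20}}
	for _, k := range keys {
		// Inside a batch function, recover the structured parts via Raw()
		// instead of parsing a concatenated string.
		ck := k.Raw().(UserPostKey)
		fmt.Println(k.String(), ck.UserID, ck.PostID)
	}
}
```

The fragile split/join step disappears: the cache still gets a stable string, while the batch function works with typed fields.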
Is it possible to pass on the query information (specifically the fields requested) to the batch loader function? With my current implementation, the creation of the loader is done outside of the resolver (like other samples) and there is no way to pass the query information to the batch function from within the resolver function.
Hi! I am trying to use this library with graphql-go but I am struggling to get the batching to work when doing list queries that reference foreign-key objects. Here's an example schema of what I mean:
type Query {
meetups: [Meetup]!
}
type User {
ID: Int!
Name: String!
}
type Meetup {
ID: Int!
Title: String!
Owner: User!
}
In the MeetupResolver i have this:
func (r *MeetupResolver) Owner() *UserResolver {
    thunk := r.loader.Load(r.meetup.OwnerID)
    data, _ := thunk()
    user, _ := data.(model.User)
    return NewUserResolver(&user)
}
When querying meetups and asking for the Owner of the objects, it still issues a separate database query for each of the different owners, like this:
[2017-05-16 16:41:13] [0.85ms] SELECT * FROM "users" WHERE ("id" IN ('6'))
[2017-05-16 16:41:13] [1.34ms] SELECT * FROM "users" WHERE ("id" IN ('7'))
Is there a way to batch all the owner ids and then send the query to the database?
> scylla-cloud service api
Error: error creating handler: db.NewMySQLHandler: NewGormDB: gorm.Open: x509: certificate has expired or is not yet valid: current time 2022-08-11T13:27:09+03:00 is after 2022-08-11T07:45:20Z
openssl x509 -dates -noout -in /tmp/ca.cert
notBefore=Jul 12 07:45:20 2022 GMT
notAfter=Aug 11 07:45:20 2022 GMT
ctx, finish := b.tracer.TraceBatch(originalContext, keys)
defer finish(items) // this should be moved
func() {
    defer func() {
        if r := recover(); r != nil {
            panicErr = r
            if b.silent {
                return
            }
            const size = 64 << 10
            buf := make([]byte, size)
            buf = buf[:runtime.Stack(buf, false)]
            log.Printf("Dataloader: Panic received in batch function:: %v\n%s", panicErr, buf)
        }
    }()
    items = b.batchFn(ctx, keys)
}()
// defer finish(items)
Is it possible/advisable to use the dataloader library to store data in a persistent, shared cache like Redis?
I've implemented a Redis package that satisfies the Cache interface but I'm hitting a snag when I want to set a value in the cache. The request hangs waiting for the thunk to be resolved.
func (c *Cache) Set(_ context.Context, key dataloader.Key, value dataloader.Thunk) {
    v, err := value()
    if err != nil { // this line is never reached; the request hangs.
        fmt.Println(err)
        return
    }
    if err := c.Client.Set(key.String(), v, 0).Err(); err != nil {
        fmt.Println(err)
    }
}
I'm clearly doing something wrong. Any advice greatly appreciated. Thanks.
Does it make sense to have the Cache::Delete return a boolean?
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/delete
Can we add a LICENSE file to this library? Maybe something like the MIT license? https://tldrlegal.com/license/mit-license
Hello!
I'm noticing that when the request context is cancelled during an operation, the thunk related to that request panics when it is resolved.
This seems to be due to the thunk not having a value when the return statement in this block is hit (in dataloader.go):
thunk := func() (interface{}, error) {
    result.mu.RLock()
    resultNotSet := result.value == nil
    result.mu.RUnlock()
    if resultNotSet {
        result.mu.Lock()
        if v, ok := <-c; ok {
            result.value = v
        }
        result.mu.Unlock()
    }
    result.mu.RLock()
    defer result.mu.RUnlock()
    return result.value.Data, result.value.Error
}
When this thunk is cached, it then results in a panic in subsequent requests until the cache is cleared.
Is there a suggested way to guard against this? I've tried the following in my cache implementation to attempt to guard against caching these 'invalid' thunks, but it results in a deadlock:
func (c *Cache) Set(ctx context.Context, key dataloader.Key, value dataloader.Thunk) {
    defer func() {
        if r := recover(); r != nil {
            c.C.Clear()
        }
    }()
    value()
    c.C.SetWithTTL(key.String(), value, 0, time.Duration(c.MaxAge*int(time.Hour)))
}
Dataloaders present a unique challenge with tracing, as batching is one of the few instances where the invariant that a span has a single parent is broken. Many different parent spans end up producing just a single child span which performs the operation for all of them.
To support this, most tracing standards have a concept of links (e.g. OpenTelemetry), where a span can specify that it has links to more than one parent span. This is important when trying to compute metrics like the percentage of time spent in an operation; without links, only one of the many parent spans gets its time discounted from the overall operation time, as the rest have no child span.
In order for this to work, the batch function, when creating the span for its batch work, must have access to the span (context) of each of the calls to Load() or LoadMany(), which is currently not possible given the existing interface and code architecture. I'm curious to hear whether this is a problem for anyone else using the current tracing architecture.
Go will add generics in the next release (1.18).
This feature can bring significant benefits like static type checking for users.
First off, thanks for this awesome library!
I've got a question that might be more generalized to dataloader as a whole, but I would love to know if you have any insight. I'm using a Postgres backend for my storage, and in one of my resolvers I may pass in a list of IDs that I'm interested in getting. Currently I'm doing a SELECT * to grab all the fields for those items, even if I may not need all of them to satisfy the query, but I'm wondering if I can do better.
The naive solution would be to have the leaf node for each resolver use dataloader to load only what was required from the database, but if a query accesses a lot of fields that probably won't be very efficient, as it would hit the database once for each field I'm accessing.
Is there any way that I can still batch my queries through dataloader, but have the batch function dynamically load only what is specifically being requested for this query? I was thinking maybe I could encode the fields I wanted into the Key object, but I'm not sure that would let me properly batch things together, and I imagine it might have consequences for the cache.
Thanks!
Could you provide a simple example with any database using the IN clause?
I'm new to dataloader, so I'm mostly deducing that the keys are the IDs that the resolver provides, but I'm not sure.
Thank you
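A sketch of the usual shape: the batch function builds one IN-clause query covering every batched key. The table and column names here are made up for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// buildInQuery is a hypothetical helper showing the query a batch function
// would run: a single statement with an IN clause covering all batched keys.
func buildInQuery(ids []string) (string, []interface{}) {
	placeholders := make([]string, len(ids))
	args := make([]interface{}, len(ids))
	for i, id := range ids {
		placeholders[i] = "?"
		args[i] = id
	}
	query := "SELECT id, name FROM users WHERE id IN (" +
		strings.Join(placeholders, ",") + ")"
	return query, args
}

func main() {
	q, args := buildInQuery([]string{"1", "2", "3"})
	fmt.Println(q) // SELECT id, name FROM users WHERE id IN (?,?,?)
	fmt.Println(len(args))
	// With database/sql you would then run db.QueryContext(ctx, q, args...),
	// and map each returned row back to its position in the ids slice, since
	// the batch function must return exactly len(ids) results, in key order.
}
```

The row-to-key mapping step matters: rows can come back in any order (or be missing), so a map from id to row is the usual intermediate structure.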
Can I use this with gqlgen?
BigCache:
func (c *BigCache) Get(key string) ([]byte, error) {
    // ...
}

func (c *BigCache) Set(key string, entry []byte) error {
    // ...
}
But the cache in dataloader:
func (c *NoCache) Get(context.Context, Key) (Thunk, bool) { return nil, false }
func (c *NoCache) Set(context.Context, Key, Thunk) { return }
So how do I implement BigCache in dataloader? How do I convert a Thunk to []byte and []byte back to a Thunk?
Looking forward to your reply!
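A Thunk is a function, so it can't be stored as bytes directly. One workaround, sketched below with a locally redeclared Thunk type, is to resolve the thunk and serialize its data on Set, and wrap stored bytes back into an already-resolved thunk on Get. Note that resolving inside Set forces the batch and this sketch does not round-trip errors, so it is a starting point, not a drop-in adapter:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-in for dataloader's Thunk type, for illustration only.
type Thunk func() (interface{}, error)

// thunkToBytes resolves the thunk and serializes its value; an error from
// the thunk is returned rather than stored.
func thunkToBytes(t Thunk) ([]byte, error) {
	v, err := t()
	if err != nil {
		return nil, err
	}
	return json.Marshal(v)
}

// bytesToThunk wraps previously stored bytes in an already-resolved thunk.
func bytesToThunk(b []byte) Thunk {
	return func() (interface{}, error) {
		var v interface{}
		err := json.Unmarshal(b, &v)
		return v, err
	}
}

func main() {
	orig := Thunk(func() (interface{}, error) { return map[string]string{"name": "gopher"}, nil })
	b, _ := thunkToBytes(orig)
	fmt.Println(string(b)) // {"name":"gopher"}
	v, _ := bytesToThunk(b)()
	fmt.Println(v)
}
```

One caveat: JSON round-tripping loses the concrete Go type (a struct comes back as map[string]interface{}), so a typed codec such as gob may fit better depending on what the loader returns.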
Any way to sort the Keys passed to batchFn?
I got here after reading through the issues in neelance/graphql-go
It is not entirely clear to me how this would solve the n+1 request issue, i.e. how can you use this to perform fewer database calls?
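The core idea is that N resolver calls queue their keys, and a single batch function call fetches them all. A self-contained sketch of the query-count difference, where the fetch function is a fake in-memory stand-in for a database:

```go
package main

import "fmt"

// queryCount tracks how many round-trips the "database" sees.
var queryCount int

// fetchUsersByIDs fakes a database query: one call equals one round-trip,
// regardless of how many IDs it covers.
func fetchUsersByIDs(ids []string) map[string]string {
	queryCount++
	out := map[string]string{}
	for _, id := range ids {
		out[id] = "user-" + id
	}
	return out
}

func main() {
	// Without batching: one query per resolver invocation (the n+1 pattern).
	for _, id := range []string{"1", "2", "3"} {
		fetchUsersByIDs([]string{id})
	}
	fmt.Println("naive queries:", queryCount) // 3

	// With batching: the loader collects all keys first, then one query.
	queryCount = 0
	fetchUsersByIDs([]string{"1", "2", "3"})
	fmt.Println("batched queries:", queryCount) // 1
}
```

The dataloader's job is exactly that collection step: Load calls within one tick of work accumulate keys, and the batch function receives them all at once.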
To me, it's confusing that BatchFunc returns []*Result. I would expect it to return ([]interface{}, error) instead. That would be easier to use and closer to the https://github.com/facebook/dataloader implementation, where batch functions can also only throw once, not once per result. If my BatchFunc contains a consolidated DB query (which in most cases it will), I can only get one error from that query and would then have to do the following:
results, err := myDBQuery(keys)
if err != nil {
    return []*dataloader.Result{{Error: err}}
}
I think that's not nice, because it's not clear that the error has anything to do with the first result.
The suggestion would result in the following changes:
-type ThunkMany func() ([]interface{}, []error)
+type ThunkMany func() ([]interface{}, error)
-type BatchFunc func(context.Context, []string) []*Result
+type BatchFunc func(context.Context, []string) ([]interface{}, error)
Currently, I have to do the following for LoadMany, which is not nice either:
results, errs := myLoader.LoadMany(ctx, keys)
var err error
for _, e := range errs {
    if e != nil {
        err = e
        break
    }
}
if err != nil {
    // handle data error
}
Obviously this is a personal preference and not at all technical in nature, but how do you feel about renaming your type Interface interface to type DataLoader interface, and type Loader struct to type DefaultDataLoader struct? I don't have anything concrete to say one way or another is best practice, but it seems to read more nicely, especially if you're just glancing over the godocs of the package to quickly get an understanding of what things are and how they're connected. Just my $0.02.
https://github.com/nicksrandall/dataloader/blob/v4/dataloader.go#L27
There's no reason the key should be a string; every underlying key is interface{}.
I've been having trouble wrapping my head around how pagination works with a batched data loader.
scalar Cursor # an opaque string which identifies a resource
query GetUsers($first: Int, $after: Cursor) {
users(first: $first, after: $after) {
name
}
}
Do the same pagination parameters $first and $after get passed into every possible batch? Is there some page accounting that has to happen a layer above the data loader layer?
@nicksrandall I'd love to get your thoughts.
The go get command can only fetch the v4 package, not v5.
But this can help: go get -u gopkg.in/nicksrandall/dataloader.v5, then
import "gopkg.in/nicksrandall/dataloader.v5"
Hi,
In the README.md, I found:
This implementation contains a very basic cache that is intended only to be used for short lived DataLoaders (i.e. DataLoaders that only exist for the life of **an http request**). You may use your own implementation if you want.
But when I tested, I found that it caches across many requests. So I have these questions:
Thanks,
Linh
Right now I can technically use this library with projects using go modules, but because it doesn't have a go module initialized, we see:
github.com/graph-gophers/dataloader v5.0.0+incompatible
I'm not sure if all the semver / module guarantees are properly respected for non-module-based projects.
Could I add go.mod / go.sum files to the repo? Happy to open a PR.
I could also fork the repo and keep them in my fork if you prefer.
I am trying to use dataloader with my go application that runs on google's app engine. After I load the dataloader lib into my application, I got a number of errors from golang compiler. Any idea or tips on how to solve the issue? Thanks a lot!
dataloader.go:183: cannot use NewCache() (type *InMemoryCache) as type Cache in assignment:
*InMemoryCache does not implement Cache (wrong type for Delete method)
have Delete(context.Context, interface {}) bool
want Delete(context.Context, Key) bool
inMemoryCache.go:24: cannot use items (type map[interface {}]Thunk) as type map[string]Thunk in field value
inMemoryCache.go:31: key.Key undefined (type Key has no field or method Key)
inMemoryCache.go:41: key.Key undefined (type Key has no field or method Key)
inMemoryCache.go:51: cannot use key (type interface {}) as type Key in argument to c.Get:
interface {} does not implement Key (missing Raw method)
inMemoryCache.go:54: key.Key undefined (type interface {} is interface with no methods)
Why does keys have to be a []string? I am working with https://labix.org/mgo where all IDs are of type bson.ObjectId. This makes using dataloader somewhat impractical, because I have to convert all bson.ObjectIds into strings before piping them into the Load function of dataloader, and in there I have to convert them back to bson.ObjectIds to execute the MongoDB queries. If the Load function would accept the keys as []interface{} instead of []string, that would make my life a lot easier, because I could pipe the bson.ObjectIds into it directly.
The context.Context package works the same way: there, a key can also be interface{}.
I'm building a project with graphql-go and need a dataloader implementation. After a bit of research, I found your library, and it seems like you built it with graphql-go support in mind. But I don't understand how to use the two libraries together. Can you help me with some sample code or ideas on how they work together? Do I need to modify the graphql-go source code to use this library?
Thank you.
I'm wondering if you have any examples of the usage of Prime().
I'd like to write loaders that can load a resource by either the id or some other field, and then prime the cache with the result for both types of loader.
type ResourceLoader struct {
    client myservice.Client
}

func (l *ResourceLoader) LoadByIDs(ctx context.Context) dataloader.BatchFunc {
    return func(ids []string) []*dataloader.Result {
        var results []*dataloader.Result
        resourcesByID, err := l.client.GetAllByIDs(ctx, ids) // returns a map[string]Resource
        if err != nil {
            // truncated: populate `results` with `len(ids)` errors
            return results
        }
        for _, id := range ids {
            result := &dataloader.Result{}
            if r, ok := resourcesByID[id]; ok {
                result.Data = r
                // TODO: Prime dataloader cache with this result for key `r.Name`
            } else {
                result.Error = fmt.Errorf("unable to find resource with id %q", id)
            }
            results = append(results, result)
        }
        return results
    }
}

func (l *ResourceLoader) LoadByNames(ctx context.Context) dataloader.BatchFunc {
    return func(names []string) []*dataloader.Result {
        var results []*dataloader.Result
        resourcesByName, err := l.client.GetAllByNames(ctx, names) // returns a map[string]Resource
        if err != nil {
            // truncated: populate `results` with `len(names)` errors
            return results
        }
        for _, name := range names {
            result := &dataloader.Result{}
            if r, ok := resourcesByName[name]; ok {
                result.Data = r
                // TODO: Prime dataloader cache with this result for key `r.ID`
            } else {
                result.Error = fmt.Errorf("unable to find resource with name %q", name)
            }
            results = append(results, result)
        }
        return results
    }
}
In this example, ResourceLoader is attached to the request context via dataloader.NewBatchedLoader(...).
When I run go test -v -race, it occasionally hangs. I have turned it off until I can figure out why. Any help is appreciated!
Currently I pass in a context.Context via closure. I'm curious if anyone has a better way of passing the context to the batch function.
func Attach(ctx context.Context) dataloader.BatchFunc {
    return func(keys []string) []*dataloader.Result {
        // the batch function, now with 100% more context
    }
}
Are there any downsides to changing the signature of BatchFunc to include the context as an argument?
- type BatchFunc func([]string) []*Result
+ type BatchFunc func(context.Context, []string) []*Result
Tests fail to pass when specifying the race detector to run.
go test -v -cover -race
results in
Found 13 data race(s)
exit status 66
FAIL github.com/nicksrandall/dataloader 1.059s
if the tests don't deadlock.
It may be good to add this option to the TravisCI runner.
Found when running tests like so:
$ go test -count=1000 -race .
==================
WARNING: DATA RACE
Read at 0x00c000222008 by goroutine 22:
github.com/graph-gophers/dataloader.(*InMemoryCache).Get()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/inMemoryCache_go19.go:32 +0x56
github.com/graph-gophers/dataloader.TestLoader.func16()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/dataloader_test.go:337 +0x2ee
testing.tRunner()
/usr/local/Cellar/go/1.12.4/libexec/src/testing/testing.go:865 +0x163
Previous write at 0x00c000222008 by goroutine 77:
github.com/graph-gophers/dataloader.(*InMemoryCache).Clear()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/inMemoryCache_go19.go:52 +0xa1
github.com/graph-gophers/dataloader.(*Loader).sleeper()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/dataloader.go:356 +0x260
Goroutine 22 (running) created at:
testing.(*T).Run()
/usr/local/Cellar/go/1.12.4/libexec/src/testing/testing.go:916 +0x65a
github.com/graph-gophers/dataloader.TestLoader()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/dataloader_test.go:314 +0x31e
testing.tRunner()
/usr/local/Cellar/go/1.12.4/libexec/src/testing/testing.go:865 +0x163
Goroutine 77 (finished) created at:
github.com/graph-gophers/dataloader.(*Loader).Load()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/dataloader.go:242 +0xa58
github.com/graph-gophers/dataloader.TestLoader.func16()
/Users/tonyghita/go/src/github.com/graph-gophers/dataloader/dataloader_test.go:318 +0xc5
testing.tRunner()
/usr/local/Cellar/go/1.12.4/libexec/src/testing/testing.go:865 +0x163
==================
The cause seems to be an improper deletion of the sync.Map elements in Clear().
I'm thinking about making a small refactor that would change the Thunk and ThunkMany types to return (interface{}, error) (or ([]interface{}, []error) in the case of ThunkMany). I feel like that would be a little more idiomatic.
@tonyghita What do you think?
vendor/github.com/nicksrandall/dataloader/inMemoryCache.go:51: not enough arguments in call to c.Get
have (interface {})
want ("context".Context, interface {})
The context is not passed to the Get call made by the Delete method.
We have two modules: orders and sub-orders. When querying the order list, you don't need to worry about the n+1 problem, but fetching the sub-orders for the order list is problematic. Because an order can correspond to multiple sub-orders, the sub-order details have to be queried once for each order.
Right now dataloader cannot solve this one-to-many relationship, because the data would end up out of order.
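One common workaround for one-to-many is to make each result carry the slice of children for its key, so the batch function still returns exactly one result per key, in key order. A self-contained sketch with fake in-memory rows standing in for a database:

```go
package main

import "fmt"

// SubOrder is a hypothetical child row keyed by its parent order ID.
type SubOrder struct {
	OrderID string
	Item    string
}

// batchSubOrders fetches all sub-orders for a batch of order IDs in one pass
// (a real implementation would run one IN-clause query), then groups rows by
// order ID so each key receives its own slice, in key order.
func batchSubOrders(orderIDs []string) [][]SubOrder {
	rows := []SubOrder{ // fake query result
		{"o1", "a"}, {"o1", "b"}, {"o2", "c"},
	}
	byOrder := map[string][]SubOrder{}
	for _, r := range rows {
		byOrder[r.OrderID] = append(byOrder[r.OrderID], r)
	}
	out := make([][]SubOrder, len(orderIDs))
	for i, id := range orderIDs {
		out[i] = byOrder[id] // may be empty; position matches orderIDs
	}
	return out
}

func main() {
	for i, subs := range batchSubOrders([]string{"o1", "o2", "o3"}) {
		fmt.Println(i, len(subs))
	}
}
```

Because the loader only requires len(results) == len(keys) with matching positions, putting a slice (rather than a single row) in each result's data keeps the ordering invariant intact.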