Comments (13)
Here are benchmarks against the same code for different versions of graphql-go and graphql-js. I've repeated the ones from above today as well, to try and keep things consistent.
I ran each test 5 times; the full results can be found here, but I've included just the best run for each below. They don't vary enough between runs to worry about.
Things to note:
- Wow! Your sogko/0.4.18 branch has absolutely smashed it. It's significantly faster than the fastest graphql-js test I've seen, and usually with far fewer errors.
- graphql-js 0.4.18 is faster than graphql-js 0.4.3.
- graphql-go master is still slower than both graphql-js versions.
Overall your new branch is showing incredible performance; I totally wasn't expecting this. Amazing!
Versions
- go: go1.6 darwin/amd64
- node.js: v5.5.0
- express: 4.13.4
Specs
- MacBook Pro: 13-inch, Early 2011
- Processor: 2.3 GHz Intel i5 (quad-core)
- Memory: 8GB
Benchmarks
graphql-js 0.4.3
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3002/graphql?query={hello}"
Running 30s test @ http://localhost:3002/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 206.25ms 33.05ms 545.89ms 80.82%
Req/Sec 99.13 90.37 455.00 83.64%
34701 requests in 30.09s, 7.71MB read
Socket errors: connect 157, read 38, write 0, timeout 0
Requests/sec: 1153.33
Transfer/sec: 262.43KB
graphql-go master
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 213.99ms 216.00ms 2.17s 85.37%
Req/Sec 137.53 67.52 350.00 65.51%
35429 requests in 30.10s, 4.87MB read
Socket errors: connect 157, read 20, write 1, timeout 0
Requests/sec: 1177.02
Transfer/sec: 165.52KB
graphql-js 0.4.18
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3002/graphql?query={hello}"
Running 30s test @ http://localhost:3002/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 171.13ms 32.39ms 789.92ms 90.02%
Req/Sec 119.80 80.91 333.00 66.08%
41475 requests in 30.10s, 9.22MB read
Socket errors: connect 157, read 172, write 5, timeout 0
Requests/sec: 1377.75
Transfer/sec: 313.49KB
graphql-go sogko/0.4.18
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 45.52ms 42.43ms 508.00ms 70.84%
Req/Sec 613.51 302.63 1.36k 68.96%
164704 requests in 30.10s, 22.62MB read
Socket errors: connect 157, read 128, write 0, timeout 0
Requests/sec: 5472.34
Transfer/sec: 769.55KB
from graphql.
I did a benchmark without the HTTP overhead, using the go test tool, and this is what I get:
BenchmarkGoGraphQLMaster-4 10000 230846 ns/op 29209 B/op 543 allocs/op
BenchmarkPlaylyfeGraphQLMaster-4 50000 27647 ns/op 3269 B/op 61 allocs/op
Here's the code:
package graphql_test

import (
	"testing"

	"github.com/graphql-go/graphql"
	pgql "github.com/playlyfe/go-graphql"
)

var schema, _ = graphql.NewSchema(
	graphql.SchemaConfig{
		Query: graphql.NewObject(
			graphql.ObjectConfig{
				Name: "RootQueryType",
				Fields: graphql.Fields{
					"hello": &graphql.Field{
						Type: graphql.String,
						Resolve: func(p graphql.ResolveParams) (interface{}, error) {
							return "world", nil
						},
					},
				},
			}),
	},
)

func BenchmarkGoGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		graphql.Do(graphql.Params{
			Schema:        schema,
			RequestString: "{hello}",
		})
	}
}

var schema2 = `
type RootQueryType {
	hello: String
}
`

var resolvers = map[string]interface{}{
	"RootQueryType/hello": func(params *pgql.ResolveParams) (interface{}, error) {
		return "world", nil
	},
}

var executor, _ = pgql.NewExecutor(schema2, "RootQueryType", "", resolvers)

func BenchmarkPlaylyfeGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		context := map[string]interface{}{}
		variables := map[string]interface{}{}
		executor.Execute(context, "{hello}", variables, "")
	}
}
I have checked the performance too and found that the current implementation of the lexer produces a lot of garbage and slows everything down by orders of magnitude. For details and a possible fix, see here: #137
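A common source of that kind of garbage is building each token's text by copying or concatenating characters; slicing the original input instead allocates nothing, since the returned string shares the input's backing array. A minimal, hypothetical illustration of the idea (not the library's actual lexer):

```go
package main

import "fmt"

// scanName returns the identifier starting at pos by slicing the input
// rather than copying it, so no allocation happens per token.
func scanName(src string, pos int) (string, int) {
	start := pos
	for pos < len(src) {
		c := src[pos]
		if c == '_' || c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' {
			pos++
			continue
		}
		break
	}
	return src[start:pos], pos
}

func main() {
	name, next := scanName("{hello}", 1)
	fmt.Println(name, next) // hello 6
}
```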
There is a graphql lib based on libgraphqlparser, https://github.com/tallstreet/graphql, though it's probably not active anymore. One concern I have is using it in sandboxed cloud services like Google App Engine, which (used to) restrict the use of cgo. Please consider other drawbacks compared to a pure Go version.
From @bbuck: One concern I have is using it (libgraphqlparser) in sandboxed cloud services like Google App Engine, which (used to) restrict the use of cgo. Please consider other drawbacks compared to a pure Go version.
@bbuck That is an interesting insight; we have to keep this in mind (to use or not to use cgo) and figure out how to go about doing this.
Hi @mleonard87,
Thanks for taking the time to look into the performance of graphql-go, this is excellent! 😄👍
I'm glad that someone is taking up the challenge to figure out weak points in the library, it helps to steer the direction of the development.
The code for both graphql-go and express-graphql seems fair, at first glance 👍
Edit: It would probably help to state the version of graphql-js that was used for the benchmark. Currently graphql-go is equivalent to v0.4.3 of graphql-js (the latest is v0.4.18). Other information, such as the versions of Node.js, Go, and express, would be nice as well.
Regarding areas of improvement, I can offer some notes that I already have that would help with the effort of improving and optimising performance.
- The parser can be replaced with the much faster C++ lib libgraphqlparser. It might require some work to integrate the libgraphqlparser structs with the existing graphql-go structs, but it can be done.
- You are absolutely right that the visitor and validator are currently non-performant. This will partly be addressed in PR #117, hopefully (still WIP, I need to find more time to work on it; it's about 30% done). The validator will be able to run validation concurrently (vs. sequentially at the moment), and the visitor will be able to visit nodes in parallel.
Both improvements to the visitor and validator are already in that PR branch; perhaps you could try running the benchmarks on that branch to see if there are any improvements?
Again, we appreciate the work you put into this; we welcome your contribution very much!
Cheers
Yeah, absolutely, I can run these tests again on your branch later when I get home. I'll also clarify my express-graphql versions and benchmark against like-for-like. I believe this was against v0.4.18.
Also, has there been any discussion of caching the results of the validate/parse? I think that in a lot of applications the same query may be executed often. For example, in a Todo app the main page might fetch a list of all the Todos, and the graphql query itself would be the same each time even if the results are different. Have you seen this in any other graphql implementations?
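Since a query's parsed form depends only on its text (variables are supplied separately at execution time), such a cache can be keyed on the raw query string. A sketch of the idea using sync.Map; here parseQuery and document are stand-ins for the library's real parse/validate step, not its actual API, and note a real cache would need a size bound:

```go
package main

import (
	"fmt"
	"sync"
)

// document stands in for a parsed/validated AST; parseQuery stands in
// for the (expensive) parse+validate step. Both are assumptions here.
type document struct{ query string }

func parseQuery(q string) *document { return &document{query: q} }

var cache sync.Map // query string -> *document

// cachedParse returns a previously parsed document for identical query
// text, so repeated queries skip parsing entirely.
func cachedParse(q string) *document {
	if doc, ok := cache.Load(q); ok {
		return doc.(*document)
	}
	doc := parseQuery(q)
	actual, _ := cache.LoadOrStore(q, doc) // first writer wins under races
	return actual.(*document)
}

func main() {
	a := cachedParse("{hello}")
	b := cachedParse("{hello}")
	fmt.Println(a == b) // true: the second call hit the cache
}
```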
Hi @mleonard87
Thanks for running more benchmark tests for the different configurations, really appreciate the time you put into this 👍
Woah, those results seem really promising, I'm quite surprised myself lol.
Now this is making me wonder how this library fares against others on other platforms (graphql-ruby, sangria, etc.).
In the future, we could possibly have a separate repo within the graphql-go org for benchmark results and the code used for different platforms, probably similar to https://github.com/julienschmidt/go-http-routing-benchmark. Probably something like github.com/graphql-go/benchmarks.
/cc @chris-ramon
A benchmark repo is not a bad idea, though we'd probably need something more complex than my very trivial hello world test case.
I had wondered myself about other libraries. I might give them a go when I get some spare time.
If these times can be maintained once PR #117 is complete, then I think this can be a blazingly fast library.
@Matthias247 Nice tip, and I've got to say it's best to use bytes.Buffer in Go rather than strings, since you can pool and reuse them. We use them a lot, and even fast frameworks like https://github.com/valyala/fasthttp and https://github.com/labstack/echo use them to get the highest speed.
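For reference, the pooling pattern mentioned above usually looks like this in Go: a sync.Pool of bytes.Buffer values that are Reset before reuse. This is a generic sketch, not code from either framework:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses bytes.Buffer values across requests instead of
// allocating a fresh buffer (and its backing array) every time.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // clear any leftover contents from the previous user
	defer bufPool.Put(buf)

	buf.WriteString(`{"data":{"hello":"`)
	buf.WriteString(name)
	buf.WriteString(`"}}`)
	return buf.String()
}

func main() {
	fmt.Println(render("world")) // {"data":{"hello":"world"}}
}
```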
Anyway, I ran the benchmark on my machine against our implementation of graphql, and here it is:
graphql-go master
wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 134.97ms 163.47ms 1.85s 86.12%
Req/Sec 372.46 236.09 1.58k 70.99%
133607 requests in 30.05s, 18.35MB read
Requests/sec: 4445.99
Transfer/sec: 625.22KB
playlyfe/go-graphql master
wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 34.89ms 43.72ms 518.00ms 87.58%
Req/Sec 1.44k 0.90k 6.10k 81.35%
514095 requests in 30.05s, 70.60MB read
Requests/sec: 17108.13
Transfer/sec: 2.35MB
And by the way, shouldn't these be benchmark tests inside the library rather than standalone servers? Then we could also measure ops/sec and allocs/op.
And here's the code,
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	graphql "github.com/playlyfe/go-graphql"
)

func main() {
	schema := `
	type RootQueryType {
		hello: String
	}
	`
	resolvers := map[string]interface{}{}
	resolvers["RootQueryType/hello"] = func(params *graphql.ResolveParams) (interface{}, error) {
		return "world", nil
	}
	context := map[string]interface{}{}
	variables := map[string]interface{}{}
	executor, err := graphql.NewExecutor(schema, "RootQueryType", "", resolvers)
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/graphql", func(w http.ResponseWriter, r *http.Request) {
		result, err := executor.Execute(context, r.URL.Query()["query"][0], variables, "")
		if err != nil {
			panic(err)
		}
		json.NewEncoder(w).Encode(result)
	})
	fmt.Println("Benchmark app listening on port 3003!")
	http.ListenAndServe(":3003", nil)
}
Thanks to @pyros2097, I've added graph-gophers/graphql-go to the benchmark list. See the repo golang-graphql-benchmark.
Result:
BenchmarkGoGraphQLMaster-4 20000 84131 ns/op 27254 B/op 489 allocs/op
BenchmarkPlaylyfeGraphQLMaster-4 200000 7531 ns/op 2919 B/op 59 allocs/op
BenchmarkGophersGraphQLMaster-4 200000 5041 ns/op 3909 B/op 39 allocs/op
code:
package graphql_test

import (
	"context"
	"testing"

	ggql "github.com/graph-gophers/graphql-go"
	"github.com/graphql-go/graphql"
	pgql "github.com/playlyfe/go-graphql"
)

var schema, _ = graphql.NewSchema(
	graphql.SchemaConfig{
		Query: graphql.NewObject(
			graphql.ObjectConfig{
				Name: "RootQueryType",
				Fields: graphql.Fields{
					"hello": &graphql.Field{
						Type: graphql.String,
						Resolve: func(p graphql.ResolveParams) (interface{}, error) {
							return "world", nil
						},
					},
				},
			}),
	},
)

func BenchmarkGoGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		graphql.Do(graphql.Params{
			Schema:        schema,
			RequestString: "{hello}",
		})
	}
}

var schema2 = `
type RootQueryType {
	hello: String
}
`

var resolvers = map[string]interface{}{
	"RootQueryType/hello": func(params *pgql.ResolveParams) (interface{}, error) {
		return "world", nil
	},
}

var executor, _ = pgql.NewExecutor(schema2, "RootQueryType", "", resolvers)

func BenchmarkPlaylyfeGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		context := map[string]interface{}{}
		variables := map[string]interface{}{}
		executor.Execute(context, "{hello}", variables, "")
	}
}

type helloWorldResolver1 struct{}

func (r *helloWorldResolver1) Hello() string {
	return "world"
}

var schema3 = ggql.MustParseSchema(`
	schema {
		query: Query
	}
	type Query {
		hello: String!
	}
`, &helloWorldResolver1{})

func BenchmarkGophersGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ctx := context.Background()
		variables := map[string]interface{}{}
		schema3.Exec(ctx, "{hello}", "", variables)
	}
}
Hi, so we are using this library at work and were looking at a certain optimisation.
In graphql-go, the default limit on the maximum number of resolvers per request allowed to run in parallel was 10, which has now been increased and is passed as an option during initialisation of the schema. Performance improvements and other impacts have yet to be determined by load testing.
We've increased the maximum number of resolvers per request to 50 for now. Quick question though: was there a specific reason the graphql-go limit was set to 10? Or was it just a starting value that hasn't been experimented with?
Thank you again for creating an amazing library for people to work with.
> Hi, so we are using this library at work and were looking at a certain optimisation.
> In graphql-go, the default limit on the maximum number of resolvers per request allowed to run in parallel was 10, which has now been increased and is passed as an option during initialisation of the schema. Performance improvements and other impacts have yet to be determined by load testing.
> We've increased the maximum number of resolvers per request to 50 for now. Quick question though: was there a specific reason the graphql-go limit was set to 10? Or was it just a starting value that hasn't been experimented with? Thank you again for creating an amazing library for people to work with.
Hi @salman-bhai, we don't actually set a maximum number of resolvers per request to run in parallel within graphql-go/graphql; we don't have a limit at all. Perhaps you are referring to a limitation of a different library, graph-gophers/graphql-go?
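For context, a per-request resolver cap like the one described above is typically implemented as a counting semaphore. Here is a generic sketch using a buffered channel; this is an illustration of the pattern, not the actual code of either library:

```go
package main

import (
	"fmt"
	"sync"
)

// runResolvers executes fns with at most limit goroutines in flight,
// the usual shape of a per-request resolver concurrency cap.
func runResolvers(limit int, fns []func() string) []string {
	sem := make(chan struct{}, limit) // counting semaphore
	out := make([]string, len(fns))
	var wg sync.WaitGroup
	for i, fn := range fns {
		wg.Add(1)
		go func(i int, fn func() string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			out[i] = fn()
		}(i, fn)
	}
	wg.Wait()
	return out
}

func main() {
	fns := []func() string{
		func() string { return "hello" },
		func() string { return "world" },
	}
	fmt.Println(runResolvers(10, fns)) // [hello world]
}
```

Raising the cap trades memory and scheduler pressure for throughput, which is why load testing (as you're planning) is the right way to pick the number.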