
graphql-core's People

Contributors

alecrosenbaum, andrew-humu, astraluma, bennyweise, bkad, changeling, charmasaur, chenrui333, cito, conao3, coretl, corrosivekid, dfee, evanmays, fedirz, fugal-dy, hoefling, jkimbo, kristjanvalur, ktosiek, mawa, mlorenzana, mrtc0, mvanlonden, nawatts, patrys, rafalp, syrusakbary, wuyuanyi135, yezz123


graphql-core's Issues

Premature removal of introspection_query

In GraphQL-core 3.0.0, introspection_query has already been removed, although 3.0.0 claims to be compatible with GraphQL.js 14. It should only be removed in a release that is compatible with GraphQL.js 15.

Obtaining field directives at execution time

Given the following SDL example:

directive @example(
  arg: String
) on FIELD_DEFINITION

type Query {
  field: String @example(arg: "some value")
}

At execution time (I'm assuming from within a resolver or middleware), when resolving field, what would be the proper way to know that the example directive is applied to it, and to get "some value"?

I've checked the docs and found graphql.get_directive_values(), but I can't figure out how to use it.
GraphQLResolveInfo.field_nodes is a list with only one field node, which has an empty directives property.

I know providing a framework to define directives is not a goal of this lib, but I just need a bare bones way to access this data.

Edit:
After checking the internals, I understand it better: GraphQLResolveInfo.field_nodes gives access to the selection nodes, and these only carry client query directives.

To access the field definition within the server schema, I have to do the following:

field_def = info.parent_type.fields[info.field_name]
directive = info.schema.get_directive("example")
# get_directive_values() reads directives from an AST node,
# so pass the field definition's ast_node:
graphql.get_directive_values(directive, field_def.ast_node)  # {"arg": "some value"}

Let me know if I'm doing something wrong! (and feel free to close this issue. I feel like having some doc about it might make sense.)

Provide graphql-tools functionality in Python

Creating executable schemas from SDL with GraphQL.js/graphql-core-next is possible, but grafting resolvers, custom scalars and enums manually into the schema after creating it with build_schema is cumbersome (see e.g. #20) and has a smell of monkey-patching.

The makeExecutableSchema function of graphql-tools provides a better solution.

We should consider porting this functionality to Python as well, either as a separate package or as a subpackage of graphql-core-next.

Consider package name change

This may be controversial, but I'd like to propose changing the name of this package from graphql to something like graphql_next.

I'm working on a project that both exposes a GraphQL API and does complex GraphQL introspection. Exposing an API is most easily done with Graphene, particularly given packages like graphene-django, but it depends on graphql-core, which as we know is fairly limited. I think it's reasonable to allow the packages to be installed alongside each other.

Why

  • Graphene operates at a higher level of abstraction and has a developed package ecosystem, which means there is a need to have both it and graphql-core installed alongside each other.
  • This package is not API compatible with graphql-core.
  • PEP 423 suggests that top-level namespacing is about ownership. In this case "ownership" is arguably with https://github.com/graphql-python/, which would suggest that installing under graphql_python.next or graphql.next may be a good approach.
  • PEP 423 suggests the following (emphasis mine), with a benefit being that packages do not collide. "Distribute only one package (or only one module) per project, and use package (or module) name as project name."
  • graphql-core is no longer being actively developed. While this may change in the future, it means the maintainers are unlikely to change the name of that package.
  • graphql-core came first, and it is recommended by PEP 423 that existing package names are not chosen by new packages.
  • graphql-core-next mostly refers to itself by names along those lines, not by the name "graphql". This could be a chance to add consistency, and at the very least would be less confusing for newcomers than finding the package is in fact called "graphql".

How

PEP 423 suggests a reasonable series of steps that don't look like they would require a huge amount of work.

https://www.python.org/dev/peps/pep-0423/#how-to-rename-a-project

Notes

I've referenced PEP 423 here a lot. It should be noted that this PEP has not been accepted, although it has received discussion on the Python-Dev mailing list, doesn't appear to have many people arguing against it (other than for broken links and typos), and has also been featured in conference talks. Above all though, the document reads to me as a summary of how the Python ecosystem already works for the most part, and a distillation of already accepted practices and the reasons for them, rather than a proposed change. For this reason I believe it's a good starting point for discussion.

I have also suggested several alternative names for the package here. I do not feel strongly about any of them. I think graphql_core_next, graphql_next, graphql.core, or graphql2, would all be acceptable names, and I am very happy to leave the decision between these or other new options entirely to the maintainers.

Exception instance returned from resolver is raised within executor instead of being passed down for further resolution.

Today, when experimenting with a 3rd-party library for data validation (https://github.com/samuelcolvin/pydantic), I noticed that when I take the map of validation errors from it (a dict of lists of ValueError and TypeError subclass instances), I need an extra step to convert those errors to something else, because the query executor includes a check for isinstance(result, Exception). This check makes it raise the returned exception instance, effectively short-circuiting further resolution:

https://github.com/graphql-python/graphql-core-next/blob/master/src/graphql/execution/execute.py#L731

The fix for this issue was simple enough to come up with: write a util that converts those errors to a dict so they can be included under my result's validation_errors key, but such boilerplate feels unnecessary:

try:
    ...  # run pydantic validation here
except (PydanticTypeError, PydanticValueError) as error:
    return {"validation_errors": flatten_validation_error(error)}

Is this implementation a result of something in the spec, or a mechanism used to keep other features' code (e.g. error propagation) simple? I think we should consider supporting this use case. A considerable number of libraries use exceptions for messaging (e.g. Django with its ValidationError and a bunch of core.exceptions.*).

The root_value of ExecutionContext.execute_operation should not be None

Usually I want to use a custom method bound to a GraphQLObjectType, such as:

# use strawberry for example
import strawberry


@strawberry.type
class User:
    name: str
    age: int


@strawberry.type
class Query:
    @strawberry.field
    def user(self, info) -> User:
        return self.get_user()

    def get_user(self):
        return User(name="Patrick", age=100)


schema = strawberry.Schema(query=Query)

Unfortunately, self will always be None, because the root_value in ExecutionContext.execute_operation is set to None when it is the root Query node. I think modifying it as below would be fine:

def execute_operation(
        self, operation: OperationDefinitionNode, root_value: Any
    ) -> Optional[AwaitableOrValue[Any]]:
        """Execute an operation.

        Implements the "Evaluating operations" section of the spec.
        """
        type_ = get_operation_root_type(self.schema, operation)
        if root_value is None:
            root_value = type_

Then we can use the custom method of the GraphQLObjectType. I don't think this would cause any problems.

Provide an obvious way to register custom scalar type implementations

I was following the "create schema from SDL" approach in https://cito.github.io/blog/shakespeare-with-graphql/ and wanted to use custom scalar types. It looks like in GraphQL.js a custom scalar type implementation can be set by simply assigning it in the type map (https://stackoverflow.com/questions/47824603/graphql-custom-scalar-definition-without-graphql-tools), so the equivalent here would be something like this:

schema_src = """
scalar DateTime
...
"""
schema = build_schema(schema_src)
schema.type_map["DateTime"] = myscalars.DateTime

This doesn't work though as schema.type_map is not consulted when serializing output types or parsing arguments. The following workaround using extend_schema seems to produce the desired effect of assigning custom scalar type implementation to a scalar name declared in schema and having it used when serializing and parsing:

import typing
import graphql.language
from graphql import GraphQLScalarType
from graphql import GraphQLSchema
from graphql.utilities import extend_schema

def register_scalar_types(schema: GraphQLSchema, types: typing.List[GraphQLScalarType]):
    for scalar_type in types:
        schema.type_map[scalar_type.name] = scalar_type

    # Using a name that already exists in the schema is an error,
    # so we need to make something up.
    dummy_scalar_name = "_extension_dummy"
    extended_schema = extend_schema(
        schema, graphql.language.parse("scalar %s" % dummy_scalar_name)
    )
    # this scalar we extended the schema with is not used though
    del extended_schema.type_map[dummy_scalar_name]
    return extended_schema

and then use it like this:

schema = register_scalar_types(schema, [myscalars.DateTime])

Surely this is a hack and there has to be a better way?

Trailing comments in query causes "string index out of range"

I'm using GraphiQL for testing my API, where it is fairly common practice to comment out a query and write a new one.

{info}

Will run fine.

{info}
#

Will fail with "string index out of range" on line 268 of lexer.py (char=body[position])

I could be wrong - but shouldn't that be a >= rather than >?
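The off-by-one the report points to can be sketched in isolation (a simplified model, not the actual lexer code):

```python
# A query ending in a bare comment marker: the comment scan advances the
# position past the last character of the body.
body = "{info}\n#"
body_length = len(body)

def char_at(position):
    # Guarding with `position < body_length` (i.e. treating
    # position >= body_length as end-of-input) avoids the IndexError
    # that a bare `body[position]` raises once the scan runs past
    # the final character.
    return body[position] if position < body_length else None

print(char_at(0))                # {
print(char_at(body_length - 1))  # #
print(char_at(body_length))      # None -- past the end, no crash
```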

Roadmap/Status update?

What's the status of graphql-core-next vis-a-vis the graphene tools? Is there a Roadmap or other document tracking compatibility with, say, graphene-django?

Or are they already compatible and I'm missing that in the docs of those tools or graphql-core-next?

It would be great if this was addressed prominently in the README. Assuming it isn't already and I'm just flat out missing it.

directives as tuple?

https://github.com/graphql-python/graphql-core-next/blob/b0fa058bacf3913d0af9a75289e241f343c3842a/graphql/type/directives.py#L169

Hi, I'm new to GraphQL and have a quick question. I want to add a custom directive to the default directives, including skip and include.

schema = GraphQLSchema(query, directives=specified_directives + [custom_directives])

However, the following TypeError occurs:
can only concatenate tuple (not "list") to tuple

I noticed that graphql-core returns a list of these predefined directives, but graphql-core-next returns a tuple. Is there a reason for this?
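The TypeError itself is plain Python behavior and is easy to work around by unpacking both sequences instead of concatenating them (the strings below are stand-ins for the real directive objects):

```python
# Stand-ins: in graphql-core-next, specified_directives is a tuple,
# and user code often supplies custom directives in a list.
specified_directives = ("skip", "include", "deprecated")
custom_directives = ["myDirective"]

try:
    specified_directives + custom_directives
except TypeError as exc:
    print(exc)  # can only concatenate tuple (not "list") to tuple

# Unpacking works regardless of whether either side is a tuple or a list:
directives = (*specified_directives, *custom_directives)
print(directives)  # ('skip', 'include', 'deprecated', 'myDirective')
```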

Question: how to perform an operation before the futures will be resolved

Hi,

awesome project, going Python 3.6+ and full async is a really big improvement!

For context: I'm creating a GraphQL binding to Vaex, a Python DataFrame library for large datasets, mostly exposing the aggregations and groupby/binby operations. The aggregation operations can be done async (they will give a Promise, that I can wrap with a Future for compatibility with this library). All operations will then be performed in one pass over the data, and thus all promises/futures will be resolved after that operation. This 'one pass over the data' is required for proper performance for larger than memory datasets.

I thus need to call vaex to compute the aggregation before your library tries to resolve the futures, but after all GraphQL operations are executed. In the previous version of this library, I could do this by calling my computation function in https://github.com/graphql-python/graphql-core/blob/bbbe880673e2574bc418b639f43968f96364873b/graphql/execution/executors/asyncio.py#L55 (by inheriting and overriding that method).

However, I cannot find an easy way to do this with the current library. The only way I could do it was to pass my function as the context and call it just before this line: https://github.com/graphql-python/graphql-core-next/blob/0fb5b81fc0a1909bfe63df31657bc4de631676cf/src/graphql/execution/execute.py#L356

Is there a proper way of achieving this?

cheers,

Maarten

Extend schema crashes on unbound methods

Hello,

Context: Windows 10, ariadne==0.5, graphql-core-next==1.1.0

Recently upgraded to the latest version of graphql-core-next, and I'm getting the following exception when using extend_schema:

  File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\utilities\extend_schema.py", line 335, in extend_schema
    type_map[existing_type_name] = extend_named_type(existing_type)
  File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\utilities\extend_schema.py", line 150, in extend_named_type
    return extend_scalar_type(type_)
  File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\utilities\extend_schema.py", line 225, in extend_scalar_type
    kwargs = type_.to_kwargs()
  File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\type\definition.py", line 397, in to_kwargs
    if getattr(self.parse_literal, "__func__")
AttributeError: 'function' object has no attribute '__func__'

This new version tries to get the __func__ attribute of the literal parser for a custom scalar, which assumes that the parser is a bound method. There are many libraries, incl. ariadne, that attach such parsers (and resolvers) after the scalars have been initialised. Therefore, these newly attached functions are not bound methods. I see no reason for this constraint, so could you please fix it?

Many thanks.

Exceptions in resolver are caught (no traceback)

Version:
GraphQL-core-next 1.0.1
Python 3.7.1

Current Behaviour:

Exceptions in resolvers are caught by default and the traceback of the original_error doesn't help debugging.

Traceback output of log.exception('', exc_info=result.errors[0].original_error)

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/graphql/execution/execute.py", line 664, in complete_value_catching_error
    return_type, field_nodes, info, path, result
  File "/usr/local/lib/python3.6/site-packages/graphql/execution/execute.py", line 731, in complete_value
    raise result
graphql.error.graphql_error.GraphQLError: My exception

Expected Behaviour:

  1. There is a flag raise_exception that controls whether exceptions are caught or raised out of the library.

  2. The traceback of the original_error is meaningful: it leads to the source line where the exception occurred.

Empty string as input variable

Hello,
Not sure if it's an issue or a question, but here it is:
I'm trying to make a field with a non-nullable String input argument, and graphql-core-next is converting the empty string to "None", which is not what I expected.
Reading the spec, it seems that passing "" as a string argument is allowed and it should be considered non-null... did I miss something?

Here is reproducible code showing the issue:

import asyncio
from graphql import graphql, parse, build_ast_schema


async def resolve_hello(obj, info, query):
    print(query)
    return f'world ==>{query}<=='

schema = build_ast_schema(parse("""
type Query {
    hello(query: String!): String
}
"""))

schema.get_type("Query").fields["hello"].resolve = resolve_hello

async def main():
    query = '{ hello(query: "") }'
    print('Fetching the result...')
    result = await graphql(schema, query)
    print(result)


loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()

Thanks for the help 👍

Support "out_name" in input types

GraphQL-Core supports an out_name for input types which is used by Graphene for passing parameters with transformed names to Python (because Python likes snake_case instead of camelCase).

GraphQL-Core-Next should support a similar functionality that can be used by Graphene.

Reconsider hash method of AST nodes

The nodes in "language.ast" should probably not be hashable, because they are mutable and currently equal values have different hashes (see also graphql-python/graphql-core/issues/252).

Initial Update

The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.

Some type import paths deprecated in 3.7

when running pytest with warnings

~~~/lib/python3.7/site-packages/promise/promise_list.py:2
  ~~~/lib/python3.7/site-packages/promise/promise_list.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    from collections import Iterable

~~~/lib/python3.7/site-packages/graphql/type/directives.py:55
  ~~~/lib/python3.7/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    args, collections.Mapping

~~~/lib/python3.7/site-packages/graphql/type/typemap.py:1
  ~~~/lib/python3.7/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    from collections import OrderedDict, Sequence, defaultdict

~~~/lib/python3.7/site-packages/graphql_server/__init__.py:2
  ~~~/lib/python3.7/site-packages/graphql_server/__init__.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    from collections import namedtuple, MutableMapping

-- Docs: https://docs.pytest.org/en/latest/warnings.html

MiddlewareManager only decorates the first field encountered

get_middleware_resolvers is implemented as a generator:

https://github.com/graphql-python/graphql-core-next/blob/f6b078bddae4dd8a48ebf878a6fd5fbbac52bd1b/graphql/execution/middleware.py#L56-L58

The generator object is then assigned to self._middleware_resolvers:

https://github.com/graphql-python/graphql-core-next/blob/f6b078bddae4dd8a48ebf878a6fd5fbbac52bd1b/graphql/execution/middleware.py#L30-L32

As the generator is not consumed and unrolled into a list at this point, it is exhausted during the reduce() call of the first field construction:

https://github.com/graphql-python/graphql-core-next/blob/f6b078bddae4dd8a48ebf878a6fd5fbbac52bd1b/graphql/execution/middleware.py#L46-L50

As the generator is now exhausted, all other calls to reduce() will immediately receive StopIteration so no other field is wrapped.
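The mechanism is easy to reproduce in isolation with plain Python (hypothetical middleware names; not the actual library code):

```python
from functools import reduce

def make_middleware_resolvers():
    # A generator, like get_middleware_resolvers(): yields one middleware
    # that adds 1 to whatever the next resolver returns.
    yield lambda next_, value: next_(value) + 1

resolvers = make_middleware_resolvers()  # generator object, not a list

def wrap(field_resolver):
    # Mirrors the reduce() call performed per field.
    return reduce(
        lambda next_, mw: (lambda value: mw(next_, value)),
        resolvers,
        field_resolver,
    )

first = wrap(lambda value: value)   # consumes the generator
second = wrap(lambda value: value)  # generator already exhausted

print(first(0))   # 1 -- middleware applied
print(second(0))  # 0 -- middleware silently skipped
# Materializing the generator, e.g. tuple(make_middleware_resolvers()),
# before storing it would let every field be wrapped.
```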

ExecutionResult missing extensions field

GraphQL spec section 7.1 describes a 3rd response field called extensions that is intended to be used for custom data in addition to payload data and error responses. This is often used for metadata related to the query response such as performance tracing. Apollo GraphQL implements this on their server via an extensions middleware. We should probably follow a similar pattern here but we will need to support passing the extensions data to the client in the core in order to support middleware like that.

https://graphql.github.io/graphql-spec/June2018/#sec-Response-Format

The response map may also contain an entry with key extensions. This entry, if set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional restrictions on its contents.
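A sketch of the response shape the spec describes, with hypothetical extension values (performance tracing is the example use case mentioned above):

```python
import json

# Hypothetical response map carrying the reserved "extensions" entry
# alongside "data", as described in spec section 7.1:
response = {
    "data": {"hero": {"name": "R2-D2"}},
    "extensions": {"tracing": {"durationMs": 12}},
}
print(json.dumps(response, indent=2))
```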

GraphQLError is unhashable

It seems that the logging library in Python assumes that exceptions are hashable in order to be logged. It'd be great if we could treat GraphQL errors the same way as other built-in exceptions.

See also, a similar issue in the schematics project:

schematics/schematics#452
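The underlying Python behavior: a class that defines __eq__ without also defining __hash__ has its __hash__ set to None, making instances unhashable. A minimal reproduction with a hypothetical error class:

```python
# Defining __eq__ without __hash__ makes instances unhashable, which
# breaks any code (such as parts of the logging machinery) that puts
# exceptions into sets or uses them as dict keys.
class CustomError(Exception):
    def __eq__(self, other):
        return isinstance(other, CustomError) and str(self) == str(other)

err = CustomError("boom")
try:
    {err}  # putting it in a set requires hash()
    hashable = True
except TypeError:
    hashable = False
print(hashable)  # False
```

Defining a matching __hash__ (e.g. based on the same fields as __eq__) restores hashability.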

SourceLocation should be serialized as object

SourceLocation, as a NamedTuple, is serialized by json.dumps() to an array instead of an object. GraphQL.js keeps SourceLocation as an object, so it just works for them.

Reproduction:

from graphql import SourceLocation, GraphQLError, format_error, Source
import json

print(json.dumps(format_error(GraphQLError(message="test", source=Source('{ test }'), positions=[2]))))

Expected result: {"message": "test", "locations": [{"line": 1, "column": 3}], "path": null}
Actual result: {"message": "test", "locations": [[1, 3]], "path": null}

Should SourceLocation be changed to something that serializes correctly? Or would it make more sense to convert it to dict in format_errors?
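The tuple-vs-object behavior is inherent to json.dumps, and NamedTuple's _asdict() already yields the object form; a minimal sketch with a stand-in class (not the library's own SourceLocation):

```python
import json
from typing import NamedTuple

# Stand-in for graphql's SourceLocation, which is a NamedTuple:
class SourceLocation(NamedTuple):
    line: int
    column: int

loc = SourceLocation(line=1, column=3)
print(json.dumps([loc]))            # [[1, 3]] -- tuples serialize as arrays
print(json.dumps([loc._asdict()]))  # [{"line": 1, "column": 3}] -- object form
```

So converting locations via _asdict() inside format_error would produce the GraphQL.js-compatible output without changing the type itself.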

Performance of isawaitable

I've run a very simplistic benchmark, just returning a long list of single-field objects.
It seems graphql-core-next is 2.5x slower than graphql-core: https://gist.github.com/ktosiek/849e8c7de8852c2df1df5af8ac193287

Looking at flamegraphs, I see isawaitable is used a lot, and it's a pretty slow function. Would it be possible to pass raw results around more? It seems resolve_field_value_or_error and complete_value_catching_error are the main offenders here.
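One commonly suggested mitigation (a sketch, not the library's implementation) is to short-circuit plain data values with a cheap exact-type check before falling back to the slower inspect.isawaitable call:

```python
from inspect import isawaitable

def is_awaitable_fast(value):
    # Plain data types can never be awaitable, so an exact-type check
    # short-circuits before the comparatively slow inspect.isawaitable().
    if value is None or type(value) in (str, int, float, bool, dict, list):
        return False
    return isawaitable(value)

async def coro():
    return 42

print(is_awaitable_fast("hello"))  # False, via the fast path
c = coro()
print(is_awaitable_fast(c))        # True
c.close()                          # avoid a "never awaited" warning
```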

Got invalid value wrong interpretation

Code in graphql-core
https://github.com/graphql-python/graphql-core/blob/master/graphql/execution/values.py#L71-L76

Code in graphql-core-next:
https://github.com/graphql-python/graphql-core-next/blob/master/graphql/execution/values.py#L96-L99

GraphQL-core-next leaks the inner Python representation of an object.

In general, after reviewing a lot of tests and code, there is a lot of usage of repr when it should be used just for debugging, not for uniform error messages.

Allow passing rules directly into the graphql function

I wanted to add some custom validation rules in addition to the default graphql.specified_rules. Unless I'm mistaken, the only way to do so is to manually call validate with the new rules, which means I basically have to duplicate all the code inside async def graphql and change the validation step in order to pass in my custom rules. Is it possible instead to just allow clients to pass the rules they want to use into the top-level graphql call?

Scalar variables failing validation

When I validate these queries against this schema (via this code), I'm getting this error:

graphql.error.graphql_error.GraphQLError: Unknown type 'Int'.

/home/astraluma/code/gobuildit/gqlmod/testmod/queries.gql:15:30
14 | 
15 | query HeroComparison($first: Int = 3) {
   |                              ^
16 |   leftComparison: hero(episode: EMPIRE) {

The results of pip freeze:

astor==0.8.0
-e [email protected]:go-build-it/gqlmod.git@6fe690f7ffb0634f4522ded57be8f40b21205c52#egg=gqlmod
graphql-core==3.0.0a2
import-x==0.1.0
pkg-resources==0.0.0

I'm pretty new to GraphQL, but as I understand it, Int is a built-in scalar that should always be available?

Update mypy

Currently we use mypy 0.720 because it is the last version that supports the "old" semantic analyzer.

The new semantic analyzer creates a few errors that need to be resolved. Some of these errors are actually mypy issues, like python/mypy#7203. Maybe we should wait until these mypy issues are resolved.

After upgrading to a newer mypy version, the setting "new_semantic_analyzer = False" in mypy.ini should be removed.

There doesn't seem to be a way to override is_type_of on objects

In my server, I have the following lines: https://github.com/ezyang/ghstack/blob/973ef7b25a71afa8f813cd8107f227938b3413f1/ghstack/github_fake.py#L288

GITHUB_SCHEMA.get_type('Repository').is_type_of = lambda obj, info: isinstance(obj, Repository)  # type: ignore
GITHUB_SCHEMA.get_type('PullRequest').is_type_of = lambda obj, info: isinstance(obj, PullRequest)  # type: ignore

I don't know how to put these on the Repository/PullRequest classes, in the same way resolvers can be put on the appropriate object class and then automatically called. The obvious things (e.g., adding an is_type_of method) do not work.

Dataloader in multi-threaded environments

Hi,

I've been thinking a bit about how we could implement the Dataloader pattern in v3 while still running in multi-threaded mode. Since v3 does not support Syrus's Promise library, we need to come up with a story for batching in async mode, as well as in multi-threaded environments. There are many libraries that do not support asyncio and there are many cases where it does not make sense to go fully async.

As far as I understand, the only way to batch resolver calls from a single frame of execution would be to use loop.call_soon. But since asyncio is not threadsafe, that means we would need to run a separate event loop in each worker thread. We would need to wrap the graphql call with something like this:

def run_batched_query(...):
    loop = asyncio.new_event_loop()
    execution_future = graphql(...)
    loop.run_until_complete(execution_future)
    return execution_future.result()

Is that completely crazy? If yes, do you see a less hacky way? I'm not very familiar with asyncio so I would love to get feedback.
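A runnable sketch of the per-thread-loop idea (hypothetical helper names; fake_query stands in for the real graphql call):

```python
import asyncio
import threading

def run_in_worker_thread(coro_factory):
    """Run an async callable to completion on a private event loop
    inside a worker thread -- a sketch of the one-loop-per-thread idea,
    since asyncio event loops are not shareable across threads."""
    result = {}

    def worker():
        loop = asyncio.new_event_loop()
        try:
            result["value"] = loop.run_until_complete(coro_factory())
        finally:
            loop.close()

    thread = threading.Thread(target=worker)
    thread.start()
    thread.join()
    return result["value"]

async def fake_query():
    # stand-in for `await graphql(schema, query)`
    await asyncio.sleep(0)
    return {"data": {"hello": "world"}}

print(run_in_worker_thread(fake_query))
```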

Cheers

No line numbers in errors

I see this issue here complaining of lack of traceback, apparently fixed in 1.0.2
#23

My experience on 1.0.5 is that I only get a text representation of the error, and no line number of where the problem lies, which makes debugging very hard.

name 'error' is not defined

GraphQL request (32:2)
31:
32: {info}

I only get the place in the query that it's failed - which doesn't help enough.

race condition on subscription

https://github.com/graphql-python/graphql-core-next/blob/4d3b7824fc16554fea48f29b8a2a617e760056d6/graphql/subscription/map_async_iterator.py#L27-L41

This race condition becomes especially apparent when the subscription is being used in an "async-iterable" sort of mode:

received = []
async def _receive(subscription):
    async for result in subscription:
        received.append(result)
task = loop.create_task(_receive(subscription))

# ...
await subscription.aclose()

# e.g. ... await asyncio.sleep(.1)
assert not task.done()

# might I suggest using something like a SENTINEL which isn't passed through...
# here, I'm manually setting it, but you could set it, listen for it on EventEmitter or
# EventEmitterAsyncIterator
SENTINEL = object()
subscription.iterator.queue.put_nowait(SENTINEL)
await asyncio.sleep(.1)

assert task.done()

Middleware defeats introspection without await check

Fantastic project BTW!

I'm not sure this is a bug, but it caught me out. Maybe it should just be documented.

The following middleware:

class MyMiddleware:
    async def resolve(self, next, root, info, *args, **kwargs):
        return await next(root, info, *args, **kwargs)

will break the introspection query, as the value returned by the 'next' handler is not awaitable.

This fixes it:

from inspect import isawaitable
class MyMiddleware:
    async def resolve(self, next, root, info, *args, **kwargs):
        response = next(root, info, *args, **kwargs)
        return await response if isawaitable(response) else response

Also, would it be possible to release 1.0.1 on PyPI, as it has all the cool middleware and context stuff :)

"extend type" not supported in input to build_schema

Hello!

We have started migrating our GraphQL lib to GraphQL-core-next and things are looking great so far. build_schema util allowed us to remove our own implementation that did the same thing. However, we've found that if you try to extend existing type using extend, it will silently skip it and do nothing.

Looking around the code, I see that the logic for extending the schema is quite complex, so I can understand why build_schema doesn't support it; but perhaps extend_schema should support a differences-only mode, or part of its logic could be extracted into a utility function returning the diff between a schema and a Source?

Apollo-Server for reference of what we are trying to do.

Thank you for all the amazing work!

Custom scalars using numpy arrays fail is_nullish() test

I have a custom scalar type for a numpy array:

def serialize_ndarray(value):
    return dict(
        numberType=value.dtype.name.upper(),
        base64=base64.b64encode(value).decode()
    )

ndarray_type = GraphQLScalarType("NDArray", serialize=serialize_ndarray)

is_nullish() does an inequality test on values to work out if they are NaN:

def is_nullish(value: Any) -> bool:
    """Return true if a value is null, undefined, or NaN."""
    return value is None or value is INVALID or value != value

Unfortunately, numpy arrays break this because they override __eq__ to return a pointwise equality array rather than a boolean.

The comment for is_nullish() suggests that value != value is checking for NaN. If this is the case we could replace it with:

def is_nan(value):
    """Return true if a value is NaN"""
    try:
        return math.isnan(value)
    except TypeError:
        return False

def is_nullish(value: Any) -> bool:
    """Return true if a value is null, undefined, or NaN."""
    return value is None or value is INVALID or is_nan(value)

Would this be acceptable? I can provide a PR if it is.
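The proposed is_nan helper can be exercised without numpy by using a stand-in class that, like ndarray, overrides __eq__ to return something non-boolean:

```python
import math

def is_nan(value):
    """Return true if a value is NaN."""
    try:
        return math.isnan(value)
    except TypeError:
        return False

class ArrayLike:
    # Mimics numpy's elementwise __eq__, which returns a non-boolean
    # result and therefore breaks the `value != value` NaN trick.
    def __eq__(self, other):
        return [True, False]

print(is_nan(float("nan")))  # True
print(is_nan(ArrayLike()))   # False -- math.isnan raises TypeError, caught
print(is_nan(None))          # False
```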

ETA for 3.0.0 release

Thank you for the hard work porting the reference GraphQL implementation to python3. We are hoping to start using this library once the stable 3.0.0 version is released. Do you have a rough idea of when this will be?

Installation instructions on README for pipenv fails

The installation instructions in the README for pipenv fail with the error Could not find a version that matches graphql-core>=3, because pipenv ignores pre-release versions.

ERROR: Could not find a version that matches graphql-core>=3
Tried: 0.4.9, 0.4.11, 0.4.12, 0.4.12.1, 0.4.13, 0.4.14, 0.4.15, 0.4.16, 0.4.17, 0.4.18, 0.5, 0.5.1, 0.5.2, 0.5.3, 1.0, 1.0.1, 1.1, 2.0, 2.0, 2.1, 2.1, 2.2, 2.2, 2.2.1, 2.2.1
Skipped pre-versions: 0.1a0, 0.1a1, 0.1a2, 0.1a3, 0.1a4, 0.4.7b0, 0.4.7b1, 0.4.7b2, 0.5b1, 0.5b2, 0.5b3, 1.0.dev20160814231515, 1.0.dev20160822075425, 1.0.dev20160823054952, 1.0.dev20160909030348, 1.0.dev20160909040033, 1.0.dev20160920065529, 1.2.dev20170724044604, 2.0.dev20170801041408, 2.0.dev20170801041408, 2.0.dev20170801051721, 2.0.dev20170801051721, 2.0.dev20171009101843, 2.0.dev20171009101843, 2.1rc0, 2.1rc0, 2.1rc1, 2.1rc1, 2.1rc2, 2.1rc2, 2.1rc3, 2.1rc3, 3.0.0a0, 3.0.0a0, 3.0.0a1, 3.0.0a1, 3.0.0a2, 3.0.0a2

This could be fixed by specifying an upper bound in the pipenv installation command, like so:
pipenv install "graphql-core>=3,<4"

Consider changing the packaging tool

Instead of pipenv, we may want to use a different packaging tool like flit or poetry, and replace setup.py and Pipfile with a pyproject.toml file. The main reason is that while pipenv is a nice tool, it is actually intended for packaging apps, not libraries like graphql-core-next.

Would like to hear thoughts and recommendations from others regarding this issue.

calling aclose() on a subscription doesn't cancel task

I put together a new version of graphql-ws specifically designed for graphql-core-next, and one problem I've run into is that calling aclose() on an async generator doesn't cancel whatever is waiting on its __anext__() (https://bugs.python.org/issue28721).

The result is that subscriptions in graphql-ws aren't really cancellable, and instead emit the following error:

Task was destroyed but it is pending!
task: <Task pending coro=<<async_generator_asend without __name__>()> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10ed07d68>()]> cb=[_wait.<locals>._on_completion() at /Users/dfee/.pyenv/versions/3.7.0/lib/python3.7/asyncio/tasks.py:436]>
^

BTW, you can check out that repo (pre-publish) at https://github.com/dfee/graphql-next-ws
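
Until the underlying Python issue is addressed, one consumer-side workaround is to wrap __anext__() in a task and cancel that task explicitly before closing the generator. A minimal sketch (the function names here are made up for illustration):

```python
import asyncio


async def subscription():
    """A subscription-like async generator that waits for events."""
    while True:
        await asyncio.sleep(3600)  # stand-in for waiting on the next event
        yield "event"


async def consume_and_cancel():
    agen = subscription()
    # Wrap __anext__() in a task so it can be cancelled explicitly;
    # aclose() alone would leave the pending __anext__ hanging (bpo-28721).
    task = asyncio.ensure_future(agen.__anext__())
    await asyncio.sleep(0.01)  # let the task reach its await point
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    await agen.aclose()
    return "closed"


result = asyncio.run(consume_and_cancel())
print(result)  # prints "closed"
```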

Support "container_type" for input object types

GraphQL-Core supports a container_type callable for input object types, which is used for implementing the container feature of Graphene input object types.

GraphQL-Core-Next should support a similar functionality that can be used by Graphene.

mypy seemingly can't find types for this package

When I attempt to run mypy, I get:

ghstack/github_fake.py:1: error: Cannot find module named 'graphql'
ghstack/github_fake.py:1: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports

Now, I know that you guys have types... but the mypy docs seem to suggest that you need to do something extra to get mypy to treat the third party package as having types?

As a temporary workaround, I've symlinked graphql in my local directory to graphql-core-next/graphql, and that convinces mypy to check the types.
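
The proper fix would be for the package to ship a py.typed marker file per PEP 561, so mypy treats the installed distribution as typed. Until then, a user-side workaround is to silence the missing-import error in mypy configuration. A sketch:

```ini
# mypy.ini: silence the error for the graphql package only,
# until it ships a PEP 561 py.typed marker
[mypy-graphql.*]
ignore_missing_imports = True
```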

Get schema data for query fields?

Is there a standard way to get the schema information (type, directives) for a query node/leaf?

Concretely, I'm looking at GitHub's GraphQL, and there's two things I would like to be able to do:

  • Handle custom scalars: GitHub defines a handful of custom scalars (a few date/time types, a few URL types, a structured-data blob)
    • Serialization: Look at the types in the query variables, traverse, find serializers for the scalars, apply to data
    • Deserialization: Look at types in the query body, traverse, find deserializers for the scalars, apply to data
  • Directives: GitHub defines an @preview() directive whose data needs to be conveyed in the HTTP headers. I would like to be able to examine a query, get the directives, and pull out the appropriate information

I think the thing I would like is a utility to walk a query (field-by-field) and get the type information for that field.

asyncio example in readme raises TypeError

The asyncio example in README.md raises

TypeError: __init__() got an unexpected keyword argument 'resolve'

Here is the example:

import asyncio
from graphql import (
    graphql, GraphQLSchema, GraphQLObjectType, GraphQLField, GraphQLString)


async def resolve_hello(obj, info):
    await asyncio.sleep(3)
    return 'world'

schema = GraphQLSchema(
    query=GraphQLObjectType(
        name='RootQueryType',
        fields={
            'hello': GraphQLField(
                GraphQLString,
                resolve=resolve_hello)
        }))


async def main():
    query = '{ hello }'
    print('Fetching the result...')
    result = await graphql(schema, query)
    print(result)


loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()

Translatable descriptions

graphene-django 2 passes a lazy translated string as the description if that's what's used on the model. This way the descriptions can be translated into the language that is current at introspection time.

But graphql-core 3 asserts that the description is a str, which forces choosing the translation at definition time.
I couldn't find such a type check in GraphQL.js; they just declare descriptions as strings but don't check it (at least not in definition.js).

Would it make sense to loosen that restriction a bit? Or even drop the type check and add a str() around descriptions in introspection?

I see that a similar issue would apply to deprecation_reason; both are human-readable text.

Document difference from graphql-js

@Cito You're doing a great job keeping this lib in sync with graphql-js.
Also big thanks for taking the time to submit PRs back to graphql-js 👍

I was browsing the source code and notice that you already implemented middlewares:
https://github.com/graphql-python/graphql-core-next/blob/master/graphql/execution/middleware.py

Would be great to have a list of such changes, so we can figure out whether each one is Python-specific or should be pushed into graphql-js.

I'm also interested in any new tests you added that we could mirror in graphql-js.

Exceptions in subscribe() are not handled by executor

Hello!

When testing subscriptions, we noticed that exceptions raised in subscribe() are not handled by the query executor. Instead they surface outside of graphql, while we are consuming the results to create an HTTP response, and Python logs Task exception was never retrieved to the console.

The quick fix we did for this was wrapping the whole loop in try/except and handling those exceptions there, but this has the obvious downside of not producing a GraphQLError with path and location.
