
ash_graphql's Introduction


AshGraphql

Welcome! This is the extension for building GraphQL APIs with Ash. The generated GraphQL APIs are powered by Absinthe. Generate a powerful GraphQL API in minutes!

Tutorials

Topics

Reference



ash_graphql's Issues

(FunctionClauseError) no function clause matching in MyApp.Schema.

Is your feature request related to a problem? Please describe.

Looks like a bug or the docs could be improved:

Whenever I make a change to anything relating to the GraphQL API, I get the following error relating to the schema file lib/my_app/schema.ex in VSCode:

(FunctionClauseError) no function clause matching in MyApp.Schema."-inlined-__absinthe_function__/2-"/2    
    
    The following arguments were given to MyApp.Schema."-inlined-__absinthe_function__/2-"/2:
    
        # 1
        {Absinthe.Blueprint.Schema.ObjectTypeDefinition, :mutation}
    
        # 2
        :is_type_of
    
    (my_app 0.1.0) MyApp.Schema."-inlined-__absinthe_function__/2-"/2

Stacktrace:
  (absinthe 1.7.0) lib/absinthe/phase/schema/inline_functions.ex:40: Absinthe.Phase.Schema.InlineFunctions.inline_function/3
  (elixir 1.14.0) lib/enum.ex:2468: Enum."-reduce/3-lists^foldl/2-0-"/3
  (absinthe 1.7.0) lib/absinthe/phase/schema/inline_functions.ex:31: Absinthe.Phase.Schema.InlineFunctions.inline_functions/3
  (elixir 1.14.0) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
  (elixir 1.14.0) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
  (absinthe 1.7.0) lib/absinthe/phase/schema/inline_functions.ex:18: Absinthe.Phase.Schema.InlineFunctions.inline_functions/3
  (absinthe 1.7.0) lib/absinthe/blueprint/transform.ex:16: anonymous fn/3 in Absinthe.Blueprint.Transform.prewalk/2

For example, I just edited the moduledoc for one of the Ash APIs included in the GQL schema and it caused the above. These messages go away if I restart VSCode.

Describe the solution you'd like
This error seems to go away if the statement

  mutation do
  end

is added to the Absinthe schema.

So if this is the solution, it would be good to add that to the getting started doc.
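
For reference, a minimal sketch of what that workaround looks like in the Absinthe schema module (the module and API names here are placeholders, not taken from the report):

defmodule MyApp.Schema do
  use Absinthe.Schema

  # Placeholder API list; replace with your own Ash APIs
  use AshGraphql, apis: [MyApp.Api]

  query do
    # Custom Absinthe queries can be added here
  end

  # The empty mutation block is the workaround reported above
  mutation do
  end
end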

Load relationships from custom queries

I am trying to make a custom query that returns a type defined in AshGraphql.

This is my user type

defmodule Core.Account.User do
  use Ash.Resource,
    data_layer: AshPostgres.DataLayer,
    extensions: [AshAuthentication, AshGraphql.Resource]

  #...

  relationships do
    has_many :reviews, Core.Review.Review do
      api Core.Review
      destination_attribute :writer_id
    end
  end

  graphql do
    type :user
    hide_fields([:hashed_password])

    queries do
      read_one :user_by_email, :get_by_email
    end
  end
end

And this is my review type

defmodule Core.Review.Review do
  use Ash.Resource,
    data_layer: AshPostgres.DataLayer,
    extensions: [AshGraphql.Resource]

  #...

  relationships do
    belongs_to :writer, Core.Account.User do
      api Core.Account
      allow_nil? false
    end
  end

  actions do
    defaults [:read]

    create :create do
      primary? true
      argument :writer, Core.Account.User, allow_nil?: false
      change manage_relationship(:writer, type: :append)
    end
  end

  graphql do
    type :review
    queries do
      list :reviews, :read
    end
  end
end

And this is my schema

defmodule Graphql.Schema do
  use Absinthe.Schema

  @apis [Core.Account, Core.Classification, Core.Review]

  use AshGraphql, apis: @apis

  query do
    field :viewer, type: non_null(:user) do
      resolve(fn _parent, _args, resolution ->
        case resolution.context.actor do
          nil -> {:error, "Not authenticated"}
          actor -> {:ok, actor} # <- |> Core.Account.load!(:reviews)
        end
      end)
    end
  end
end

And finally this is my query

viewer {
    id
    email
    reviews {
      writer {
        id
      }
      name
    }
  }
  
  userByEmail(email: "email") {
    reviews {
      writer {
        email
      }
    }
  }

What happened:

  1. When I use the custom viewer query, I can get the user's id and email.
  2. When I try to get reviews from the viewer query, I get the error "Cannot return null for non-nullable field" for reviews[0].id.
  3. When I pipe through Core.Account.load!(:reviews) before returning the user in the custom query, there is no error and I can get viewer.reviews.
  4. But if I try to get viewer.reviews.writer, I get an error again. It seems like no load logic exists when I use a custom query.
  5. If I try userByEmail, which is generated by Ash, I can query nested fields.

Can AshGraphql load relationship fields when the query or mutation is custom?
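
For reference, a minimal sketch of the explicit-load workaround from point 3, extended with a nested load so that writer is available too. This is a sketch of a workaround, not a statement that AshGraphql cannot do the loading itself; it assumes the standard Ash nested-load syntax:

query do
  field :viewer, type: non_null(:user) do
    resolve(fn _parent, _args, resolution ->
      case resolution.context.actor do
        nil ->
          {:error, "Not authenticated"}

        actor ->
          # Explicitly load the relationships the GraphQL selection will touch.
          # `reviews: :writer` loads each review and, for each review, its writer.
          {:ok, Core.Account.load!(actor, reviews: :writer)}
      end
    end)
  end
end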

No GraphQL fields exposed by AshGraphQL

After following the Ash Framework tutorial, then the AshGraphQL tutorial, the resulting GraphQL schema does not contain any of the fields that should have been created from the tutorial.

$ curl 'http://localhost:8081/gql' -H 'Content-Type: application/json' --data-raw '{"query":"{ listTickets { id }}"}'
{"errors":[{"locations":[{"column":3,"line":1}],"message":"Cannot query field \"listTickets\" on type \"RootQueryType\"."}]}

I double and triple checked my code against the tutorial, but couldn't find any discrepancies.

My app is published to this public GitHub repo: https://github.com/moxley/helpdesk

I must be doing something wrong, because nobody else seems to have run into this issue.

Remove auto types for everything except `Ash.Type.Enum` and `Ash.Type.NewType`

The maintenance burden of deriving types automagically for things like this:

argument :foo, :atom do
  constraints one_of: [:foo, :bar, :baz]
end

is surprisingly high, and it is surprisingly easy to get wrong. We're already getting it wrong when there are conflicts between arguments and attributes with the same name. What we need to do from now on is only do this for explicitly named Ash.Type.Enum and Ash.Type.NewType types (i.e. those that define a graphql_type/1 callback). This goes for automatically derived enums, maps and unions.
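
For illustration, a hedged sketch of the explicit alternative: instead of deriving a type from an inline one_of constraint, the values live in a named Ash.Type.Enum that declares its own GraphQL type (the module name and type name below are made up):

defmodule MyApp.Types.FooKind do
  # Explicit, named enum replacing the inline `one_of` constraint
  use Ash.Type.Enum, values: [:foo, :bar, :baz]

  # AshGraphql uses this instead of deriving a type automatically
  def graphql_type(_), do: :foo_kind
  def graphql_input_type(_), do: :foo_kind
end

The argument would then reference the named type, e.g. argument :foo, MyApp.Types.FooKind.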

Unable to update calculation type with attribute_types

Is your feature request related to a problem? Please describe.
Hello! This may either be a bug or a feature request, but it seems I'm unable to update a calculation's graphql type through attribute_types.

Describe the solution you'd like
I'd like to wrap a calculation's type in a non_null. More specifically, I have a calculation with type {:array, {:array, {:array, :integer}}} which translates to [[[Int!]!]!] but I'd like one more non-null on the end there via attribute_types {:non_null, {:array, {:array, {:array, :integer}}}}.

Describe alternatives you've considered
I've tried creating a new Ash type using use Ash.Type.NewType, but I ran into errors trying out a subtype of {:array, ...}. Ditto with a full use Ash.Type, where for some reason even the simplified def graphql_type(_), do: {:array, :integer} gave me this error:

== Compilation error in file lib/myapp_web/schema.ex ==
** (FunctionClauseError) no function clause matching in Absinthe.Phase.Schema.Validation.TypeReferencesExist.inner_type/1

    The following arguments were given to Absinthe.Phase.Schema.Validation.TypeReferencesExist.inner_type/1:

        # 1
        {:array, :integer}

    Attempted function clauses (showing 3 out of 3):

        defp inner_type(value) when is_binary(value) or is_atom(value)
        defp inner_type(%{of_type: type})
        defp inner_type(%Absinthe.Blueprint.TypeReference.Name{name: name})

    (absinthe 1.7.6) lib/absinthe/phase/schema/validation/type_references_exist.ex:122: Absinthe.Phase.Schema.Validation.TypeReferencesExist.inner_type/1
    (absinthe 1.7.6) lib/absinthe/phase/schema/validation/type_references_exist.ex:104: Absinthe.Phase.Schema.Validation.TypeReferencesExist.check_or_error/4
    (absinthe 1.7.6) lib/absinthe/blueprint/transform.ex:16: anonymous fn/3 in 

Express the feature either with a change to resource syntax, or with a change to the resource interface

For example

  graphql do
    ...
    attribute_types my_calc: {:non_null, {:array, {:array, {:array, :integer}}}}
  end

Support relay for associations

I've laid the groundwork for relay support, pushed to master now. There are three remaining steps:

  • when fetching :has_many or :many_to_many relationships, we need to wrap them in a connection if the destination action supports pagination

Association's calculation not being loaded on mutation

I have a mutation that, in its result, loads the record and its belongs_to association. The association has a custom calculation attribute that loads a boolean value using an additional DB query. Something like:

mutation ...($id: ID, $input: ...) {
  ...(id: $id, input: $input) {
    result {
      ...
      user {
        someBooleanFlag
      }
    }
  }
}

The API call fails with:

 ** (Absinthe.SerializationError) Could not serialize term #Ash.NotLoaded<:calculation> as type Boolean

I don't see the additional query in the logs, so I'm assuming the calculation is not called at all.

Make GQL relationship filters explicit in how they compose

Right now, you might supply a filter like this:

{
  comments: {
    title: {
      eq: "Hello"
    },
    score: {
      lt: 10
    }
  }
}

But these are unwieldy if you want to do certain kinds of composition of filters. You can use "and" and "or", but the problem is that it is not exactly clear how those will compose (and results can be surprising, since under the hood a data layer will "join"). The new filter syntax would leverage the new exists(...), and we would also provide a new all(...) function (actual name TBD) in the expression syntax, which would allow for this:

{
  comments: {
    any: {
      title: {
        eq: "hello"
      }
    }
  }
}

We would make this a toggle for quite a while to avoid breaking anyone's APIs :)

Docs error in `managed_relationships`

Referencing here: https://hexdocs.pm/ash_graphql/AshGraphql.Resource.html#module-managed_relationship

I believe this

  managed_relationships do
    managed_relationship :create_post, :comments
  end

Should be

  managed_relationships do
    managed_relationship :create, :comments
  end

While there, I think it might be good to also document the case where we want to use a Comment resource to derive a type for the argument rather than a json array.

Also this statement is a bit confusing:

By default, the {:array, :map} would simply be a json[] type. If the argument name is placed in this list,

I'd be glad to submit one or more PRs to update the docs. For the last bit, I'd need clarification on what is actually meant by that statement.

Add support for Relay refetching (Relay-compliant IDs and the `Node` query)

Is your feature request related to a problem? Please describe.
Ash GraphQL implements the Relay Node interface, but Relay's refetching mechanism assumes that IDs are globally unique (and, essentially, encode enough information to retrieve a specific object given only its ID). Currently, Ash GraphQL returns the data layer primary key (possibly encoded if composite), which, even if globally unique (e.g. a UUID), still doesn't contain enough information to target a specific resource on a data layer. Moreover, Relay assumes there's a root node query.

Describe the solution you'd like
It should be possible to use the Node refetch mechanism with Ash GraphQL.

Describe alternatives you've considered
Passing define_relay_types?: false (found in a discussion on Discord) doesn't really do much, because while it allows you to manually define the node interface, it still doesn't give access to the extension points needed to handle the ID encoding/decoding.

Express the feature either with a change to resource syntax, or with a change to the resource interface
I guess this would make sense globally, so I imagine something like:

  use AshGraphQL, relay_ids?: true

I'm not sure if this should be a separate option or should take the values from define_relay_types? in the long run, but clearly it makes sense to have a separate variable until the next major.

This would take care of generating the blueprints to both encode to and decode from the :id field, embedding resource type + primary key information.

The other aspect of this is that sometimes mutations (also queries?) can take as argument an ID to another object. For that I think that some more separate syntax has to be added explicitly marking the arguments that need to be translated:

update :frobnicate_with_the_foobar, :update, relay_ids: [:foobar_id]

I can try to tackle this if the rough plan sounds good, or let me know if I'm missing some details.
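
To make the ID requirement concrete, here is a toy sketch of the kind of globally unique ID Relay expects, assuming the common convention of Base64-encoding a "type:primary_key" pair. This is an illustration of the concept, not AshGraphql's actual encoding:

defmodule RelayIdSketch do
  # Encode a resource type and primary key into one opaque, globally unique ID.
  def encode(type, primary_key) do
    Base.encode64("#{type}:#{primary_key}")
  end

  # Decode it back, so a root `node` query can route to the right resource.
  def decode(relay_id) do
    with {:ok, decoded} <- Base.decode64(relay_id),
         [type, primary_key] <- String.split(decoded, ":", parts: 2) do
      {:ok, type, primary_key}
    else
      _ -> {:error, :invalid_id}
    end
  end
end

# RelayIdSketch.encode("user", "fd941de4-...") #=> an opaque, globally unique ID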

Feature: Add `has_all` filter for arrays/lists

Is your feature request related to a problem? Please describe.
Given the type attribute(:roles, {:array, :atom}), it is possible but clunky to use has and boolean logic with and to filter on rows where the roles field contains all items in a given list.
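
For contrast, the clunky composition described above looks roughly like this today, sketched in the same filter notation used elsewhere in these issues (assuming a has filter per value combined with and):

filter: {
  and: [
    { roles: { has: OPTION1 } },
    { roles: { has: OPTION2 } }
  ]
}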

Describe the solution you'd like
I would like a filter function akin to roles has all items in [opts] that would handle that logic.

Express the feature either with a change to resource syntax, or with a change to the resource interface

For example:

filter: { roles: { has_all: [OPTION1, OPTION2, ...] } }

When a GraphQL query has no arguments, I expect the `Ash.Query` object (as visible via the `prepare` function in the DSL) to have an empty arguments map.

I'm new to AshGraphql, and while tinkering I discovered unexpected behavior: even when I do not define any arguments in the GraphQL query, the Ash.Query always contains the field name in the arguments map with a nil value.

    query {
      postScore {
        id
        score
      }
    }

And as seen while debugging in prepare...

      prepare(fn query, _ ->
        dbg(query)
      end)
query #=> #Ash.Query<
  resource: AshGraphql.Test.Post,
  arguments: %{score: nil},
  select: [:score, :id]
>

Some staging of the issue is here:

zorn@995e169

This comes from a Discord discussion here:

https://discord.com/channels/711271361523351632/1141111863648731197/1141128943995457657

There it was suggested that the problem might be in set_query_arguments/3, but my initial introspection could not find an issue with that part of the code. It almost feels like the arguments are coming from somewhere else.

Allow specifying that query return type can't be nil

For this query, I would like to specify that the user it returns can't be nil.

graphql do
  type :user

  queries do
    read_one :current_user, :current_user
  end
end

Do you have any thoughts about how you would want that implemented? I can open a PR for it.

Not handling Ash.NaiveDatetime

Hi! I have a resource with a :naive_datetime type:

defmodule MyApp.SomeResource do
  ... 

  attributes do
    ...
    attribute :scheduled_at, :naive_datetime
  end

  graphql do
    type :some_resource
  end
end

At compile time that results in an error:

** (FunctionClauseError) no function clause matching in AshGraphql.Resource.get_specific_field_type/3

    The following arguments were given to AshGraphql.Resource.get_specific_field_type/3:

        # 1
        Ash.Type.NaiveDatetime

        # 2
        %Ash.Resource.Attribute{name: :received_at, type: Ash.Type.NaiveDatetime, allow_nil?: true, generated?: false, primary_key?: false, private?: false, writable?: true, always_select?: false, default: nil, update_default: nil, description: nil, source: :received_at, match_other_defaults?: false, sensitive?: false, filterable?: true, constraints: []}

        # 3
        MyApp.SomeResource

It seems the naive_datetime built-in type is not being handled. I was able to fix it by manually specifying the type:

  graphql do
    type :some_resource
    attribute_types [scheduled_at: :naive_datetime]
  end

And then in MyApp.Schema I also need to import_types Absinthe.Type.Custom (which I was already doing).

It also seems that graphql fields are not being generated for the created/updated timestamps, which is unexpected.

mutations block needs to be removed for schema to properly load

Describe the bug
Context: When there are no mutations
The schema is not detected by ash_graphql if the following block is left in the MyApp.Schema file:

  mutation do
  end

To Reproduce
Start up a project with ash_graphql. If no mutations exist but a schema for a resource exists, no schema is detected by the /playground, and I presume by ash_graphql itself.

Expected behavior
The mutation do ... end block should be able to be present in the schema file no matter what.

Runtime

  • Elixir version
    1.11.3
  • Erlang version
    23
  • OS
  • Ash version
     {:ash, "~> 1.39.5"},
     {:ash_admin, "~> 0.2.5"},
     {:ash_postgres, "~> 0.36.2"},
     {:ash_phoenix, "~> 0.4.11"},
     {:ash_graphql, "~> 0.15.2"},
  • any related extension versions


Return %Ash.Error.Forbidden{} if FieldPolicy fails

Is your feature request related to a problem? Please describe.

As Ash returns {:ok, resource} even if a field policy fails, I have to either manually check whether all the selected fields are there before I continue, or I get an error somewhere down the line.

Describe the solution you'd like
I would like Ash to return {:error, %Ash.Error.Forbidden{}} if a selected field is forbidden.

Describe alternatives you've considered
I considered checking all the fields myself, but that would become very tedious. I'm also not sure how this would work
with Extensions like AshGraphql.

Express the feature either with a change to resource syntax, or with a change to the resource interface

I think a global config might be nice like

config :ash, return_error_for_forbidden_field: true

and having the possibility to override it when calling the API like

Api.read(query, actor: actor, return_error_for_forbidden_field: true)

I'm not sure if both things are necessary; if a forbidden error is returned, it would also be possible to select only allowed values. In general, I would prefer to fail early instead of somewhere down the line.


Better error for typos in Aggregates when using AshGraphQL

Is your feature request related to a problem? Please describe.
The AshGraphQL library gives an unhelpful error when there is a typo in an aggregate definition, whereas if AshGraphQL is not used, Spark gives a helpful error.

Describe the solution you'd like
Somehow the Spark error should be surfaced, or the AshGraphQL error should give information as good as the Spark error's.

Additional context
I've created an example repo with a simple project and two branches that contain the typo error; the only difference is that one branch has AshGraphQL installed and the other does not.

https://github.com/BryceLabs/ash_issues_examples/tree/aggregate-typo-issue
https://github.com/BryceLabs/ash_issues_examples/tree/aggregate-typo-issue-graphql

In the project there is an Order resource which has_many OrderItem resources. Order aggregates the subtotal calculation of OrderItem via a sum. However, there is a typo:

aggregates do
  sum :subtotal, :order_items, :sub_total
end

sub_total should be subtotal
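
For clarity, the corrected definition would be:

aggregates do
  # :subtotal is the calculation defined on OrderItem
  sum :subtotal, :order_items, :subtotal
end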

The error given by the plain Ash project is this:

** (EXIT from #PID<0.98.0>) an exception was raised:
** (Spark.Error.DslError) [App.Store.Order]
aggregates -> sub_total:
All aggregates fields must be attributes or calculations. Got: :sub_total
(ash 2.19.14) lib/ash/resource/verifiers/ensure_aggregate_field_is_attribute_or_calculation.ex:23: anonymous fn/3 in Ash.Resource.Verifiers.EnsureAggregateFieldIsAttributeOrCalculation.verify/1
(elixir 1.16.1) lib/enum.ex:2528: Enum."-reduce/3-lists^foldl/2-0-"/3
(ash 2.19.14) lib/ash/resource/verifiers/ensure_aggregate_field_is_attribute_or_calculation.ex:10: Ash.Resource.Verifiers.EnsureAggregateFieldIsAttributeOrCalculation.verify/1
lib/app/store/resources/order.ex:1: anonymous fn/1 in App.Store.Order.verify_spark_dsl/1
(elixir 1.16.1) lib/enum.ex:987: Enum."-each/2-lists^foreach/1-0-"/2
lib/app/store/resources/order.ex:1: App.Store.Order.verify_spark_dsl/1
(elixir 1.16.1) lib/enum.ex:987: Enum."-each/2-lists^foreach/1-0-"/2
(elixir 1.16.1) lib/module/parallel_checker.ex:271: Module.ParallelChecker.check_module/3

The error given when AshGraphQL is installed is this:

== Compilation error in file lib/app/schema.ex ==
** (UndefinedFunctionError) function nil.embedded?/0 is undefined. If you are using the dot syntax, such as module.function(), make sure the left-hand side of the dot is a module atom
nil.embedded?()
(ash_graphql 0.27.0) lib/resource/resource.ex:2619: AshGraphql.Resource.filterable?/2
(elixir 1.16.1) lib/enum.ex:4277: Enum.filter_list/2
(ash_graphql 0.27.0) lib/resource/resource.ex:2532: AshGraphql.Resource.aggregate_filter_fields/2
(ash_graphql 0.27.0) lib/resource/resource.ex:2498: AshGraphql.Resource.resource_filter_fields/2
(ash_graphql 0.27.0) lib/resource/resource.ex:1395: AshGraphql.Resource.args/5
(ash_graphql 0.27.0) lib/resource/resource.ex:528: anonymous fn/7 in AshGraphql.Resource.queries/6
(elixir 1.16.1) lib/enum.ex:1700: Enum."-map/2-lists^map/1-1-"/2

Associated Elixir Forum post: https://elixirforum.com/t/aggregates-causing-error-in-ashgraphql/62112

Allow customization of key `result(s)` with mutations

Is your feature request related to a problem? Please describe.
Right now when you create a mutation with AshGraphQL, the output data will be inside the result or results field.

Some people prefer to have the resource name there instead of a generic one.

For example, if the resource being returned is Offer, then the key would be offer or offers.

Describe the solution you'd like
I would like to have some options that I add to the mutation to customize that key name.

Describe alternatives you've considered
I can achieve that if I create the whole mutation manually, but that kind of defeats the purpose of using Ash.
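
One possible shape for this, purely as a sketch (the result_name option is hypothetical and does not exist in AshGraphql today):

graphql do
  type :offer

  mutations do
    # Hypothetical option renaming the generic `result` key to `offer`
    create :create_offer, :create, result_name: :offer
  end
end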

Arguments of relationship read action in nested query are not present in preparation

Is your feature request related to a problem? Please describe.

I have the following setup:

  1. Resource A:
defmodule MyApp.ResourceA do
  use Ash.Resource, data_layer: AshPostgres.DataLayer, extensions: [AshGraphql.Resource]
  
  actions do
    read :read do
      primary? true
    end
  end
  
  attributes do
    ...
  end
  
  relationships do
    has_many :events_between_dates, MyApp.Event do
      read_action :read_between_dates
    end
  end
  
  graphql do
    type :resource_a
    
    queries do
      list :resource_a, :read
    end
  end
end
  2. Resource Event:
defmodule MyApp.Event do
  use Ash.Resource, data_layer: AshPostgres.DataLayer, extensions: [AshGraphql.Resource]
  
  actions do
    read :read_between_dates do
      argument :start_date, :date
      argument :end_date, :date
      
      prepare MyApp.Preparations.SomePreparation
      
      manual MyApp.Actions.ReadEventsBetweenDates
    end
  end
  
  attributes do
    ...
  end
  
  graphql do
    type :event
    
    queries do
      list :resource_a_between_dates, :read_between_dates
    end
  end
end
  3. This query:
query MyQuery {
  resource_a {
    events_between_dates(start_date: "2023-06-01", end_date: "2023-06-22") {
      id
    }
  }
}

In MyApp.Preparations.SomePreparation, the query doesn't contain any arguments. However, when I put a filter on events_between_dates in my query, the filters are present in the query passed to the preparation. The schema, however, is generated properly; the arguments are available in it.

Describe the solution you'd like

It seems to me that there's a bug that prevents passing arguments to a relationship nested in a GraphQL query.

Allow paginating nested relationships

Is your feature request related to a problem? Please describe.
Currently, passing relay? true to a list query enables Connection pagination on the query result. Sometimes, though, the requirement is to have a field paginated with a Connection. See, for example, the versions field on the Package type in the GitHub GraphQL API.

Describe the solution you'd like
There should be a way to require a list field to be paginated.

Express the feature either with a change to resource syntax, or with a change to the resource interface
Note that while relay? true is a property of the query, the paginated field should actually be a property of the type: if e.g. an Author has a posts field that returns a PostConnection, then everything that returns an Author will have to consume posts through a Connection. This also means that the root query could be non-paginated while the field is paginated (which is actually a use case I'm actively interested in).

So the syntax could be something like:

graphql do
  type :author

  relay_connections: [:posts]

  queries do
    list :authors_paginated, :read, relay?: true
    list :authors, :read, paginate_with: nil # Will still have :posts paginated with the Connection
  end
end

I guess the only lists that could be paginated with Connection are relationships (but I might be wrong)

Ambiguous call to relationships

The relationships macro inside the graphql DSL is ambiguous.

  graphql do
    type :environment

    derive_filter?(false)

    queries do
      get :environment, :read, identity: :key
      list(:environments, :read)
    end

    relationships([:accounts])
  end

== Compilation error in file lib/myapp/organizations/resources/environment.ex ==
** (CompileError) lib/myapp/organizations/resources/environment.ex:36: function relationships/1 imported from both Ash.Resource.Dsl and AshGraphql.Resource.Graphql.Options, call is ambiguous

Calling AshGraphql.Resource.Graphql.Options.relationships works as expected.
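
In other words, the workaround is to fully qualify the call, roughly like this (a sketch based on the snippet above and the reporter's observation):

graphql do
  type :environment

  derive_filter?(false)

  queries do
    get :environment, :read, identity: :key
    list(:environments, :read)
  end

  # Fully qualified to disambiguate from Ash.Resource.Dsl.relationships/1
  AshGraphql.Resource.Graphql.Options.relationships([:accounts])
end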

Feature: Add `intersects` filter for arrays/lists

Is your feature request related to a problem? Please describe.
Given the type attribute(:roles, {:array, :atom}), it is possible but clunky to use has and boolean logic with or to filter on rows where the roles field contains any one or more of the items in a given list.

Describe the solution you'd like
I would like a filter function akin to roles intersects items in [opts] that would handle that logic.

Express the feature either with a change to resource syntax, or with a change to the resource interface

For example:

filter: { roles: { intersects: [OPTION1, OPTION2, ...] } }

Though a traditional intersection usually returns a list of all items that are contained within both lists, in this case it could be lazy and return true for the first matching item in both lists since it's a boolean filter rather than a return value.

Type should be optional for resources that only expose generic actions

Is your feature request related to a problem? Please describe.
I am working on a resource that only needs to expose a single generic action. In that case, since the resource itself is never exposed in GraphQL, the type option is actually not needed, but AshGraphql requires me to provide one.

Describe the solution you'd like
To avoid breaking compatibility and ensure that the choice is informed, I think there should be an additional option (e.g. generate_type? false), and either type or generate_type? false has to be provided.

Alternatively, type could be enforced only for resources that expose some non-generic action.

Describe alternatives you've considered
The main alternative right now is moving the generic action in another resource that already has a connected type.

Express the feature either with a change to resource syntax, or with a change to the resource interface

  graphql do
    generate_type? false
  end

Error when sorting on aggregate that is not included in query

We have some existing GQL queries that have been successfully sorting on an aggregate without that aggregate being requested in the response. As of this ash_postgres commit, an error is raised:

[error] b401d27c-4794-4c58-9f5c-87bfa716189c: Exception raised while resolving query.

** (KeyError) key :query not found in: %Ash.Resource.Aggregate{
  name: :type_key,
  relationship_path: [:type

Adding the aggregate that is being sorted on to the query so it's included in the response resolves the issue.

I am able to reproduce the error directly in the ash_postgres test suite by commenting out this line, which produces this:

** (KeyError) key :query not found in: 
  %Ash.Resource.Aggregate{
    name: :first_comment,
    relationship_path: [:comments],
    filter: [],
    kind: :first,
    implementation: nil,
    read_action: nil,
    constraints: nil,
    type: nil,
    description: nil,
    private?: false,
    field: :title,
    sort: [title: :asc_nils_last],
    default: nil,
    uniq?: nil,
    authorize?: true,
    filterable?: true
  }

It seems like ash_graphql is not loading the aggregate when it's being sorted on but not included in the main query. This is either intended, in which case I will just include the aggregate in the query fields, or it's a bug and ash_graphql should load it automatically.

Feat: Improve pagination info

Is your feature request related to a problem? Please describe.

Relay support #25 will add page_info to the relay version of the GraphQL schema. It would be ideal to have similar page info added to the non-relay pagination schemas.

Describe the solution you'd like

Basically, it would be nice to have all the pagination conveniences that AshPhoenix gives, such as next_page?, page_number, last_page, etc.

For example

{
  posts(limit: 10, offset: 2) {
    count
    has_next_page
    has_previous_page
    page_number
    last_page
    results {
      title
      body
    }
  }
}

New pagination default not included in upgrade guide

In Ash GraphQL 1.0.0-rc.3, for list queries where the resource action has pagination, the GraphQL query now returns paginated results where previously it didn't.

I actually prefer this, but it is a change that I didn't see in the upgrade guide. I think the new default is :keyset if available.
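
For anyone hitting this, the opt-out referenced in a later issue here looks roughly like this (a sketch assuming the paginate_with option from the 1.0 upgrade guide; the query and type names are placeholders):

graphql do
  type :post

  queries do
    # Restores the previous non-paginated list result for this query
    list :list_posts, :read, paginate_with: nil
  end
end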

UTC timestamps are improperly formatted (bug)

I have an attribute that is utc_datetime_usec on the resource side. That resource can be accessed by the GraphQL API consumer. It behaves properly in the iex console:

iex([filtered]@localhost)5> [filtered].Broker.get_quote_by_id!("fd941de4-d9f0-4419-8265-1bead97af732", authorize?: false).expires_at
~U[2023-10-30 14:01:10.918342Z]

However, the GraphQL result is missing the UTC suffix:

[Screenshot 2023-10-30: the GraphQL response shows the timestamp without the UTC suffix]

which breaks frontend code that tries to parse it into a dayjs object.

Error in graphql playground with `type: :append`

I have a resource that looks similar to this:

actions do
  update :update do
    argument :parts, {:array, :map}, allow_nil?: false
    change manage_relationship(:parts, type: :append)
  end
end

relationships do
  has_many :parts, Part
end

graphql do
  type :widget

  mutations do
    update :add_parts_to_widget, :update
    managed_relationships do
      manage_relationship(:update, :parts, type_name: :parts_to_add_input)
    end
  end
end

This compiles fine, but when I try to load the GraphQL playground it says "No Schema Available" and in the browser console I see:

Uncaught (in promise) Error: PartsToAddInput fields must be an object with field names as keys or a function which returns such an object.

This works if I change the type to type: :direct_control, but IIUC that fails to capture the intended behavior that this action should only allow adding new elements to the relationship.
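
For reference, the workaround mentioned above amounts to this change in the action, at the cost of the looser semantics described:

actions do
  update :update do
    argument :parts, {:array, :map}, allow_nil?: false
    # Avoids the playground error, but allows more than just appending
    change manage_relationship(:parts, type: :direct_control)
  end
end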

Error when using `paginate_with: nil` and the default `read` action in Ash 3.0

I'm seeing an issue with AshGraphql and the new pagination of the default read action.

I followed the upgrade guide and added paginate_with: nil to my GraphQL query:

queries do
  list :list_categories, :read, paginate_with: nil
end

And I call the query like this:

query ListCategories {
  listCategories {
    id
    name
    icon
  }
}

The types are happy, but when running the query I get this error:

** (KeyError) key :id not found in: %{
  count: %Absinthe.Type.Field{
    identifier: :count,
    name: "count",
    description: "Total count on all pages",
    type: :integer,
    ...
  },
  results: %Absinthe.Type.Field{
    identifier: :results,
    name: "results",
    description: "The records contained in the page",
    type: %Absinthe.Type.List{
      of_type: %Absinthe.Type.NonNull{of_type: :category}
    },
    ...
  },
  ...

Support sorting by calculations and calculations with arguments

Currently, the sort_input only considers attributes and arguments (it was written before calculation sortability was created).

https://github.com/ash-project/ash_graphql/blob/master/lib/resource/resource.ex#L2005

What we likely need to do is add an option when using use AshGraphql, like sort_schema: :simple | :complex, with :simple being the default. Eventually we may just remove :simple after enough time has passed; the primary thing here is not to break users' APIs without them knowing. We could also support a resource-specific toggle and make separate types, like :simple_sort and :complex_sort. Right now, simple_sort iterates over all sortable field types and makes an enum that consists of all of those fields.

The current pattern on input looks like this:

{
  sort: [
    {
      field: "field1",
      order: "ASC"
    },
    {
      field: "field2",
      order: "DESC"
    }
  ]
}

To support sorting on calculations with arguments, what we are going to need to do is the following:

{
  sort: [
    {
      calc1: {
        input: {...}, // only part of the schema if its a calculation that takes arguments
        order: "ASC"
      }
    },
    {
      calc2: {
        input: {...},
        order: "DESC"
      }
    }
  ]
}

This uses the fact that an object type definition (as opposed to an enum type) will allow us to properly type the input object for each key/value in the sort object type. The annoying thing about the latter schema is that the user could try to "combine" them by including multiple keys. We will have to return an error in that case. Eventually, the @oneOf directive will be a built-in part of the GraphQL spec, so we should potentially just use that, and add a validator for that directive: graphql/graphql-spec#825.

Allow adding a description at the query/mutation level

Is your feature request related to a problem? Please describe.
Right now, the description shown in the GraphQL docs is taken from the action description, but there could be multiple queries/mutations which actually use the same action (e.g. :list and :get could both use the default :read action).

Describe the solution you'd like
There should be a way to customize description at the query/mutation level.

Describe alternatives you've considered
It's possible to work around this by having multiple different actions with different descriptions which do the same thing, but this quickly spirals out of control if the actions need to be kept in sync.

Express the feature either with a change to resource syntax, or with a change to the resource interface

queries do
  get :get_post, :read do
    description "Gets a single post by its ID"
  end

  list :list_posts, :read do
    description "Lists all posts"
  end
end

The description would basically be

query.description || query_action.description

Validation errors should include a path

Is your feature request related to a problem? Please describe.

Validation errors include a fields list that is snakecase, but should instead include a path that is camelCase.

Given a mutation like this:

mutation {
  addOverride(input: {  override: { variantKey: $variantKey}}) { ... }
}

if your variantKey is invalid (InvalidArgument), you'll get a response like this:

%{"data" => %{"addOverride" => %{"errors" => [%{"code" => "invalid_argument", "fields" => ["variant_key"], "message" => "variant key not found"}], "result" => nil}}}

Describe the solution you'd like

You'll want a response like this with path and camelCase

%{"data" => %{"addOverride" => %{"errors" => [%{"code" => "invalid_argument", "path" => ["input", "override", "variantKey"], "message" => "variant key not found"}], "result" => nil}}}

Additional context
I'm unfamiliar with the graphql spec and conventions, there may be a standard here that I'm unaware of.

Keyset pagination - field must be included in query when sorting on that field

When performing a keyset-paginated read that sorts on a field, no results are returned if that same field is not also included in the GQL query. The error only occurs when loading the second page by specifying an after value. It occurs with ash_graphql v0.25.10 and ash main, and is low priority as a workaround exists.

This query to get the first page works as expected:

query KeysetPaginatedPosts {
  keysetPaginatedPosts(
  sort: [{field: INSERTED_AT}]
  first: 1
) {
    count
    startKeyset
    endKeyset
    results{
      id
    }
  }
}

Taking the endKeyset value from the first query, and using it to request the second page causes no results to be returned:

query KeysetPaginatedPosts {
  keysetPaginatedPosts(
  sort: [{field: INSERTED_AT}]
  first: 1
  after: "keyset value from first query here"
) {
    count
    startKeyset
    endKeyset
    results{
      id
    }
  }
}

The response is:

{
  "data": {
    "keysetPaginatedPosts": {
      "count": 6,
      "endKeyset": null,
      "results": []
    }
  }
}

The issue can be worked around by including the field in the query, and then fetching the second page works as expected:

query KeysetPaginatedPosts {
  keysetPaginatedPosts(
  sort: [{field: INSERTED_AT}]
  first: 1
) {
    count
    startKeyset
    endKeyset
    results{
      id
      insertedAt
    }
  }
}

Make sort/filter types optional and/or customizable

Is your feature request related to a problem? Please describe.
Sort/filter types are always added by default to GraphQL queries by ash_graphql, but sometimes you want to implement your own sort/filter mechanism to limit what users can do in your database (e.g. restricting them from creating poorly optimized queries).

Describe the solution you'd like
I think there are two solutions here; it would be great to have both if possible.

The first one is to simply give an option to disable sort or filter types for a specific action.

The second one is to allow the user to customize these types for a specific action: for example, the user could say which fields the sort type exposes as sortable. That way I have the best of both worlds: I can restrict which fields can be filtered or sorted, but still use the ash_graphql types so they are generated automatically for me instead of having to create a custom solution.

Express the feature either with a change to resource syntax, or with a change to the resource interface

For example

graphql do
  type :property

  queries do
    list :list_property, :read do
      # Makes PropertySortField only contain 'updated_at', 'inserted_at' and 'id' fields
      sort_by [:updated_at, :inserted_at, :id]

      # Makes PropertyFilterInput only filter by 'price', 'name' and 'address' fields
      filter_by [:price, :name, :address]
    end
  end
end

Bug: Type that uses `Ash.Type.NewType` defines multiple GraphQL types with the same name.

Having the following type

defmodule MyApp.Types.DayOfWeek do
  @moduledoc false

  use Ash.Type.NewType,
    subtype_of: :atom,
    constraints: [one_of: [:monday, :tuesday, :wednesday, :thursday, :friday, :saturday, :sunday]]

  def graphql_input_type(_), do: :day_of_week
  def graphql_type, do: :day_of_week
  def graphql_type(_), do: :day_of_week
end

That is used in two resources:

defmodule ResourceA do
  ...
  attributes do
    ...
    attribute :days_of_week, {:array, MyApp.Types.DayOfWeek} 
  end
end


defmodule ResourceB do
  ...
  attributes do
    ...
    attribute :day_of_week, MyApp.Types.DayOfWeek
  end

  relationships do
    belongs_to :a_resource, ResourceA, allow_nil?: false
  end
end

Causes the GraphQL type to be defined multiple times resulting in the following error:

== Compilation error in file lib/my_app/schema.ex ==
** (Absinthe.Schema.Error) Compilation failed:
---------------------------------------
## Locations
my_app/deps/ash_graphql/lib/resource/resource.ex:2358
my_app/deps/ash_graphql/lib/resource/resource.ex:2358
my_app/deps/ash_graphql/lib/resource/resource.ex:2358
my_app/deps/ash_graphql/lib/resource/resource.ex:2358
my_app/lib/my_app/graphql/types/common_types.ex:63

Type name "DayOfWeek" is not unique.

References to types must be unique.

> All types within a GraphQL schema must have unique names. No two provided
> types may have the same name. No provided type may have a name which
> conflicts with any built in types (including Scalar and Introspection
> types).

Reference: https://github.com/facebook/graphql/blob/master/spec/Section%203%20--%20Type%20System.md#type-system
---------------------------------------

Clues after debugging

After some debugging, I think it's because the NewType is being defined per resource, as we can see in resource.ex:2297.
I've inserted IO.inspect into get_auto_enums and after resource.ex:2342; when my type was based on Ash.Type.NewType it showed up in these inspects, but when I switched back to Ash.Type.Enum it didn't.

I'm guessing that it's being treated as an "inline enum", i.e. an enum defined like this:

attribute :some_enum, :atom do
  constraints one_of: [:a, :b, :c]
end

because in these inspects I see "types" that are defined like this; the main difference is that those get a unique name per resource.

It would work without defining graphql_input_type, but for some reason the _input postfix isn't added to types defined like that (unlike those "inline enums"), so the same atom is returned for type_name and additional_type_name in resource.ex:2304.

When graphql_input_type is defined, it uses this name everywhere, so there are more locations with conflicts for Absinthe, as seen above.
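
As a point of comparison, the Ash.Type.Enum variant that reportedly does not trigger the duplicate-type error would look roughly like this (a sketch; the graphql_type callbacks mirror the NewType version above):

defmodule MyApp.Types.DayOfWeek do
  @moduledoc false

  # Same values as the NewType version, expressed as an explicit enum
  use Ash.Type.Enum,
    values: [:monday, :tuesday, :wednesday, :thursday, :friday, :saturday, :sunday]

  def graphql_type(_), do: :day_of_week
  def graphql_input_type(_), do: :day_of_week
end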

AshGraphql needs to do the right thing regardless of whether or not there is an empty mutations block.

I defined a resource to join two other resources but got the following error message. I resolved this by defining an empty mutations block.

== Compilation error in file lib/myapp_web/graphql/schema.ex ==
** (Protocol.UndefinedError) protocol Enumerable not implemented for nil of type Atom
    (elixir 1.11.0) lib/enum.ex:1: Enumerable.impl_for!/1
    (elixir 1.11.0) lib/enum.ex:169: Enumerable.member?/2
    (elixir 1.11.0) lib/enum.ex:1693: Enum.member?/2
    (elixir 1.11.0) lib/enum.ex:3369: Enum.filter_list/2
    (elixir 1.11.0) lib/enum.ex:3370: Enum.filter_list/2
    (ash_graphql 0.12.0) lib/api/api.ex:51: AshGraphql.Api.mutations/2
    lib/myapp_web/graphql/schema.ex:6: MyApp.DefaultApi.AshTypes.run/2
    (absinthe 1.6.0) lib/absinthe/pipeline.ex:370: Absinthe.Pipeline.run_phase/3
    (absinthe 1.6.0) lib/absinthe/schema.ex:360: Absinthe.Schema.__after_compile__/2
    (stdlib 3.14) lists.erl:1267: :lists.foldl/3
    (stdlib 3.14) erl_eval.erl:680: :erl_eval.do_apply/6
    (elixir 1.11.0) lib/kernel/parallel_compiler.ex:314: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/

Keyset pagination - aggregate field must be included in query when sorting on that field

When performing a keyset-paginated read that sorts on an aggregate field, an error occurs if that same aggregate field is not also included in the GQL query. The error only occurs when loading the second page by specifying an after value.

Assuming that the Post resource has an aggregate field called authorName, this query to get the first page works as expected:

query KeysetPaginatedPosts {
  keysetPaginatedPosts(
  sort: [{field: AUTHOR_NAME}]
  first: 1
) {
    count
    startKeyset
    endKeyset
    results{
      id
    }
  }
}

Taking the endKeyset value from the first query, and using it to request the second page causes the error to occur:

query KeysetPaginatedPosts {
  keysetPaginatedPosts(
  sort: [{field: AUTHOR_NAME}]
  first: 1
  after: "keyset value from first query here"
) {
    count
    startKeyset
    endKeyset
    results{
      id
    }
  }
}

The error is:

** (Ash.Error.Unknown) Unknown Error

* ** (Ecto.Query.CastError) deps/ash_postgres/lib/expr.ex:485: value `#Ash.NotLoaded<:aggregate>` in `where` cannot be cast to type #Ash.Type.StringWrapper.EctoType<[]> in query

The issue can be worked around by including the aggregate field name in the query, and then fetching the second page works as expected.

query KeysetPaginatedPosts {
  keysetPaginatedPosts(
  sort: [{field: AUTHOR_NAME}]
  first: 1
) {
    count
    startKeyset
    endKeyset
    results{
      id
      authorName
    }
  }
}

Allow giving a description to enum values created with Ash.Type.Enum

AshGraphql is currently missing an ergonomic way to define an enum with some custom descriptions for each value, which are useful for introspection/documentation of the GraphQL schema.

The documentation suggests two possible ways to define a custom enum.

The first one, as far as I know, doesn't allow setting a custom description for each value.

The second one basically involves copy-pasting a piece of the implementation of Ash.Type.Enum, and it has the problem that the values of the enum are then listed in two different parts of the application: the Absinthe type and the Ash type.

My thought is that maybe it could be possible to provide a thin wrapper around Ash.Type.Enum that also allows setting the description, something like:

defmodule MyEnum do
  use AshGraphql.Type.Enum,
    values: [
      {:foo, "This is a foo"},
      {:bar, "This is a bar"},
      :a_value_with_no_description
    ] 
end

If this is feasible and has some value, I can try to work on it.

Feat: Add metadata in results (e.g. complexity, telemetry)

Describe the solution you'd like

The GraphQL spec allows for extension by adding fields outside of data in the results. We can leverage this to add something like a metadata field with useful information about the query, such as the calculated complexity, or telemetry data like query duration.

For example:

{
  "data": { "query": { "results": [] } },
  "metadata": {
    "complexity": 13,
    "query_duration": "78"
  }
}

Additional context

The main thing that needs to be determined is how to add those fields in Absinthe. A quick look through the docs didn't reveal anything obvious, so more detailed exploration or perhaps contacting the Absinthe team may be needed to figure out a solution.

Nested queries and mutations

Is your feature request related to a problem? Please describe.

Currently the ash graphql extension puts all queries and mutations in the Query or Mutation Root (the top level).

This prevents us from exposing two mutations with the same name. Also, with many queries or mutations, it gets harder to navigate the GraphQL docs and find the one you want.

Describe the solution you'd like

In GraphQL you can nest queries and mutations in types like a grouping structure like so:

query {
  group {
    queryA {
      id
    }
  }
}

The ability to easily make use of such a technique in ash_graphql would be useful. It could look like this, but I'm open to suggestions on what to call the nesting (e.g. group):

graphql do
  type :resource

  query do
    group :group_name do
      list :resources
    end
  end
end


Error when creating new records with `managed_relationships` in Ash 3.0

Hi 👋

I'm seeing this issue with my AshGraphql mutation for creating new records with a managed_relationship.

I have this create action

create :create do
  primary? true

  accept :*

  argument :transaction_id, :integer
  argument :recurrence_pattern, :map

  change relate_actor(:user)
  change manage_relationship(:transaction_id, :transaction, type: :append_and_remove)
  change manage_relationship(:recurrence_pattern, type: :direct_control)
end

When used with this code interface everything works fine

define :create_transaction do
  action :create
end

But when used with GraphQL, it doesn't seem to work:

graphql do
  type :transaction

  mutations do
    create :create_transaction, :create
  end
end

This is the error

[error] 0eeefc3d-1caf-47f4-9284-a163e64b945c: Exception raised while resolving query.

** (KeyError) key :type not found in: %{name: :recurrence_pattern}

    (ash_graphql 1.0.0-rc.3) lib/resource/info.ex:59: AshGraphql.Resource.Info.default_managed_relationship/2
    (ash_graphql 1.0.0-rc.3) lib/resource/resource.ex:1231: AshGraphql.Resource.find_manage_change/3
    (ash_graphql 1.0.0-rc.3) lib/graphql/resolver.ex:574: AshGraphql.Graphql.Resolver.handle_argument/6
    (ash_graphql 1.0.0-rc.3) lib/graphql/resolver.ex:529: anonymous fn/6 in AshGraphql.Graphql.Resolver.handle_arguments/3
    (elixir 1.16.1) lib/enum.ex:4842: Enumerable.List.reduce/3
    (elixir 1.16.1) lib/enum.ex:2582: Enum.reduce_while/3
    (ash_graphql 1.0.0-rc.3) lib/graphql/resolver.ex:1022: AshGraphql.Graphql.Resolver.mutate/2
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:234: Absinthe.Phase.Document.Execution.Resolution.reduce_resolution/1
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:189: Absinthe.Phase.Document.Execution.Resolution.do_resolve_field/3
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:174: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:145: Absinthe.Phase.Document.Execution.Resolution.resolve_fields/4
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:88: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:67: Absinthe.Phase.Document.Execution.Resolution.perform_resolution/3
    (absinthe 1.7.6) lib/absinthe/phase/document/execution/resolution.ex:24: Absinthe.Phase.Document.Execution.Resolution.resolve_current/3
    (absinthe 1.7.6) lib/absinthe/pipeline.ex:408: Absinthe.Pipeline.run_phase/3
    (absinthe_plug 1.5.8) lib/absinthe/plug.ex:536: Absinthe.Plug.run_query/4
    (absinthe_plug 1.5.8) lib/absinthe/plug.ex:290: Absinthe.Plug.call/2
    (phoenix 1.7.12) lib/phoenix/router/route.ex:42: Phoenix.Router.Route.call/2
    (phoenix 1.7.12) lib/phoenix/router.ex:484: Phoenix.Router.__call__/5
    (project_ma 1.0.0) lib/project_ma_web/endpoint.ex:1: ProjectMaWeb.Endpoint.plug_builder_call/2

Make some standard fields non nullable

Generating a GraphQL layer for my front end in TypeScript, I noticed some fields are unnecessarily nullable. To achieve a more pleasant TypeScript experience, I propose the following fields be made non-null:

  • errors as returned by a mutation. In case of no errors, an empty list will be returned. Furthermore, each item in the error list can also be made not null, since it either exists in the list or not, and we don't want nulls in the list.
  • error.fields

In addition to these fields, I would also like to see the _result fields be made non-nullable, as they will always contain the result and errors fields, except in the case of root_level_errors? being enabled, in which case I suppose the _result field should be nullable.

See here for an example of the changes (not ready for merge, just for discussion)

This would be a breaking change to the existing API. We could keep the existing behaviour as the default and add a config setting to enable the new behaviour. The existing behaviour should be considered deprecated; we could warn users to enable the flag, and default to it in v1.0.0.

What do you think?

Support get query that doesn't require an id

The example below will error if you try to query without an id, even though we don't require one.

actions do
  read :current_user do
    filter id: actor(:id)
  end
end

graphql do
  type :post

  queries do
    get :current_user, :current_user
  end
end

I would be happy to do the PR for this if you let me know how you would want it implemented. Maybe a new verb? I could see wanting to keep ID! as a requirement for the queries where it is required.
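
For what it's worth, another issue above ("Allow specifying that query return type can't be nil") uses read_one for the same kind of actor-scoped query, and read_one does not force an id argument; a sketch of that alternative:

graphql do
  type :post

  queries do
    # read_one, unlike get, does not require an id argument
    read_one :current_user, :current_user
  end
end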
