
Network of Terms

The Network of Terms is a search engine for finding terms in terminology sources (such as thesauri, classification systems and reference lists).

Given a textual search query, the Network of Terms searches one or more terminology sources in real-time and returns matching terms, including their labels and URIs. The Network of Terms offers a simple search interface, handles errors gracefully in case a source does not respond well and harmonizes the results to the SKOS data model.

The Network of Terms is intended for managers of heritage information who want to improve the findability of their information by assigning terms from terminology sources used by the institutions in the Dutch Digital Heritage Network. Information managers use the Network of Terms from within their collection registration system.

Schematically, the registration system sends a single query to the Network of Terms, which translates it into a set of queries appropriate for each terminology source. The terms matching the query are harmonized to SKOS and returned to the collection registration system, where information managers can evaluate the results and link their data to the terms:

flowchart TD
  crs(Collection registration systems)
  not(Network of Terms)
  ts1("Terminology source 1"):::ts
  ts2("Terminology source 2"):::ts

  style not fill:#D5E8D4,stroke:#82B366
  style crs fill:#FFF2CC,stroke:#D6B656  
  classDef ts fill:#DAE8FC,stroke:#6C8EBF;
  
  crs -- search query ---> not
  not -. search results -.-> crs
   
  not -- search query ---> ts1
  ts1 -. search results -.-> not
  
  not -- search query ---> ts2
  ts2 -. search results -.-> not

Getting started

For users

If you just want to search the Network of Terms using a web interface, have a look at our demonstrator, a web interface on top of the GraphQL API.

For application developers

If you’re a software developer who wants to implement terms search/lookup in your software (such as collection management systems), you probably want to use the Network of Terms GraphQL API.

For Network of Terms developers

If you want to make changes to the Network of Terms code or catalog, the best way to get started is to run the application locally using Node or in a development Docker container.

Packages

This repository contains the following packages:

Contributing

You’re very welcome to contribute to this repository:

network-of-terms's People

Contributors

coret, ddeboer, dependabot[bot], ennomeijers, kiivihal, lvanwissen, rschalkrce, sdevalk, wmelder


network-of-terms's Issues

Cache results

Makes most sense for URI lookups (because they will have a higher hit ratio) but we may also want to cache search results.

Caching in a shared in-memory store (Redis) would be most efficient. We can begin with caching in each Node runtime separately.

Decided with @sdevalk to do #265 first, then come up with a generic caching solution that applies to term search and lookup as well as reconciliation requests.
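As a sketch of the first step, a per-runtime in-memory cache with a TTL could look like this (the `TermsCache` name and its `get`/`set` API are illustrative assumptions, not the actual implementation):

```typescript
// Hypothetical sketch of a per-Node-runtime in-memory cache with TTL.
// A shared store (Redis) would replace this later.
class TermsCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (entry === undefined) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // Evict expired entries lazily on read.
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

URI lookups would use the looked-up URI as the cache key; search results would need the query plus the source list in the key.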

Bug: empty result set if source contains one term

If a source returns just one term, this term is ignored by the TermsTransformer, yielding an empty result set. For example:

./bin/run sources:query --identifiers cht --query rembrandt --loglevel info

The TermsTransformer wrongly assumes that the source contains more terms:

https://github.com/netwerk-digitaal-erfgoed/network-of-terms-comunica/blob/master/src/services/terms.ts#L37

We need to refactor this code a bit. Point of attention: terms arrive in a stream, so we don't know beforehand how many terms there are.
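The flush problem can be illustrated with a small sketch (the `Quad` type and grouping logic below are illustrative, not the actual TermsTransformer code): when quads arrive grouped by subject, emitting a term only when the *next* subject appears drops the final, or only, term unless there is an explicit flush at the end of the stream:

```typescript
interface Quad { subject: string; predicate: string; object: string; }

// Group a stream of quads into terms by subject. Without the final flush,
// the last (or only) term would never be emitted, which is the bug here.
function groupBySubject(quads: Quad[]): Map<string, Quad[]> {
  const terms = new Map<string, Quad[]>();
  let currentSubject: string | undefined;
  let buffer: Quad[] = [];
  for (const quad of quads) {
    if (currentSubject !== undefined && quad.subject !== currentSubject) {
      terms.set(currentSubject, buffer); // Emit the completed term.
      buffer = [];
    }
    currentSubject = quad.subject;
    buffer.push(quad);
  }
  if (currentSubject !== undefined) {
    terms.set(currentSubject, buffer); // The flush that must not be skipped.
  }
  return terms;
}
```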

Add K3

Kerken, Kloosters and Kapellen (K3).

A SPARQL endpoint should be delivered this week.

Remove unused HDT and TPF distributions

These will not be used by the Network of Terms due to performance issues, so let’s remove them from catalog.ttl. The NoT will use SPARQL only (for now).

Add highlevel documentation of the network-of-terms

As the network-of-terms-api repository is the central repository for the Network of Terms, I would expect to find a high-level overview of its functionality here, perhaps similar to what is published in the FAQ of the front end. This would be very helpful for people outside our network who are interested in our work.

Expose availability of term sources

Term sources may time out (#210) or return erroneous responses. When this happens, it may be useful to store and expose an availability status for each source. That status can then be shown in the UI.

Proposal

  • When a source times out when answering user queries, mark it as unavailable (in an in-memory store).
  • When a source that has previously been marked unavailable responds again to user queries, mark it as available.
  • Expose the availability status in the API so it can be picked up by clients.
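A minimal sketch of such an in-memory availability store (the class and method names are illustrative):

```typescript
// Tracks source availability in memory, toggled by query outcomes.
class SourceAvailability {
  private unavailable = new Set<string>();

  markTimeout(sourceUri: string): void {
    // Marks unavailable after a single timeout; could require n timeouts instead.
    this.unavailable.add(sourceUri);
  }

  markResponded(sourceUri: string): void {
    this.unavailable.delete(sourceUri);
  }

  isAvailable(sourceUri: string): boolean {
    return !this.unavailable.has(sourceUri);
  }
}
```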

Questions

  1. Should we consider a source unavailable after a single timeout? Or n timeouts?
  2. Should we include the status in the existing sources API call or create a separate endpoint?

/cc @EnnoMeijers @sdevalk

Support for searching non-Dutch terms in AAT

Currently the aat.rq search query requires prefLabels to be defined as Dutch using the following non-optional clause:
?uri skosxl:prefLabel ?prefLabel_uri .
?prefLabel_uri dcterms:language aat:300388256 . # Dutch (language)

This leads to the problem that commonly used non-Dutch labels are not found, e.g. "alba amicorum" (http://vocab.getty.edu/aat/300374942), a Latin term commonly used for book categories. It has no Dutch prefLabel, which is probably correct, since it is a Latin term. Is there a way to support finding these terms without flooding the results with hits from many different languages?

Example query returns an error

See the following example query in the README.md:
bin/run sources:query --identifiers nta --query "'Wier*'"

This returns the following error: Error: Invalid SPARQL endpoint (http://data.bibliotheken.nl/sparql) response: undefined

Other example queries have problems as well; please check them and correct the README.

Refactor service instantiation

Instantiate service objects only once, and parameterise all required arguments in function calls on those service objects. This was started in #251 (comment), so ensure we apply this pattern consistently throughout the codebase.
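A sketch of the pattern (the `QueryService` and `Catalog` names are illustrative, not the actual classes): construct the service once with its long-lived collaborators, and pass per-request arguments to each method call.

```typescript
interface Catalog { lookupSource(id: string): string | undefined; }

class QueryService {
  // Long-lived dependencies are injected once, at construction time.
  constructor(private readonly catalog: Catalog) {}

  // Per-request values are parameters of the call, not of the constructor.
  query(sourceId: string, searchTerm: string): string {
    const endpoint = this.catalog.lookupSource(sourceId) ?? "unknown";
    return `${searchTerm} @ ${endpoint}`;
  }
}
```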

Use templates to generate queries

@mielvds suggested Grasp as a way for the Network of Terms to manage its SPARQL queries. Grasp’s use case is different: a single GraphQL query results in a single SPARQL query, whereas in the NoT we have different queries per endpoint.

Still, the idea of using templates to generate SPARQL queries is interesting.

Pros

  1. We can move pre-processing from network-of-terms-api back to the catalog. This results in a better separation of concerns, as the catalog already knows about SPARQL endpoints. The API would no longer supply several query variants. Instead, the catalog calls the query variant functions as template functions.
  2. We can reduce duplication in queries, particularly between lookup and search queries that are usually very similar.
  3. Some operations that are impossible or very complex to express in plain SPARQL can be expressed in the template language.

Cons

  1. Flat .rq files are easy to debug and collaborate on with third parties. Collaborators need to understand the template language and the (TypeScript) functions called from the templates to see what’s going on.
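As an illustration of pro 2, a shared CONSTRUCT fragment could be reused between the search and lookup variants of a query (the fragments and function names below are illustrative, not the catalog's actual API):

```typescript
// Shared CONSTRUCT clause, reused by both query variants.
const constructClause = `CONSTRUCT {
  ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?prefLabel .
}`;

function searchQuery(term: string): string {
  return `${constructClause}
WHERE {
  ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?prefLabel .
  FILTER(CONTAINS(LCASE(?prefLabel), LCASE("${term}")))
}`;
}

function lookupQuery(uri: string): string {
  return `${constructClause}
WHERE {
  ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?prefLabel .
  FILTER(?uri = <${uri}>)
}`;
}
```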

@EnnoMeijers What do you think?

Reconciliation API

https://docs.google.com/document/d/1AUeMAiBW_gNiWZM-OYJW-iY_zaDbalEriIPiDBxuQW4

https://reconciliation-api.github.io/specs/latest/

https://docs.openrefine.org/manual/reconciling

Questions

  • Should we call each source simply by its name (e.g. ‘RKDartists’) or make it clear that the reconciliation is provided not by the source itself but by us (e.g. RKDartists via Termennetwerk)?
  • Should we add our reconciliation sources to the list of public endpoints?
  • Should we have another way for clients to discover our reconciliation service URLs? List of URLs is now available at https://termennetwerk.netwerkdigitaalerfgoed.nl/faq.

Progress

Monorepo structure

A proposal to restructure the Network of Terms repositories and code that we currently have:

  • Rename this repository to network-of-terms.
  • Use NPM 7 workspaces for a monorepo with the following structure:
    • catalog (= network-of-terms-catalog)
    • query (shared query logic), which requires catalog (and Comunica)
    • graphql (or graphql-api), which requires query
    • reconciliation (or reconciliation-api), which requires query
  • Publish both catalog and query to NPM.
  • Archive network-of-terms-catalog.
  • Should we rename the packages from the unwieldy @netwerk-digitaal-erfgoed/network-of-terms-graphql etc. to @network-of-terms/graphql etc.?

A downside is the cascade of package updates if we make changes to catalog.

To make the contents of catalog better pluggable and easier to customise, we could apply a further subdivision:

  • Make catalog purely RDF and add a web server (move TypeScript files from catalog to query).
  • Host catalog as a separate app in the infrastructure; query then reads the catalog’s RDF from a configured HTTP address. Now query no longer has a package dependency on catalog.
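The proposed structure could be expressed in the root package.json roughly as follows, assuming the packages live under a packages/ directory (a sketch, not the final layout):

```json
{
  "name": "network-of-terms",
  "private": true,
  "workspaces": [
    "packages/catalog",
    "packages/query",
    "packages/graphql",
    "packages/reconciliation"
  ]
}
```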

Make queries fault-tolerant

When a single source fails, it causes all results to fail. Probably related to our use of Promise.all().

For instance:

query Terms {
  terms(sources: ["http://vocab.getty.edu/aat/sparql", "https://data.netwerkdigitaalerfgoed.nl/NMVW/thesaurus/sparql"], query: "griek*") {
    source {
      uri
      name
      creators {
        uri
        name
        alternateName
      }
    }
    terms {
      uri
      prefLabel
      altLabel
      hiddenLabel
      scopeNote
    }
  }
}
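The failure isolation could be sketched with Promise.allSettled, which settles every source's promise independently instead of rejecting on the first failure (the `querySource` parameter and result shape below are assumptions, not the actual code):

```typescript
type SourceResult =
  | { source: string; terms: string[] }
  | { source: string; error: string };

// Query all sources; a failing source yields an error entry instead of
// rejecting the whole result set, as Promise.all would.
async function queryAll(
  sources: string[],
  querySource: (source: string) => Promise<string[]>
): Promise<SourceResult[]> {
  const settled = await Promise.allSettled(sources.map(querySource));
  return settled.map((result, i) =>
    result.status === "fulfilled"
      ? { source: sources[i], terms: result.value }
      : { source: sources[i], error: String(result.reason) }
  );
}
```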

Queries to Wikidata fail

[2021-07-07T07:07:30.346Z]  INFO: Requesting https://query.wikidata.org/sparql {
  headers: {
    accept: 'application/n-quads,application/trig;q=0.95,application/ld+json;q=0.9,application/n-triples;q=0.8,text/turtle;q=0.6,application/rdf+xml;q=0.5,application/json;q=0.45,text/n3;q=0.35,application/xml;q=0.3,text/xml;q=0.3,image/svg+xml;q=0.3,text/html;q=0.2,application/xhtml+xml;q=0.18',
    'user-agent': 'Comunica/actor-http-native (Node.js v14.15.2; darwin)'
  },
  method: 'GET',
  actor: 'https://linkedsoftwaredependencies.org/bundles/npm/@comunica/actor-init-sparql/^1.0.0/config/sets/http.json#myHttpFetcher'
}
[[2021-07-07T07:07:43.010Z]  INFO: Identified as sparql source: https://query.wikidata.org/sparql {
  actor: 'https://linkedsoftwaredependencies.org/bundles/npm/@comunica/actor-init-sparql/^1.0.0/config/sets/resolve-sparql.json#mySparqlQuadPatternResolver'
}
[2021-07-07T07:07:43.014Z]  INFO: Requesting https://query.wikidata.org/bigdata/namespace/wdq/sparql {
  headers: {
    accept: 'application/sparql-results+json;q=1.0,application/sparql-results+xml;q=0.7',
    'content-type': 'application/x-www-form-urlencoded',
    'user-agent': 'Comunica/actor-http-native (Node.js v14.15.2; darwin)'
  },
  method: 'POST',
  actor: 'https://linkedsoftwaredependencies.org/bundles/npm/@comunica/actor-init-sparql/^1.0.0/config/sets/http.json#myHttpFetcher'
}
[2021-07-07T07:07:43.015Z]  INFO: Requesting https://query.wikidata.org/bigdata/namespace/wdq/sparql {
  headers: {
    accept: 'application/sparql-results+json;q=1.0,application/sparql-results+xml;q=0.7',
    'content-type': 'application/x-www-form-urlencoded',
    'user-agent': 'Comunica/actor-http-native (Node.js v14.15.2; darwin)'
  },
  method: 'POST',
  actor: 'https://linkedsoftwaredependencies.org/bundles/npm/@comunica/actor-init-sparql/^1.0.0/config/sets/http.json#myHttpFetcher'
}
Invalid SPARQL endpoint (https://query.wikidata.org/bigdata/namespace/wdq/sparql) response: undefined

Possible cause: breaking change in fetch-sparql-endpoint 2.0.0.

Lock dependencies

I checked out this package and it seems not to be working correctly with the latest vendor versions. 😢

Suggestions:

  • Check in the package-lock.json to ensure we get a working setup any time in the future.
  • Stick to Comunica 1.12.* for now, unless comunica/comunica#669 (a regression in 1.13.0) is fixed.

Extend STCN Drukkers information

For the STCN Drukkers thesaurus new entities are defined when a printer moves from one place to another. For example printer http://data.bibliotheken.nl/doc/thes/p075573636 refers to Plantijn working in Antwerp and http://data.bibliotheken.nl/doc/thes/p075556251 refers to Plantijn working in Leiden. In the data this is expressed through the schema:addressLocality field. In the results the difference between both entities is not visible. My suggestion is to add schema:addressLocality (if available) to the skos:scopeNote, stating the locality of the printer.

Sort results alphabetically

This is to prevent questions about the order in which search results appear. Our SPARQL CONSTRUCT queries preclude us from keeping the source’s ordering or adding our own ORDER BY, so the sort should happen on the client side (in the resolvers?).
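A client-side sort in the resolvers could be sketched as follows, assuming terms carry a prefLabel (the `Term` shape is illustrative); localeCompare keeps diacritics and case in a sensible order:

```typescript
interface Term { uri: string; prefLabel: string; }

// Sort a copy of the terms alphabetically by prefLabel, locale-aware.
function sortTerms(terms: Term[]): Term[] {
  return [...terms].sort((a, b) =>
    a.prefLabel.localeCompare(b.prefLabel, "nl", { sensitivity: "base" })
  );
}
```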

Infer relations to improve browsing

Investigate the possibilities for inferring relations and add them to the CONSTRUCT queries. In many cases the relations are only unidirectional in the data. When browsing on URIs we could easily infer the inverse relations and present them so the user can follow the same path back.
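A post-processing sketch of the idea: for every skos:broader triple in the result, add the corresponding skos:narrower triple, and vice versa (the `Triple` type is illustrative; in practice this could also be done inside the CONSTRUCT queries):

```typescript
interface Triple { subject: string; predicate: string; object: string; }

const INVERSES: Record<string, string> = {
  "http://www.w3.org/2004/02/skos/core#broader":
    "http://www.w3.org/2004/02/skos/core#narrower",
  "http://www.w3.org/2004/02/skos/core#narrower":
    "http://www.w3.org/2004/02/skos/core#broader",
};

// Add the inverse of each unidirectional hierarchy triple.
function withInverses(triples: Triple[]): Triple[] {
  const result = [...triples];
  for (const t of triples) {
    const inverse = INVERSES[t.predicate];
    if (inverse !== undefined) {
      result.push({ subject: t.object, predicate: inverse, object: t.subject });
    }
  }
  return result;
}
```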

Add tests

The catalog is tested separately. What kind of tests can we add for the API?

  1. Start with high-level, end-to-end tests against the GraphQL endpoint to get as much coverage as quickly as possible (forgetting about error localisation for now).
  2. Can we provide a mock catalog with a local endpoint so we don’t depend on the availability of sources for our tests to pass?
  3. The Register already has some Fastify tests, so have a look at those.

SPARQL query limit haphazardly breaks result sets

For example, querying AAT for ‘jan’ with our default LIMIT of 1000 chops off the result:

<http://vocab.getty.edu/aat/300164767> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept> .
<http://vocab.getty.edu/aat/300164767> <http://www.w3.org/2004/02/skos/core#prefLabel> "luchtvervuiling binnenshuis"@nl .
<http://vocab.getty.edu/aat/300164767> <http://www.w3.org/2004/02/skos/core#broader> <http://vocab.getty.edu/aat/term/1000486928-nl> .
<http://vocab.getty.edu/aat/term/1000486928-nl> <http://www.w3.org/2004/02/skos/core#prefLabel> "<vervuiling naar locatie of context>"@nl .
<http://vocab.getty.edu/aat/300164767> <http://www.w3.org/2004/02/skos/core#broader> <http://vocab.getty.edu/aat/term/1000486929-nl> .

As you can see, the last broader object (http://vocab.getty.edu/aat/term/1000486929-nl) gets no prefLabel, which breaks our GraphQL resolver.

As far as I know, this is normal behaviour in SPARQL: the LIMIT applies to the number of triples returned rather than the number of results (subjects).

How should we solve this?

  • ignore related terms for which we have no prefLabel;
  • or is there a way to solve this on the SPARQL query level?
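The first option could be sketched as a small filter before the resolver runs (the `RelatedRef` shape is illustrative): drop related/broader/narrower references for which the truncated result set contains no prefLabel.

```typescript
interface RelatedRef { uri: string; prefLabel?: string; }

// Ignore references whose prefLabel was cut off by the triple-level LIMIT.
function dropUnlabelled(refs: RelatedRef[]): RelatedRef[] {
  return refs.filter(ref => ref.prefLabel !== undefined);
}
```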

Return errors when source is unreachable

At least a certain set of errors are scoped to the sources we query, which makes GraphQL’s top-level error property less suitable. Therefore, add types to our schema, introducing a union type Terms | TimeoutError | ServerError (or something similar). Clients can then conditionally retrieve results.

For inspiration: https://medium.com/@sachee/200-ok-error-handling-in-graphql-7ec869aec9bc and https://blog.logrocket.com/handling-graphql-errors-like-a-champ-with-unions-and-interfaces/.

Try to keep the query interface as simple as possible and not break BC.
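On the client or resolver side, such a union could be modelled as a discriminated union on `__typename`, mirroring a GraphQL `union Result = Terms | TimeoutError | ServerError` (the exact type names are still to be decided; this is a sketch):

```typescript
type Result =
  | { __typename: "Terms"; terms: { uri: string; prefLabel: string }[] }
  | { __typename: "TimeoutError"; message: string }
  | { __typename: "ServerError"; message: string };

// Exhaustive handling: the compiler forces clients to consider every case.
function summarize(result: Result): string {
  switch (result.__typename) {
    case "Terms":
      return `${result.terms.length} terms`;
    case "TimeoutError":
    case "ServerError":
      return `error: ${result.message}`;
  }
}
```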

Look up terms by URI

Experimented a bit, and a query like the following, which combines both query search and URI lookup, seems to work:

CONSTRUCT {
  ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept>.
  ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?prefLabel.
  ?uri <http://www.w3.org/2004/02/skos/core#altLabel> ?altLabel.
  ?uri <http://www.w3.org/2004/02/skos/core#hiddenLabel> ?hiddenLabel.
  ?uri <http://www.w3.org/2004/02/skos/core#scopeNote> ?scopeNote.
  ?uri <http://www.w3.org/2004/02/skos/core#broader> ?broader_uri.
  ?uri <http://www.w3.org/2004/02/skos/core#narrower> ?narrower_uri.
  ?uri <http://www.w3.org/2004/02/skos/core#related> ?related_uri.
  ?uri <http://www.w3.org/2004/02/skos/core#exactMatch> ?exactMatch.
  ?broader_uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?broader_prefLabel.
  ?narrower_uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?narrower_prefLabel.
  ?related_uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?related_prefLabel.
}
WHERE {
  {
    ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept>;
      <http://www.w3.org/2004/02/skos/core#inScheme> <http://data.bibliotheken.nl/id/scheme/brinkman>.
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?prefLabel.
      FILTER((LANG(?prefLabel)) = "nl")
    }
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#altLabel> ?altLabel.
      FILTER((LANG(?altLabel)) = "nl")
    }
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#hiddenLabel> ?hiddenLabel.
      FILTER((LANG(?hiddenLabel)) = "nl")
    }
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#scopeNote> ?scopeNote.
      FILTER((LANG(?scopeNote)) = "nl")
    }
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#broader> ?broader_uri.
      ?broader_uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?broader_prefLabel.
      FILTER((LANG(?broader_prefLabel)) = "nl")
    }
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#narrower> ?narrower_uri.
      ?narrower_uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?narrower_prefLabel.
      FILTER((LANG(?narrower_prefLabel)) = "nl")
    }
    OPTIONAL {
      ?uri <http://www.w3.org/2004/02/skos/core#related> ?related_uri.
      ?related_uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?related_prefLabel.
      FILTER((LANG(?related_prefLabel)) = "nl")
    }
    OPTIONAL { ?uri <http://www.w3.org/2004/02/skos/core#exactMatch> ?exactMatch. }
    ?uri ?predicate ?label.
    VALUES ?predicate {
      <http://www.w3.org/2004/02/skos/core#prefLabel>
      <http://www.w3.org/2004/02/skos/core#altLabel>
    }
    FILTER(((LANG(?label)) = "nl") && (IF("" != "", bif:contains(?label, ""), false) || ?uri IN (<http://data.bibliotheken.nl/id/thes/p07568439X>)))
  }
}
LIMIT 1000

The important bit is:

    FILTER(((LANG(?label)) = "nl") && (IF("" != "", bif:contains(?label, ""), false) || ?uri IN (<http://data.bibliotheken.nl/id/thes/p07568439X>)))

I also had to move the VALUES clause to the bottom of the query to prevent an (unexplained) Virtuoso error: 37000 Error SP031: SPARQL compiler: No suitable triple pattern is found for a variable $label in special predicate bif:contains() at line 59 of query.

The question is: is this query still maintainable enough?

Query metrics

Monitor and (publicly) graph the number of queries executed per term source. We could use Prometheus.
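A minimal in-memory sketch of per-source query counting; in production this would be a Prometheus counter labelled by source (e.g. via prom-client), but the `QueryCounter` class below is purely illustrative:

```typescript
// Count executed queries per terminology source.
class QueryCounter {
  private counts = new Map<string, number>();

  increment(sourceUri: string): void {
    this.counts.set(sourceUri, (this.counts.get(sourceUri) ?? 0) + 1);
  }

  count(sourceUri: string): number {
    return this.counts.get(sourceUri) ?? 0;
  }
}
```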

Abort request to slow or unreachable terminology source

A terminology source can be slow or unreachable. If it takes too much time to query the source, we should abort the request and inform the user about this. The default timeout of Comunica is 60 seconds, but this is too long for our use case. We should change this to e.g. 5 or 10 seconds.
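A per-source timeout could be sketched with Promise.race, assuming the query is promise-based (with fetch-based sources an AbortController could additionally cancel the underlying request); the 10-second default comes from the issue text, and the function name is illustrative:

```typescript
// Reject if the wrapped query does not settle within timeoutMs.
function withTimeout<T>(promise: Promise<T>, timeoutMs = 10_000): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Source timed out after ${timeoutMs} ms`)),
      timeoutMs
    );
  });
  // Clear the timer so a fast response doesn't leave a pending timeout.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```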

Error: "Cannot read property 'id' of undefined"

If I send the query underneath to the Network of Terms...

https://termennetwerk-api.netwerkdigitaalerfgoed.nl/playground

query Terms {
  terms(
    sources: [
      "https://data.cultureelerfgoed.nl/PoolParty/sparql/term/id/cht"
    ]
    query: "kas"
  ) {
    source {
      uri
      name
      creators {
        uri
        name
        alternateName
      }
    }
    result {
      __typename
      ... on Terms {
        terms {
          uri
          prefLabel
          altLabel
          hiddenLabel
          scopeNote
        }
      }
      ... on Error {
        message
      }
    }
  }
}

...then an error is returned: Cannot read property 'id' of undefined.

Bug: quoting `bif:contains` breaks certain queries

Last week's fix introduces a new bug:

If I execute the query...

bin/run sources:query --identifiers nta --query "Wieringa" --loglevel info

...the relevant part of the SPARQL query looks like this:

?label <bif:contains> "'Wieringa'"^^<http://www.w3.org/2001/XMLSchema#string>.

Although the apostrophe isn't necessary, this query returns terms. However, if I execute the query...

bin/run sources:query --identifiers nta --query "Wieringa OR Mulisch" --loglevel info

... the relevant part of the SPARQL query looks like this:

?label <bif:contains> "'Wieringa OR Mulisch'"^^<http://www.w3.org/2001/XMLSchema#string>.

This query doesn't return terms. The previous version of the code - without the fix - did.

The part of the SPARQL query should look like this, without apostrophe:

?label <bif:contains> "Wieringa OR Mulisch"^^<http://www.w3.org/2001/XMLSchema#string>.

My guess is that we should find a way to add the apostrophes only if the user’s search query doesn’t contain operators such as OR or AND.
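That guess could be sketched as follows (an illustration of the idea, not the actual fix; the function name is made up):

```typescript
// Wrap the search string in apostrophes only when it contains no
// bif:contains operators such as AND or OR.
function toBifContainsArgument(query: string): string {
  const hasOperators = /\b(AND|OR)\b/.test(query);
  return hasOperators ? query : `'${query}'`;
}
```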

Identify datasets by their IRI rather than identifier

The dataset identifiers that we now use are made-up (albeit common) abbreviations of dataset names, for instance ‘rkdartists’ and ‘cht’. Should we replace these identifiers with proper IRIs?

Some considerations:

  • Short strings like our current identifiers look more familiar to GraphQL users than IRIs.
  • When the number of term sources grows, identifier collisions become more likely, especially if they are provided by term source managers themselves rather than NDE.
  • IRIs can be resolved for further information about the dataset, whereas identifiers are a dead end.

@sdevalk Should we allow identification only by IRI or also keep supporting the identifiers?

Searching on terms with diacritical marks

When I perform the query:

{"query":"query Terms ($sources: [ID]!, $query: String!) {\n                terms (sources: $sources query: $query) {\n                  source {\n                    name\n                    uri\n                    alternateName\n                    creators {\n                      name\n                      alternateName\n                    }\n                  }\n                  result {\n                    ... on Terms {\n                      terms {\n                        uri\n                        prefLabel\n                        altLabel\n                        hiddenLabel\n                        scopeNote\n                        seeAlso\n                        broader {\n                          uri\n                          prefLabel\n                        }\n                        narrower {\n                          uri\n                          prefLabel\n                        }\n                        related {\n                          uri\n                          prefLabel\n                        }\n                      }\n                    }\n                    ... on Error {\n                      __typename\n                      message\n                    }\n                  }\n                }\n              }","variables":{"sources":["https://data.netwerkdigitaalerfgoed.nl/rkd/rkdartists/sparql"],"query":"Joan Miró"}}

I get the error:

Invalid SPARQL endpoint response from https://api.data.netwerkdigitaalerfgoed.nl/datasets/rkd/rkdartists/services/rkdartists/sparql (HTTP status 500):Virtuoso 37000 Error XM029: Free-text expression, line 0: Invalid character in free-text search expression, it may not appear outside quoted string at �SPARQL query:define sql:big-data-const 0 define output:format "HTTP+TTL text/turtle" CONSTRUCT {  ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept>.  ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?schema_name.  ?uri <http://www.w3.org/2004/02/skos/core#altLabel> ?schema_alternateName.  ?uri <http://www.w3.org/2004/02/skos/core#scopeNote> ?schema_description.}WHERE {  {    ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ?type.    VALUES ?type {      <http://schema.org/Person>      <http://schema.org/Organization>    }    ?uri ?name ?label.    VALUES ?name {      <http://schema.org/name>      <http://schema.org/alternateName>    }    OPTIONAL { ?uri <http://schema.org/name> ?schema_name. }    OPTIONAL { ?uri <http://schema.org/alternateName> ?schema_alternateName. }    OPTIONAL { ?uri <http://schema.org/description> ?schema_description. }    FILTER(<bif:contains>(?label, REPLACE(REPLACE("Joan Miró", "[.,]", " "), "(?<!AND)(?<!OR)[[:space:]]+(?!AND)(?!OR)(?!$)(?![[:space:]])", " AND ", "i")))  }}LIMIT 1000

It looks like terms with diacritical marks must be quoted when passed to bif:contains. Manually testing with curl -H "Content-Type: application/sparql-query" -d '@/tmp/query.txt' https://api.data.netwerkdigitaalerfgoed.nl/datasets/rkd/rkdartists/services/rkdartists/sparql and the following query:

CONSTRUCT
  {    ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept>.
    ?uri <http://www.w3.org/2004/02/skos/core#prefLabel> ?schema_name.
    ?uri <http://www.w3.org/2004/02/skos/core#altLabel> ?schema_alternateName.
    ?uri <http://www.w3.org/2004/02/skos/core#scopeNote> ?schema_description.
  }
WHERE
  {
    {
      ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ?type.
      VALUES ?type {      <http://schema.org/Person>      <http://schema.org/Organization>    }
      ?uri ?name ?label.
      VALUES ?name {      <http://schema.org/name>      <http://schema.org/alternateName>    }
      OPTIONAL { ?uri <http://schema.org/name> ?schema_name. }
      OPTIONAL { ?uri <http://schema.org/alternateName> ?schema_alternateName. }
      OPTIONAL { ?uri <http://schema.org/description> ?schema_description. }
      FILTER(<bif:contains>(?label, "Joan AND 'Miró'"))
    }
  }
LIMIT 1000

does work, while without the quotes it produces an error similar to the one above.
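Based on this finding, a fix could be sketched as quoting each word that contains non-ASCII characters (such as diacritics) while leaving operators like AND and OR untouched; this is illustrative only, not the actual implementation:

```typescript
// Quote words containing non-ASCII characters for bif:contains,
// leaving free-text operators (AND, OR) as-is.
function quoteDiacritics(query: string): string {
  return query
    .split(/\s+/)
    .map(word =>
      word !== "AND" && word !== "OR" && /[^\x00-\x7F]/.test(word)
        ? `'${word}'`
        : word
    )
    .join(" ");
}
```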

Authentication for terminology sources

The new GTAA endpoint will require an authentication key. Currently, the Network of Terms doesn’t support authenticated endpoints.

@wmelder Can you elaborate a bit on what that authentication will look like? A single API key in an HTTP header or something else?
