atomgraph / linkeddatahub

The low-code Knowledge Graph application platform. Apache license.

Home Page: https://atomgraph.github.io/LinkedDataHub/

License: Apache License 2.0

Shell 11.38% XSLT 45.48% Dockerfile 0.20% Java 36.50% JavaScript 2.58% CSS 3.38% HTML 0.49%
rdf sparql linked-data linked-open-data knowledge-graph declarative triplestore semantic-web xslt ontology-driven-development

linkeddatahub's Introduction

The low-code Knowledge Graph application platform

LinkedDataHub (LDH) is open source software you can use to manage data, create visualizations and build apps on RDF Knowledge Graphs.

LinkedDataHub screenshots

What's new in LinkedDataHub v3? Watch this video for a feature overview.

We started the project with the intention of using it for Linked Data publishing, but gradually realized that we had built a multi-purpose data-driven platform.

We are building LinkedDataHub primarily for:

  • researchers who need an RDF-native data environment that follows the FAIR principles and can consume and collect Linked Data and SPARQL documents
  • developers who are looking for a declarative full-stack framework for Knowledge Graph application development, with an out-of-the-box UI and API

What makes LinkedDataHub unique is its completely data-driven architecture: applications and documents are defined as data, managed using a single generic HTTP API and presented using declarative technologies. The default application structure and user interface are provided, but they can be completely overridden and customized. Unless custom server-side processing is required, no imperative code such as Java or JavaScript needs to be involved at all.
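The single generic HTTP API means every document can be read (and written) over plain HTTP with RDF content negotiation. A minimal illustration as a dry run (`echo` prints the command instead of sending it); the URL assumes the default local setup described in the Setup section:

```shell
# Dry run of fetching a document as Turtle via content negotiation.
# -k skips verification of the self-signed certificate; drop `echo` to actually send it.
echo curl -k -H 'Accept: text/turtle' https://localhost:4443/
```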

Follow the Get started guide to learn LinkedDataHub. The setup and basic configuration sections below should get you up and running.

LinkedDataHub is also available as a free AWS Marketplace product!
It only takes a few clicks and a short form to install the product into your own AWS account. No manual setup or configuration necessary!

Setup


Prerequisites

Steps

  1. Fork this repository and clone the fork into a folder
  2. In the folder, create an .env file and fill out the missing values (you can use .env_sample as a template). For example:
    COMPOSE_CONVERT_WINDOWS_PATHS=1
    COMPOSE_PROJECT_NAME=linkeddatahub
    
    PROTOCOL=https
    HTTP_PORT=81
    HTTPS_PORT=4443
    HOST=localhost
    ABS_PATH=/
    
    [email protected]
    OWNER_GIVEN_NAME=John
    OWNER_FAMILY_NAME=Doe
    OWNER_ORG_UNIT=My unit
    OWNER_ORGANIZATION=My org
    OWNER_LOCALITY=Copenhagen
    OWNER_STATE_OR_PROVINCE=Denmark
    OWNER_COUNTRY_NAME=DK
    
  3. Set up the SSL certificates/keys by running the following from the command line (replace $owner_cert_pwd and $secretary_cert_pwd with your own passwords):
    ./scripts/setup.sh .env ssl $owner_cert_pwd $secretary_cert_pwd 3650
    
    The script will create an ssl sub-folder where the SSL certificates and/or public keys will be placed.
  4. Launch the application services by running the following from the command line:
    docker-compose up --build
    
    It will build LinkedDataHub's Docker image, start its container and mount the following sub-folders:
    • data where the triplestore(s) will persist RDF data
    • uploads where LDH stores content-hashed file uploads
    The first startup should take around half a minute as the datasets are loaded into the triplestores. After a successful startup, the last line of the Docker log should read something like:
    linkeddatahub_1     | 09-Feb-2021 14:18:10.536 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [32609] milliseconds
    
  5. Install ssl/owner/keystore.p12 into a web browser of your choice (password is the $owner_cert_pwd value supplied to setup.sh)
    • Google Chrome: Settings > Advanced > Manage Certificates > Import...
    • Mozilla Firefox: Options > Privacy > Security > View Certificates... > Import...
    • Apple Safari: The file is installed directly into the operating system. Open the file and import it using the Keychain Access tool (drag it to the local section).
    • Microsoft Edge: Does not support certificate management, you need to install the file into Windows. Read more here.
  6. Open https://localhost:4443/ in that web browser
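Step 4 above mentions that the uploads sub-folder holds content-hashed files. The exact hash function and directory layout LDH uses are internal details; the following sketch only illustrates the general idea of content addressing, where a file's storage name is derived from a digest of its bytes:

```shell
# Illustration only: derive a storage name from the file's content digest.
# (sha1sum is used here purely as an example; LDH's actual scheme may differ.)
printf 'hello' > /tmp/upload.bin
sha1sum /tmp/upload.bin | cut -d' ' -f1   # the digest would name the stored file
```

Because the name depends only on the content, uploading the same file twice yields the same name, deduplicating storage for free.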

Notes

  • It can take up to a minute before the web server is available, because the nginx server depends on a healthy LinkedDataHub container and the healthcheck runs every 20 s
  • You will likely get a browser warning such as Your connection is not private in Chrome or Warning: Potential Security Risk Ahead in Firefox due to the self-signed server certificate. Ignore it: click Advanced and Proceed or Accept the risk to proceed.
    • If this option does not appear in Chrome (as observed on some macOS systems), you can open chrome://flags/#allow-insecure-localhost, switch Allow invalid certificates for resources loaded from localhost to Enabled and restart Chrome
  • .env_sample and .env files might be invisible in the macOS Finder, which hides filenames starting with a dot. You can still create them from the Terminal, however.
  • On Linux your user may need to be a member of the docker group. Add it using
sudo usermod -aG docker ${USER}

and re-login with your user. An alternative, but not recommended, is to run

sudo docker-compose up

Configuration


Base URI

A common case is changing the base URI from the default https://localhost:4443/ to your own.

Let's use https://ec2-54-235-229-141.compute-1.amazonaws.com/linkeddatahub/ as an example. We need to split the URI into its components and set them in the .env file using the following parameters:

PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
HOST=ec2-54-235-229-141.compute-1.amazonaws.com
ABS_PATH=/linkeddatahub/

ABS_PATH is required, even if it's just /.
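To double-check the split, the following sketch reassembles the base URI from the components above. It assumes the conventional rule that the default port (443 for https) is omitted from the URI:

```shell
# Reassemble the base URI from the .env components shown above.
PROTOCOL=https
HTTPS_PORT=443
HOST=ec2-54-235-229-141.compute-1.amazonaws.com
ABS_PATH=/linkeddatahub/

# Only include the port when it differs from the protocol default (assumption).
PORT=""
if [ "$PROTOCOL" = "https" ] && [ "$HTTPS_PORT" != "443" ]; then
    PORT=":$HTTPS_PORT"
fi
BASE_URI="${PROTOCOL}://${HOST}${PORT}${ABS_PATH}"
echo "$BASE_URI"
```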

Dataspaces

Dataspaces are configured in config/system-varnish.trig. Relative URIs will be resolved against the base URI configured in the .env file.

⚠️ Do not use blank nodes to identify applications or services. We recommend using the urn: URI scheme, since LinkedDataHub application resources are not accessible under their own dataspace.

Environment

LinkedDataHub supports a range of configuration options that can be passed as environment parameters in docker-compose.yml. The most common ones are:

CATALINA_OPTS
Tomcat's command line options
SELF_SIGNED_CERT
true if the server certificate is self-signed
SIGN_UP_CERT_VALIDITY
Validity of the WebID certificates of signed up users (not the owner's)
IMPORT_KEEPALIVE
The period for which a data import can keep an HTTP connection open before it times out, in ms. The larger the files being imported, the longer this needs to be for the import to complete.
MAX_CONTENT_LENGTH
Maximum allowed size of the request body, in bytes
MAIL_SMTP_HOST
Hostname of the mail server
MAIL_SMTP_PORT
Port number of the mail server
GOOGLE_CLIENT_ID
OAuth 2.0 Client ID from Google. When provided, enables the Login with Google authentication method.
GOOGLE_CLIENT_SECRET
Client secret from Google

The options are described in more detail in the configuration documentation.
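As a sketch of how such an option might be set without editing docker-compose.yml directly, you could use a Compose override file. The service name linkeddatahub and the 100 MB value are assumptions for illustration (the file is written to /tmp here; in practice it would sit next to docker-compose.yml):

```shell
# Sketch: raise MAX_CONTENT_LENGTH via a docker-compose override file.
# Service name and the 104857600 (100 MB) value are examples only.
cat > /tmp/docker-compose.override.yml <<'EOF'
services:
  linkeddatahub:
    environment:
      - MAX_CONTENT_LENGTH=104857600
EOF
grep -c 'MAX_CONTENT_LENGTH=104857600' /tmp/docker-compose.override.yml
```

Compose merges an override file named docker-compose.override.yml with the base file automatically, so the original stays untouched.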

Reset

If you need to start fresh and wipe the existing setup (e.g. after configuring a new base URI), you can do that using

sudo rm -rf data uploads && docker-compose down -v

⚠️ This will remove the persisted data and files as well as Docker volumes.

CLI

LinkedDataHub CLI wraps the HTTP API into a set of shell scripts with convenient parameters. The scripts can be used for testing, automation, scheduled execution and the like. Performing actions via the CLI is usually much quicker than through the user interface, and the actions are easier to reproduce.

The scripts can be found in the scripts subfolder.

⚠️ The CLI scripts internally use Jena's CLI commands. Set up the Jena environment before running the scripts.

An environment variable JENA_HOME is used by all the command line tools to configure the class path automatically for you. You can set this up as follows:

On Linux / Mac

export JENA_HOME="/path/to/apache-jena"   # the directory you downloaded Jena to
export PATH="$PATH:$JENA_HOME/bin"

Sample applications

Third party

  • KGDN - an open-source, collaborative project documenting RDF Knowledge Graph technologies, including RDF, SPARQL, OWL, and SHACL
  • LDH Uploader - a collection of shell scripts for uploading a file or a directory of files to a LinkedDataHub instance, by @tmciver

These demo applications can be installed into a LinkedDataHub instance using the provided CLI scripts.

⚠️ Before running app installation scripts that use LinkedDataHub's CLI scripts, set the SCRIPT_ROOT environment variable to the scripts subfolder of your LinkedDataHub fork or clone. For example:

export SCRIPT_ROOT="/c/Users/namedgraph/WebRoot/AtomGraph/LinkedDataHub/scripts"

How to get involved

Test suite

LinkedDataHub includes an HTTP test suite. The server implementation is also covered by the Processor test suite.


Dependencies

Browser

Java

Docker

Support

Please report issues if you've encountered a bug or have a feature request.

Commercial consulting, development, and support are available from AtomGraph.

Community

linkeddatahub's People

Contributors

disordered, labra, namedgraph, tmciver


linkeddatahub's Issues

"Get started / Setup guide" incomplete in Documentation

When I launched with Docker, the linkeddatahub container crashed with the error Admin base URI and/or admin quad store could not be extracted from /WEB-INF/classes/com/atomgraph/linkeddatahub/system.trig for root app. The remaining containers started properly.

Container responses get cached

After creating a child document, the container view is not updated. Varnish returns a stale cached response.

There used to be code that BANs (removes from the Varnish cache) all objects with the container URL after a child document is created?
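For reference, invalidation along those lines is often done with Varnish's ban mechanism. The following dry-run sketch assumes a VCL that maps a custom BAN request method to a ban on the request URL (not stock Varnish behaviour); the container URL is hypothetical:

```shell
# Dry run: `echo` prints the invalidation request instead of sending it.
# Assumes a VCL snippet that turns a BAN-method request into a cache ban.
CONTAINER_URL="https://localhost:4443/my-container/"   # hypothetical
echo curl -k -X BAN "$CONTAINER_URL"
```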

Container ordering

We need a mechanism to specify the ordering of children in containers. Currently it is controlled using ORDER BY in the container's SELECT query, and while that works correctly, the ordering is generally lost after we wrap the query into DESCRIBE and retrieve a graph, which is unordered. Therefore a secondary sort needs to be done in XSLT, and to be able to sort by properties and their values, we need to know the URI of the property used as the sort key.

URL format question

Using https://linkeddatahub.com/proxml/test/?filterRegex=label&uri=http://topbraid.org/examples/kennedys#AlfredTucker
I get the full kennedys dataset as the result.
Using https://linkeddatahub.com/demo/iswc-2017/?filterRegex=label&uri=https%3A%2F%2Fw3id.org%2Fscholarlydata%2Fperson%2Fmartin-voigt
I get info only on martin-voigt.
Is this different behaviour due to the difference between # and / URLs, or are different templates triggered? If yes, how can I find this out in the most efficient way?

Failed to run properly on Ubuntu + Docker

docker-compose fails to start for the repository on an Ubuntu 18.04 host system.

Docker version 18.09.7, build 2d0083d

Steps to reproduce:

  1. Clone repository as of f6262e5 commit
  2. Create .env with credentials (leave BASE_URI=https://localhost:4443/)
  3. Run docker-compose up
  4. Witness the following logs:
sudo docker-compose up
Creating linkeddatahub_email-server_1 ...
Creating linkeddatahub_nginx_1 ...
Creating linkeddatahub_fuseki-end-user_1 ...
Creating linkeddatahub_linkeddatahub_1 ...
Creating linkeddatahub_email-server_1
Creating linkeddatahub_fuseki-admin_1 ...
Creating linkeddatahub_nginx_1
Creating linkeddatahub_fuseki-end-user_1
Creating linkeddatahub_linkeddatahub_1
Creating linkeddatahub_nginx_1 ... done
Attaching to linkeddatahub_email-server_1, linkeddatahub_fuseki-admin_1, linkeddatahub_linkeddatahub_1, linkeddatahub_fuseki-end-user_1, linkeddatahub_nginx_1
email-server_1     | + sed -ri '
email-server_1     |    s/^#?(dc_local_interfaces)=.*/\1='\''[0.0.0.0]:25 ; [::0]:25'\''/;
email-server_1     |    s/^#?(dc_other_hostnames)=.*/\1='\'''\''/;
email-server_1     |    s/^#?(dc_relay_nets)=.*/\1='\''172.23.0.4\/16'\''/;
email-server_1     |    s/^#?(dc_eximconfig_configtype)=.*/\1='\''internet'\''/;
email-server_1     | ' /etc/exim4/update-exim4.conf.conf
email-server_1     | + update-exim4.conf -v
email-server_1     | using non-split configuration scheme from /etc/exim4/exim4.conf.template
email-server_1     |     1 LOG: MAIN
email-server_1     |     1   exim 4.92 daemon started: pid=1, -q15m, listening for SMTP on port 25 (IPv4)
fuseki-admin_1     | Starting temporary server
fuseki-admin_1     | Temporary server started.
linkeddatahub_1    | ### Generating server certificate
fuseki-end-user_1  | Starting temporary server
fuseki-end-user_1  | Temporary server started.
nginx_1            | ### Waiting for linkeddatahub...
nginx_1            | ### linkeddatahub responded
fuseki-admin_1     | http://localhost:3333/ds/ not responding, exiting...
linkeddatahub_fuseki-admin_1 exited with code 1
fuseki-end-user_1  | http://localhost:3333/ds/ not responding, exiting...
linkeddatahub_fuseki-end-user_1 exited with code 1
linkeddatahub_1    |
linkeddatahub_1    | ### Quad store URL of the root admin service: http://fuseki-admin:3030/ds/
linkeddatahub_1    |
linkeddatahub_1    | ### Secretary's WebID URI: https://localhost:4443/admin/acl/agents/e413f97b-15ee-47ea-ba65-4479aa7f1f9e/#this
linkeddatahub_1    |
linkeddatahub_1    | ### Secretary WebID certificate's DName attributes: CN=LinkedDataHub,OU=LinkedDataHub,O=AtomGraph,L=Copenhagen,ST=Denmark,C=DK
linkeddatahub_1    |
linkeddatahub_1    | ### Secretary WebID certificate's modulus: c33d3ab2873ed78...
linkeddatahub_1    | ### Waiting for http://fuseki-admin:3030/ds/...
linkeddatahub_1    | ### URL http://fuseki-admin:3030/ds/ not responding after 20 seconds, exiting...
linkeddatahub_linkeddatahub_1 exited with code 

I'm not sure where localhost:3333 comes from.

On subsequent launch attempts, the only thing that changes is one more error line in the logs:

linkeddatahub_1    | keytool error: java.lang.Exception: Key pair not generated, alias <ldh> already exists

Chart layout mode does not work on empty containers?

E.g. when looking at Root in Graph mode, ac:SVG mode throws an error in Saxon-CE:

SEVERE: XPathException in mode: '{http://saxonica.com/ns/interactiveXSLT}onclick' event: '[object MouseEvent]: An empty sequence is not allowed as the value of variable $min-x

Check that certificate password is ASCII

Apparently keytool (which we use during signup to generate WebID cert) does not accept non-ASCII characters:

stage.linkeddatahub-prod_1       | keytool error: java.security.KeyStoreException: Key protection  
algorithm not found: java.security.UnrecoverableKeyException: Encrypt Private Key failed: getSecretKey failed: Password is not ASCII
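A pre-flight check along these lines could reject non-ASCII passwords before they ever reach keytool. A minimal sketch (the variable name and example password are made up):

```shell
# Reject non-ASCII certificate passwords before calling keytool.
# LC_ALL=C makes grep work byte-wise; [^ -~] matches any byte outside
# the printable ASCII range (0x20 space to 0x7E tilde).
password='sécret'   # hypothetical example containing a non-ASCII character
if printf '%s' "$password" | LC_ALL=C grep -q '[^ -~]'; then
    echo "Password is not ASCII"
fi
```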

HTTP test for creating Restrictions

Right now, creating a Restriction instance such as TopicOfConceptItem results in a 500 Internal Server Error caused by an exception in the SkolemizingDatasetProvider:

org.glassfish.jersey.server.internal.process.MappableException: org.apache.jena.ontology.ConversionException: Cannot convert node http://www.w3.org/2002/07/owl#Restriction to OntClass: it does not have rdf:type owl:Class or equivalent

This can be reproduced using the SKOS demo app.

SPARQL result caching causes 404 Not Found

I've created an LDH app that adds document data to a container. I can see the container and links to the resources in the container, but when I click on the links to the resources, I get a 404. Stopping and then restarting the containers (with docker-compose down followed by docker-compose up -d) fixes the issue. @namedgraph recommended this remedy after suspecting a caching issue.

Application not found + Ontology cannot be null errors while attempting to access service launched on Ubuntu host

Setup is basically the same as in the previous issue. All Docker-based services seem to start normally (the only concern is the service startup time, Server startup in 339410 ms; not sure why it is so long and whether that's fine), but when one tries to access the launched app via a browser (Firefox) with the generated owner.p12 certificate, the following error occurs and the browser ends up displaying a blank page.

Docker logs:

Attaching to linkeddatahub_fuseki-admin_1, linkeddatahub_linkeddatahub_1, linkeddatahub_fuseki-end-user_1, linkeddatahub_nginx_1, linkeddatahub_email-server_1
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO  Apache Jena Fuseki 3.13.0
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO  Configuration file /var/fuseki/config.ttl
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO  Path = /ds; Services = [""=>gsp-rw, ""=>query, ""=>update]
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO    Memory: 483.4 MiB
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO    Java:   1.8.0_111
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO    OS:     Linux 4.15.0-55-generic amd64
fuseki-admin_1     | [2020-01-28 16:00:28] Server     INFO    PID:    1
fuseki-admin_1     | [2020-01-28 16:00:29] Server     INFO  Start Fuseki (port=3030)
fuseki-admin_1     | [2020-01-28 16:00:36] Fuseki     INFO  [1] GET http://fuseki-admin:3030/ds/
linkeddatahub_1    | ### Generating server certificate
linkeddatahub_1    |
linkeddatahub_1    | ### Quad store URL of the root admin service: http://fuseki-admin:3030/ds/
linkeddatahub_1    |
linkeddatahub_1    | ### Loading default datasets into the end-user/admin triplestores...
linkeddatahub_1    | ### URL http://fuseki-end-user:3030/ds/ responded
linkeddatahub_1    |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
linkeddatahub_1    |                                  Dload  Upload   Total   Spent    Left  Speed
linkeddatahub_1    |
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 46107    0    64  100 46043     90  64932 --:--:-- --:--:-- --:--:-- 64940
100 46108    0    65  100 46043     90  64348 --:--:-- --:--:-- --:--:-- 64305
linkeddatahub_1    | {
linkeddatahub_1    |   "count" : 207 ,
linkeddatahub_1    |   "tripleCount" : 0 ,
linkeddatahub_1    |   "quadCount" : 207
linkeddatahub_1    | }
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO  Apache Jena Fuseki 3.13.0
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO  Configuration file /var/fuseki/config.ttl
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO  Path = /ds; Services = [""=>gsp-rw, ""=>query, ""=>update]
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO    Memory: 483.4 MiB
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO    Java:   1.8.0_111
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO    OS:     Linux 4.15.0-55-generic amd64
fuseki-end-user_1  | [2020-01-28 16:00:28] Server     INFO    PID:    1
nginx_1            | ### Waiting for linkeddatahub...
nginx_1            | ### linkeddatahub responded
fuseki-end-user_1  | [2020-01-28 16:00:29] Server     INFO  Start Fuseki (port=3030)
fuseki-end-user_1  | [2020-01-28 16:00:34] Fuseki     INFO  [1] GET http://fuseki-end-user:3030/ds/
fuseki-end-user_1  | [2020-01-28 16:00:35] Fuseki     INFO  [1] 200 OK (474 ms)
fuseki-end-user_1  | [2020-01-28 16:00:35] Fuseki     INFO  [2] GET http://fuseki-end-user:3030/ds/
fuseki-end-user_1  | [2020-01-28 16:00:35] Fuseki     INFO  [2] 200 OK (186 ms)
fuseki-end-user_1  | [2020-01-28 16:00:35] Fuseki     INFO  [3] POST http://fuseki-end-user:3030/ds/
fuseki-end-user_1  | [2020-01-28 16:00:35] Fuseki     INFO  [3] Body: Content-Length=46043, Content-Type=application/n-quads, Charset=null => N-Quads : Count=207 Triples=0 Quads=207
fuseki-end-user_1  | [2020-01-28 16:00:36] Fuseki     INFO  [3] 200 OK (701 ms)
email-server_1     | + sed -ri '
email-server_1     | 	s/^#?(dc_local_interfaces)=.*/\1='\''[0.0.0.0]:25 ; [::0]:25'\''/;
email-server_1     | 	s/^#?(dc_other_hostnames)=.*/\1='\'''\''/;
email-server_1     | 	s/^#?(dc_relay_nets)=.*/\1='\''172.19.0.3\/16'\''/;
email-server_1     | 	s/^#?(dc_eximconfig_configtype)=.*/\1='\''internet'\''/;
email-server_1     | ' /etc/exim4/update-exim4.conf.conf
email-server_1     | + update-exim4.conf -v
email-server_1     | using non-split configuration scheme from /etc/exim4/exim4.conf.template
email-server_1     |     1 LOG: MAIN
email-server_1     |     1   exim 4.92 daemon started: pid=1, -q15m, listening for SMTP on port 25 (IPv4)
fuseki-admin_1     | [2020-01-28 16:00:40] Fuseki     INFO  [1] 200 OK (3.585 s)
fuseki-admin_1     | [2020-01-28 16:00:40] Fuseki     INFO  [2] GET http://fuseki-admin:3030/ds/
fuseki-admin_1     | [2020-01-28 16:00:42] Fuseki     INFO  [2] 200 OK (2.332 s)
linkeddatahub_1    | ### URL http://fuseki-admin:3030/ds/ responded
linkeddatahub_1    |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
linkeddatahub_1    |                                  Dload  Upload   Total   Spent    Left  Speed
fuseki-admin_1     | [2020-01-28 16:00:42] Fuseki     INFO  [3] POST http://fuseki-admin:3030/ds/
fuseki-admin_1     | [2020-01-28 16:00:43] Fuseki     INFO  [3] Body: Content-Length=547638, Content-Type=application/n-quads, Charset=null => N-Quads : Count=2392 Triples=0 Quads=2392
fuseki-admin_1     | [2020-01-28 16:00:43] Fuseki     INFO  [3] 200 OK (1.077 s)
linkeddatahub_1    |
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  534k    0     0  100  534k      0   527k  0:00:01  0:00:01 --:--:--  527k
100  534k    0    67  100  534k     61   487k  0:00:01  0:00:01 --:--:--  487k
linkeddatahub_1    | {
linkeddatahub_1    |   "count" : 2392 ,
linkeddatahub_1    |   "tripleCount" : 0 ,
linkeddatahub_1    |   "quadCount" : 2392
linkeddatahub_1    | }
linkeddatahub_1    | openjdk version "1.8.0_181"
linkeddatahub_1    | OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13)
linkeddatahub_1    | OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
linkeddatahub_1    |      intx CompilerThreadStackSize                   = 0                                   {pd product}
linkeddatahub_1    |     uintx ErgoHeapSizeLimit                         = 0                                   {product}
linkeddatahub_1    |     uintx HeapSizePerGCThread                       = 87241520                            {product}
linkeddatahub_1    |     uintx InitialHeapSize                          := 33554432                            {product}
linkeddatahub_1    |     uintx LargePageHeapSizeThreshold                = 134217728                           {product}
linkeddatahub_1    |     uintx MaxHeapSize                              := 524288000                           {product}
linkeddatahub_1    |      intx ThreadStackSize                           = 1024                                {pd product}
linkeddatahub_1    |      intx VMThreadStackSize                         = 1024                                {pd product}
linkeddatahub_1    | ### Waiting for http://fuseki-end-user:3030/ds/...
fuseki-end-user_1  | [2020-01-28 16:00:43] Fuseki     INFO  [4] GET http://fuseki-end-user:3030/ds/
fuseki-end-user_1  | [2020-01-28 16:00:44] Fuseki     INFO  [4] 200 OK (173 ms)
fuseki-end-user_1  | [2020-01-28 16:00:44] Fuseki     INFO  [5] GET http://fuseki-end-user:3030/ds/
fuseki-end-user_1  | [2020-01-28 16:00:44] Fuseki     INFO  [5] 200 OK (112 ms)
linkeddatahub_1    | ### URL http://fuseki-end-user:3030/ds/ responded
linkeddatahub_1    | ### Waiting for http://fuseki-admin:3030/ds/...
fuseki-admin_1     | [2020-01-28 16:00:44] Fuseki     INFO  [4] GET http://fuseki-admin:3030/ds/
fuseki-admin_1     | [2020-01-28 16:00:46] Fuseki     INFO  [4] 200 OK (2.190 s)
fuseki-admin_1     | [2020-01-28 16:00:46] Fuseki     INFO  [5] GET http://fuseki-admin:3030/ds/
fuseki-admin_1     | [2020-01-28 16:00:47] Fuseki     INFO  [5] 200 OK (1.391 s)
linkeddatahub_1    | ### URL http://fuseki-admin:3030/ds/ responded
linkeddatahub_1    | ### Waiting for nginx...
linkeddatahub_1    | ### Host nginx responded
linkeddatahub_1    | Listening for transport dt_socket at address: 8000
linkeddatahub_1    | 28-Jan-2020 17:00:49.909 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.0.53
linkeddatahub_1    | 28-Jan-2020 17:00:49.920 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Jun 29 2018 14:42:45 UTC
linkeddatahub_1    | 28-Jan-2020 17:00:49.921 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.0.53.0
linkeddatahub_1    | 28-Jan-2020 17:00:49.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
linkeddatahub_1    | 28-Jan-2020 17:00:49.923 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            4.15.0-55-generic
linkeddatahub_1    | 28-Jan-2020 17:00:49.924 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
linkeddatahub_1    | 28-Jan-2020 17:00:49.925 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/lib/jvm/java-8-openjdk-amd64/jre
linkeddatahub_1    | 28-Jan-2020 17:00:49.927 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           1.8.0_181-8u181-b13-1~deb9u1-b13
linkeddatahub_1    | 28-Jan-2020 17:00:49.928 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
linkeddatahub_1    | 28-Jan-2020 17:00:49.929 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
linkeddatahub_1    | 28-Jan-2020 17:00:49.930 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
linkeddatahub_1    | 28-Jan-2020 17:00:49.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
linkeddatahub_1    | 28-Jan-2020 17:00:49.935 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
linkeddatahub_1    | 28-Jan-2020 17:00:49.936 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
linkeddatahub_1    | 28-Jan-2020 17:00:49.937 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
linkeddatahub_1    | 28-Jan-2020 17:00:49.939 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
linkeddatahub_1    | 28-Jan-2020 17:00:49.940 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Duser.timezone=Europe/Copenhagen
linkeddatahub_1    | 28-Jan-2020 17:00:49.941 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
linkeddatahub_1    | 28-Jan-2020 17:00:49.943 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
linkeddatahub_1    | 28-Jan-2020 17:00:49.945 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
linkeddatahub_1    | 28-Jan-2020 17:00:49.947 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
linkeddatahub_1    | 28-Jan-2020 17:00:49.948 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library 1.2.17 using APR version 1.5.2.
linkeddatahub_1    | 28-Jan-2020 17:00:49.950 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
linkeddatahub_1    | 28-Jan-2020 17:00:49.964 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized (OpenSSL 1.1.0f  25 May 2017)
linkeddatahub_1    | 28-Jan-2020 17:00:50.272 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-apr-8080"]
linkeddatahub_1    | 28-Jan-2020 17:00:50.313 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-apr-8009"]
linkeddatahub_1    | 28-Jan-2020 17:00:50.400 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8443"]
linkeddatahub_1    | 28-Jan-2020 17:00:51.698 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
linkeddatahub_1    | 28-Jan-2020 17:00:51.710 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 3387 ms
linkeddatahub_1    | 28-Jan-2020 17:00:51.803 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
linkeddatahub_1    | 28-Jan-2020 17:00:51.807 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.53
linkeddatahub_1    | 28-Jan-2020 17:00:51.855 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor /usr/local/tomcat/conf/Catalina/localhost/ROOT.xml
linkeddatahub_1    | 28-Jan-2020 17:01:00.588 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
nginx_1            | 195.231.4.32 - - [28/Jan/2020:16:04:54 +0000] "GET login.cgi HTTP/1.1" 400 157 "-" "-"
linkeddatahub_1    | 28-Jan-2020 17:06:30.989 WARNING [localhost-startStop-1] org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [330,040] milliseconds.
linkeddatahub_1    | 28-Jan-2020 17:06:31.054 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deployment of configuration descriptor /usr/local/tomcat/conf/Catalina/localhost/ROOT.xml has finished in 339,196 ms
linkeddatahub_1    | 28-Jan-2020 17:06:31.063 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-apr-8080"]
linkeddatahub_1    | 28-Jan-2020 17:06:31.107 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-apr-8009"]
linkeddatahub_1    | 28-Jan-2020 17:06:31.114 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8443"]
linkeddatahub_1    | 28-Jan-2020 17:06:31.122 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 339410 ms
nginx_1            | 161.142.234.65 [28/Jan/2020:16:14:43 +0000] TCP [] [upstream_server_https] 200 1416 7 6.556
nginx_1            | 161.142.234.65 [28/Jan/2020:16:14:43 +0000] TCP [] [upstream_server_https] 200 1416 7 6.315
linkeddatahub_1    | 28-Jan-2020 17:14:50.068 INFO [http-nio-8443-exec-9] com.sun.jersey.server.impl.application.WebApplicationImpl._initiate Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
nginx_1            | 161.142.234.65 [28/Jan/2020:16:14:55 +0000] TCP [] [upstream_server_https] 200 1467 1422 5.953
linkeddatahub_1    | 28-Jan-2020 17:15:01.694 INFO [http-nio-8443-exec-9] com.sun.jersey.server.impl.application.DeferredResourceConfig$ApplicationHolder.<init> Instantiated the Application class com.atomgraph.linkeddatahub.Application
linkeddatahub_1    | 28-Jan-2020 17:15:04.309 SEVERE [http-nio-8443-exec-9] com.sun.jersey.spi.container.ContainerResponse.mapException Exception mapper com.atomgraph.server.mapper.NotFoundExceptionMapper@45cf1f83 for Throwable com.atomgraph.core.exception.NotFoundException: Application not found threw a RuntimeException when attempting to obtain the response
[33mlinkeddatahub_1    |[0m 28-Jan-2020 17:15:04.315 SEVERE [http-nio-8443-exec-9] com.sun.jersey.spi.container.ContainerResponse.logException Mapped exception to response: 500 (Internal Server Error)
[33mlinkeddatahub_1    |[0m  java.lang.IllegalArgumentException: Ontology cannot be null
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.processor.util.TemplateMatcher.match(TemplateMatcher.java:229)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.processor.util.TemplateMatcher.match(TemplateMatcher.java:137)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateProvider.getTemplate(TemplateProvider.java:77)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateProvider.getTemplate(TemplateProvider.java:72)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateProvider.getContext(TemplateProvider.java:67)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateProvider.getContext(TemplateProvider.java:37)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateCallProvider.getTemplate(TemplateCallProvider.java:97)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateCallProvider.getTemplateCall(TemplateCallProvider.java:73)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateCallProvider.getContext(TemplateCallProvider.java:68)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.provider.TemplateCallProvider.getContext(TemplateCallProvider.java:38)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.mapper.ExceptionMapperBase.getTemplateCall(ExceptionMapperBase.java:164)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.mapper.ExceptionMapperBase.getResponseBuilder(ExceptionMapperBase.java:102)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.mapper.NotFoundExceptionMapper.toResponse(NotFoundExceptionMapper.java:35)
[33mlinkeddatahub_1    |[0m 	at com.atomgraph.server.mapper.NotFoundExceptionMapper.toResponse(NotFoundExceptionMapper.java:29)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.spi.container.ContainerResponse.mapException(ContainerResponse.java:480)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1479)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
[33mlinkeddatahub_1    |[0m 	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
[33mlinkeddatahub_1    |[0m 	at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
[33mlinkeddatahub_1    |[0m 	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
[33mlinkeddatahub_1    |[0m 	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
[33mlinkeddatahub_1    |[0m 	at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
[33mlinkeddatahub_1    |[0m 	at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
[33mlinkeddatahub_1    |[0m 	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1539)
[33mlinkeddatahub_1    |[0m 	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1495)
[33mlinkeddatahub_1    |[0m 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[33mlinkeddatahub_1    |[0m 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[33mlinkeddatahub_1    |[0m 	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
[33mlinkeddatahub_1    |[0m 	at java.lang.Thread.run(Thread.java:748)
[33mlinkeddatahub_1    |[0m 
[35mnginx_1            |[0m 161.142.234.65 [28/Jan/2020:16:15:04 +0000] TCP [] [upstream_server_https] 200 1730 1789 15.191

PUT changes dct:created value

A PUT request should only add (or update) the dct:modified value, leaving dct:created intact. Right now the dct:created value gets updated instead.
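The intended semantics can be sketched as follows. This is not the actual LinkedDataHub implementation; a plain map stands in for the stored RDF description (property URI to literal value), and the class and method names are illustrative:

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Sketch of the intended PUT semantics: the incoming description replaces
// the stored one, but dct:created is carried over and dct:modified is
// stamped fresh on every PUT.
public class PutSemantics {

    static final String CREATED  = "http://purl.org/dc/terms/created";
    static final String MODIFIED = "http://purl.org/dc/terms/modified";

    public static Map<String, String> put(Map<String, String> stored,
                                          Map<String, String> incoming,
                                          Instant now) {
        Map<String, String> result = new HashMap<>(incoming);
        if (stored.containsKey(CREATED))
            result.put(CREATED, stored.get(CREATED)); // keep the original creation time
        result.put(MODIFIED, now.toString());         // always refresh the modification time
        return result;
    }
}
```

Whatever dct:created value the client sends is thus ignored in favor of the stored one, which is what makes repeated PUTs safe.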

Signup - 500 error on create context link

After signing up, Email says:

When you're ready, proceed to create a context for your Linked Data applications by following this link: https://linkeddatahub.com/docs/create?forClass=https%3A%2F%2Flinkeddatahub.com%2Fdocs%2Fns%23Context
Read more about LinkedDataHub contexts here: https://linkeddatahub.com/docs/manage-apps#contexts

But this fails with:

HTTP Status 500 - com.sun.jersey.api.container.ContainerException: Unable to create resource class com.atomgraph.platform.server.impl.ResourceBase

root cause
...
com.atomgraph.processor.exception.ParameterException: Parameter 'forClass' not supported by Template '[<http://atomgraph.com/ns/platform/templates#Item>: "(.*)", 0.0]'
    com.atomgraph.processor.util.TemplateCall.applyArguments(TemplateCall.java:90)

Correct link should probably be https://linkeddatahub.com/create?forClass=https%3A%2F%2Flinkeddatahub.com%2Fns%23Context

A UI trail of facet/parallax actions

Facet and parallax navigation generates a new container state with each action; the user should be able to go back to a previous state. Right now the only recourse is to refresh the page, which returns to the initial container state.

It should also be possible to generate a URL for a given state and to restore a state from such a URL.

Insufficient Java memory

During installation of https://linkeddatahub.com:4443/demo/city-graph/:

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 39321600 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/tomcat/hs_err_pid39.log

Signup validation

Some comments from a test user:

  • "Personal mailbox" - not clear that it's email address
  • email address without @ was accepted

CLI scripts for updating resources

Currently we only provide create CLI scripts for resources such as services, queries, charts etc. As a result, each app install run creates duplicate resources with different URIs, and resource metadata cannot be updated.

We do have an update-document script which updates documents using a PUT request. We should take the same approach for resources and provide update-* scripts in addition to the create-* ones.
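The core of such a script is just an idempotent PUT of the new description. A minimal sketch in Java (the existing CLI scripts are shell, so this only illustrates the request shape; the endpoint URI and Turtle payload are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the request an update-* script would issue: overwrite a
// resource description with PUT, mirroring the update-document approach.
public class UpdateResource {

    public static HttpRequest buildPut(String uri, String turtle) {
        return HttpRequest.newBuilder(URI.create(uri))
                .header("Content-Type", "text/turtle")            // send the new description as Turtle
                .PUT(HttpRequest.BodyPublishers.ofString(turtle)) // PUT replaces, so reruns are idempotent
                .build();
    }
}
```

Because PUT replaces the description at a known URI, rerunning an app install with update-* scripts would update metadata in place instead of minting duplicate resources.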

WebID cache with expiration

getSystem().getWebIDModelCache() optimizes authentication by avoiding loading Agent data on every authentication. However, it should have an expiration period. Similarly, we need to introduce getSystem().getOIDCModelCache().

One possible solution is Guava's CacheBuilder.
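The behavior needed is roughly what CacheBuilder.expireAfterWrite() provides out of the box. A self-contained sketch of those semantics (class and method names are illustrative, and the clock is injectable only to make the expiry testable):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.LongSupplier;

// Minimal sketch of a model cache with per-entry expiration, keyed e.g.
// by WebID URI. Entries older than the TTL are treated as absent, forcing
// the Agent data to be re-loaded.
public class ExpiringCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long writtenAtMillis;
        Entry(V value, long writtenAtMillis) { this.value = value; this.writtenAtMillis = writtenAtMillis; }
    }

    private final ConcurrentMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock;

    public ExpiringCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, clock.getAsLong()));
    }

    // Returns null when the entry is absent or older than the TTL
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (clock.getAsLong() - e.writtenAtMillis > ttlMillis) {
            map.remove(key, e); // evict the stale entry
            return null;
        }
        return e.value;
    }
}
```

With Guava this collapses to a one-line CacheBuilder configuration, which is why it is the obvious candidate.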

Handle 403 during form POST

Currently, an agent that is authenticated but not authorized can still see the "Create", "Edit" etc. buttons and can open the forms.
When "Save" is clicked, the server returns 403 Forbidden, but the UI silently swallows it.
Fix by showing an error popup.
Alternatively, don't show the form controls in the first place?

Saving LD resource or query result

Add a [Save] button in LD browser mode and SPARQL query editor to save the current RDF graph.

We don't want to render hidden RDF/POST forms with a lot of data, so this needs some client-side logic. The prerequisite is probably that we load the data on the client side in the first place, which is currently not the case for the LD browser.

Does it make sense to provide a container choice? We would need to attach all documents to it, but some resources might remain disconnected.

Maybe we need to store it in a "user-space" named graph and provide access to it through a new Graphs container, while keeping system named graphs hidden(?).

Imports can cause concurrent write requests to the triplestore

LinkedDataHub has no problem sending concurrent streams of outbound RDF data, but it turns out the Graph Store Protocol fails on concurrent write (POST, PUT, DELETE) requests. At least Dydra fails; Fuseki needs further testing.

The solution would be to queue outbound requests in ImportListener, possibly as simply as reducing the number of Executor threads to 1.
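A single-threaded executor already gives FIFO queuing for free: tasks submitted from any number of threads are drained one at a time, so the store never sees two writes at once. A minimal sketch (the write task here just records its sequence number instead of issuing a real Graph Store Protocol request; names are illustrative, not ImportListener's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of serializing triplestore writes through a single-threaded
// executor: submissions queue up and run strictly one after another.
public class SerializedWrites {

    public static List<Integer> run(int tasks) {
        ExecutorService executor = Executors.newSingleThreadExecutor(); // one worker => no concurrent writes
        List<Integer> order = new ArrayList<>(); // only ever touched by the single worker thread
        for (int i = 0; i < tasks; i++) {
            final int n = i;
            executor.submit(() -> order.add(n)); // stand-in for a POST/PUT to the triplestore
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return order;
    }
}
```

The trade-off is throughput: imports would be bottlenecked on a single write stream, which may be acceptable given that the alternative is failed requests.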

Timezones get lost in date/time rendering

After 3 edits, the dct:created value is shown as being later than dct:modified:

Date Created
    21 August 2020 12:11
Date Modified
    21 August 2020 10:13
    21 August 2020 10:18
    21 August 2020 10:27

However, in the RDF data the values are correct; only the timezones differ (GMT+2 vs GMT (Z)):

<http://purl.org/dc/terms/created>
    "2020-08-21T12:11:10.412+02:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> ;
<http://purl.org/dc/terms/modified>
    "2020-08-21T10:27:31.125Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> , "2020-08-21T10:18:24.954Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> , "2020-08-21T10:13:28.324Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> ;

Linked Data client fails to read XML resources

Returns 500 Internal Server Error due to

A message body reader for Java class org.apache.jena.rdf.model.Model, and Java type interface org.apache.jena.rdf.model.Model, and MIME media type text/xml was not found

Layout tabs should be visible in item view

Right now layout mode controls are only shown in the container view. There's no reason why some of them (at least ac:MapMode and ac:GraphMode) couldn't also be shown in the item view.

Leverage browser caching of static resources
