Comments (11)
Hi,
Maybe to start with: what are the simplest queries for you? Do they run on the public endpoint? Did you try with our HDTs? We are making modifications to HDT, but currently they should not have a big impact ...
Cheers,
D063520
Thanks for the quick reply.
The queries are as simple as this one, and they run instantly on the Wikidata endpoint:
PREFIX wd: <http://www.wikidata.org/entity/>
SELECT * WHERE {wd:Q84 ?s ?p .}
No, I have not tried your HDT files yet; before downloading them, I wanted to ask and make sure that is the only issue.
Regards, Amin
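A lookup like this can also be reproduced directly against the public Wikidata endpoint, e.g. with curl; a sketch, with a LIMIT added just to keep the output small:
# Query the well-known public endpoint over HTTP GET; returns CSV.
curl -sG 'https://query.wikidata.org/sparql' \
  --header 'Accept: text/csv' \
  --data-urlencode 'query=PREFIX wd: <http://www.wikidata.org/entity/> SELECT * WHERE { wd:Q84 ?s ?p . } LIMIT 10'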
Looking at your logs (http://crispy.ai.wu.ac.at/actuator/logfile), it seems that you have a 5s timeout limit:
10:18:25.710 [http-nio-1234-exec-7] INFO c.t.q.compiler.SparqlRepository - Running given sparql query: PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX cc: <http://creativecommons.org/ns#>
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX p: <http://www.wikidata.org/prop/>
PREFIX pq: <http://www.wikidata.org/prop/qualifier/>
PREFIX pqn: <http://www.wikidata.org/prop/qualifier/value-normalized/>
PREFIX pqv: <http://www.wikidata.org/prop/qualifier/value/>
PREFIX pr: <http://www.wikidata.org/prop/reference/>
PREFIX prn: <http://www.wikidata.org/prop/reference/value-normalized/>
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX prv: <http://www.wikidata.org/prop/reference/value/>
PREFIX ps: <http://www.wikidata.org/prop/statement/>
PREFIX psn: <http://www.wikidata.org/prop/statement/value-normalized/>
PREFIX psv: <http://www.wikidata.org/prop/statement/value/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX schema: <http://schema.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX wdata: <http://www.wikidata.org/wiki/Special:EntityData/>
PREFIX wdno: <http://www.wikidata.org/prop/novalue/>
PREFIX wdref: <http://www.wikidata.org/reference/>
PREFIX wds: <http://www.wikidata.org/entity/statement/>
PREFIX wdtn: <http://www.wikidata.org/prop/direct-normalized/>
PREFIX wdv: <http://www.wikidata.org/value/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT *
WHERE {
?city wdt:P31/wdt:P279* wd:Q515 .
?city wdt:P1082 ?pop .
FILTER (?pop > 1000)
}
ORDER BY DESC(?pop)
10:18:30.779 [http-nio-1234-exec-7] ERROR c.t.q.compiler.SparqlRepository - This exception was caught [org.eclipse.rdf4j.query.QueryEvaluationException: com.the_qa_company.qendpoint.store.exception.EndpointTimeoutException
The timestamps (10:18:25.710 → 10:18:30.779) show the query was cut off after just over 5 seconds.
Maybe you can try increasing the timeout value. The way we compute results isn't the same as the one used by Wikidata (Blazegraph), so some queries might not be as fast. For example, it seems to take 30s to answer your query.
[screenshot: the city/population query completing in roughly 30 seconds]
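Such timings can be measured independently of the web UI by timing the raw HTTP request; a sketch assuming qEndpoint's default SPARQL path and the port 1234 visible in the logs:
# The endpoint URL is an assumption based on the logs and qEndpoint
# defaults; adjust to your deployment.
time curl -sG 'http://localhost:1234/api/endpoint/sparql' \
  --header 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=PREFIX wd: <http://www.wikidata.org/entity/> SELECT * WHERE { wd:Q84 ?s ?p . }' \
  > /dev/null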
30 seconds for a simple triple pattern with a fixed subject is not normal ...
[screenshot: the same kind of lookup returning quickly]
This is a slightly different setup, but also with an HDT and qEndpoint.
True, I didn't notice the 1k results. Can you tell us more about the hardware, @anjomshoaa (disk/CPU)?
Also, can you give us the header of the HDT?
# PowerShell (Windows): print the first 19 lines, which include the header triples
gc -TotalCount 19 your-hdt.hdt
# bash (Linux/macOS): same thing
head -n 19 your-hdt.hdt
Thanks, the city population query was adapted from one of the qEndpoint papers:
Willerval, A., Bonifati, A., & Diefenbach, D. (2023, April). qEndpoint: A Wikidata SPARQL endpoint on commodity hardware. In Companion Proceedings of the ACM Web Conference 2023 (pp. 119-122).
Also, regarding the wd:Q84 query: if you insist and retry 4-5 times, the results appear, and then repeated queries are fast. However, if you look up another entity it is slow again :-(
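That warm-up behaviour can be made visible by timing the same lookup several times in a row; a sketch, with the endpoint URL as an assumption:
# With a cold page cache the first run(s) should be much slower than the
# later ones; switching to a new entity starts cold again.
for i in 1 2 3 4 5; do
  curl -sG -o /dev/null -w "run $i: %{time_total}s\n" \
    'http://localhost:1234/api/endpoint/sparql' \
    --data-urlencode 'query=PREFIX wd: <http://www.wikidata.org/entity/> SELECT * WHERE { wd:Q84 ?s ?p . }'
done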
Sure, here is the HDT header:
$HDT<http://purl.org/HDT/hdt#HDTv1>v5$HDTntripleslength=1841;?   (binary control bytes)
_:statistics <http://purl.org/HDT/hdt#originalSize> "2666797479600" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://purl.org/HDT/hdt#Dataset> .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://rdfs.org/ns/void#Dataset> .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://rdfs.org/ns/void#triples> "18090254954" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://rdfs.org/ns/void#properties> "51024" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://rdfs.org/ns/void#distinctSubjects> "1931121124" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://rdfs.org/ns/void#distinctObjects> "3320482641" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://purl.org/HDT/hdt#formatInformation> "_:format" .
_:format <http://purl.org/HDT/hdt#dictionary> "_:dictionary" .
_:format <http://purl.org/HDT/hdt#triples> "_:triples" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://purl.org/HDT/hdt#statisticalInformation> "_:statistics" .
<file://[/backup_nfs/hdt-pipeline/download/latest-all.nt]> <http://purl.org/HDT/hdt#publicationInformation> "_:publicationInformation" .
_:publicationInformation <http://purl.org/dc/terms/issued> "2023-05-16T18:45Z" .
_:dictionary <http://purl.org/dc/terms/format> <http://purl.org/HDT/hdt#dictionaryFour> .
_:dictionary <http://purl.org/HDT/hdt#dictionarynumSharedSubjectObject> "1738683562" .
_:triples <http://purl.org/dc/terms/format> <http://purl.org/HDT/hdt#triplesBitmap> .
_:triples <http://purl.org/HDT/hdt#triplesnumTriples> "18090254954" .
_:triples <http://purl.org/HDT/hdt#triplesOrder> "SPO" .
_:statistics <http://purl.org/HDT/hdt#hdtSize> "193188300658" .
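As a side note, the statistics in this header give the compression ratio directly (originalSize is the source N-Triples size in bytes, hdtSize the HDT size):
# ~2.67 TB of N-Triples packed into ~193 GB of HDT, i.e. about 13.8x.
awk 'BEGIN { nt = 2666797479600; hdt = 193188300658;
             printf "%.2f TB -> %.1f GB (%.1fx compression)\n",
                    nt/1e12, hdt/1e9, nt/hdt }'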
Also, the CPU information (on the host where Docker runs) is as follows:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-4,10,15,20-22,34,37,38,40,69,90
Off-line CPU(s) list: 5-9,11-14,16-19,23-33,35,36,39,41-68,70-89,91-95
Thread(s) per core: 0
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7352 24-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2300.000
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4591.83
Virtualization: AMD-V
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 128 MiB
Thanks!
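The hardware question above also asked about the disk; a quick way to gather that side with standard Linux tools (the HDT path is a placeholder):
lsblk -o NAME,SIZE,ROTA,TYPE,MOUNTPOINT   # ROTA=1 means a spinning disk
df -hT /path/to/your-hdt.hdt              # shows the filesystem type (e.g. nfs)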
Also, I wanted to add that the HDT files are on a different server and mounted on the Docker host. The mounted folder is used as a volume by the qEndpoint Docker container.
This is not a good idea for a database; the bottleneck is basically the transfer speed between the two servers. Do you have another option for this?
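One rough way to confirm the mount is the bottleneck is to compare the sequential read speed of the NFS-mounted HDT against a local copy (paths are hypothetical; HDT access is largely random reads via mmap, so this only hints at the real difference):
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches          # clear the page cache first
dd if=/mnt/nfs/wikidata.hdt of=/dev/null bs=1M count=1024     # over the network
dd if=/data/local/wikidata.hdt of=/dev/null bs=1M count=1024  # local disk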
Thanks for the feedback. I will move the data to the hosting server then. Hopefully this will solve the problem :-)
After moving the HDT files to the same server as Docker, the timeout issue is solved. Thank you for your support :-)
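For completeness, a sketch of the fixed setup: bind-mount a directory on local disk into the container. The image name and internal path are assumptions; check the qEndpoint README for your version.
# Hypothetical run command; -v binds the local HDT store instead of an NFS share.
docker run -d -p 1234:1234 \
  -v /data/local/qendpoint:/app/qendpoint \
  qacompany/qendpoint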