
archived-sansa-examples's Introduction

SANSA-Stack


This project comprises the whole Semantic Analytics Stack (SANSA). At a glance, it features the following functionality:

  • Ingesting RDF and OWL data in various formats into RDDs
  • Operators for working with RDDs and data frames of RDF data at various levels (triples, bindings, graphs, etc.)
  • Transformation of RDDs to data frames and partitioning of RDDs into R2RML-mapped data frames
  • Distributed SPARQL querying over R2RML-mapped data frame partitions using RDB2RDF engines (Sparqlify & Ontop)
  • Enrichment of RDDs with inferences
  • Application of machine learning algorithms

For a detailed description of SANSA, please visit http://sansa-stack.net.
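As a rough illustration of the first bullet point, the snippet below loads an N-Triples file into an RDD of Jena triples. This is only a sketch: it assumes an existing SparkSession named spark (as outlined under Requirements below), and the exact overloads of NTripleReader.load and of the implicit readers in net.sansa_stack.rdf.spark.io may differ between releases.

import net.sansa_stack.rdf.spark.io.NTripleReader

// Load an N-Triples file into an RDD[org.apache.jena.graph.Triple];
// the path is the sample file referenced elsewhere in this repository.
val triples = NTripleReader.load(spark, java.net.URI.create("src/main/resources/rdf.nt"))
println(s"Number of triples: ${triples.count()}")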

Layers

The SANSA project is structured in the following five layers, developed in their respective sub-folders:

  • RDF (sansa-rdf)
  • OWL (sansa-owl)
  • Query (sansa-query)
  • Inference (sansa-inference)
  • ML (sansa-ml)

Release Cycle

A SANSA stack release is made every six months and consists of the latest stable version of each layer at that point. This repository is used for organising those joint releases.

Usage

Spark

Requirements

We currently require a Spark 3.x.x with Scala 2.12 setup. A Spark 2.x version can be built from source based on the spark2 branch.
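For orientation, a minimal Spark entry point matching this requirement could look like the following sketch (the application name and master URL are placeholders, not SANSA defaults):

import org.apache.spark.sql.SparkSession

object MySansaApp {
  def main(args: Array[String]): Unit = {
    // Requires Spark 3.x built against Scala 2.12 on the classpath
    val spark = SparkSession.builder()
      .appName("MySansaApp")   // placeholder application name
      .master("local[*]")      // placeholder; point this at your cluster instead
      .getOrCreate()

    // ... code using the SANSA layers goes here ...

    spark.stop()
  }
}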

Release Version

Some of our dependencies are not in Maven Central (yet), so you need to add the following Maven repository to the repositories section of your project POM file:

<repository>
   <id>maven.aksw.internal</id>
   <name>AKSW Release Repository</name>
   <url>http://maven.aksw.org/archiva/repository/internal</url>
   <releases>
      <enabled>true</enabled>
   </releases>
   <snapshots>
      <enabled>false</enabled>
   </snapshots>
</repository>

If you want to import the full SANSA Stack, please add the following Maven dependency to your project POM file:

<!-- SANSA Stack -->
<dependency>
   <groupId>net.sansa-stack</groupId>
   <artifactId>sansa-stack-spark_2.12</artifactId>
   <version>$LATEST_RELEASE_VERSION$</version>
</dependency>

If you only want to use particular layers, just replace $LAYER_NAME$ with the name of the corresponding layer (rdf, owl, query, inference, or ml):

<!-- SANSA $LAYER_NAME$ layer -->
<dependency>
   <groupId>net.sansa-stack</groupId>
   <artifactId>sansa-$LAYER_NAME$-spark_2.12</artifactId>
   <version>$LATEST_RELEASE_VERSION$</version>
</dependency>

SNAPSHOT Version

While the release versions are available on Maven Central, the latest SNAPSHOT versions have to be installed from source:

git clone https://github.com/SANSA-Stack/SANSA-Stack.git
cd SANSA-Stack

Then, to build and install the full SANSA Spark stack, run:

./dev/mvn_install_stack_spark.sh 

or, for a single layer $LAYER_NAME$, run:

mvn -am -DskipTests -pl :sansa-$LAYER_NAME$-spark_2.12 clean install 

Alternatively, you can use the following Maven repository and add it to the repositories section of your project POM file:

<repository>
   <id>maven.aksw.snapshots</id>
   <name>AKSW Snapshot Repository</name>
   <url>http://maven.aksw.org/archiva/repository/snapshots</url>
   <releases>
      <enabled>false</enabled>
   </releases>
   <snapshots>
      <enabled>true</enabled>
   </snapshots>
</repository>

Then do the same as for the release version and add the dependency:

<!-- SANSA Stack -->
<dependency>
   <groupId>net.sansa-stack</groupId>
   <artifactId>sansa-stack-spark_2.12</artifactId>
   <version>$LATEST_SNAPSHOT_VERSION$</version>
</dependency>

How to Contribute

We always welcome new contributors to the project! Please see our contribution guide for more details on how to get started contributing to SANSA.

archived-sansa-examples's People

Contributors

aklakan, cescwang1991, gezimsejdiu, imghasemi, lorenzbuehmann, nilesh-c, patrickwestphal, simonbin, tinabo


archived-sansa-examples's Issues

mvn clean package for spark fails

➜  sansa-examples-spark git:(master) mvn clean package
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building SANSA Examples - Apache Spark 2016-12
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.836 s
[INFO] Finished at: 2017-05-09T12:25:12+02:00
[INFO] Final Memory: 25M/429M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project sansa-examples-spark: Could not resolve dependencies for project net.sansa-stack:sansa-examples-spark:jar:2016-12: Failed to collect dependencies at net.sansa-stack:sansa-ml-spark:jar:0.1.0: Failed to read artifact descriptor for net.sansa-stack:sansa-ml-spark:jar:0.1.0: Failure to find net.sansa-stack:sansa-ml-parent:pom:0.1.0 in https://oss.sonatype.org/content/repositories/snapshots/ was cached in the local repository, resolution will not be reattempted until the update interval of oss-sonatype has elapsed or updates are forced -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

Error when querying SPARQL endpoint using the sparqlify example

I have converted the pizza ontology (https://protege.stanford.edu/ontologies/pizza/pizza.owl) to N-Triples using the OWL API and then gave it as input to the Sparqlify example. It sets up the SPARQL endpoint fine, and generic queries execute without problems, such as:

SELECT * WHERE {
  ?s ?p ?o .
}

However, when I try to search for a literal with a language tag:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT * WHERE {
  ?s rdfs:label "American"@en .
}

I get an error:

Exception in thread "Thread-45" java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: No SQL conversion found for NodeValue: "American"^^<http://www.w3.org/1999/02/22-rdf-syntax-ns#langString> ( http://www.w3.org/1999/02/22-rdf-syntax-ns#langString)
	at org.aksw.jena_sparql_api.web.utils.RunnableAsyncResponseSafe.run(RunnableAsyncResponseSafe.java:29)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: No SQL conversion found for NodeValue: "American"^^<http://www.w3.org/1999/02/22-rdf-syntax-ns#langString> ( http://www.w3.org/1999/02/22-rdf-syntax-ns#langString)
	at org.aksw.jena_sparql_api.web.servlets.SparqlEndpointBase$3.run(SparqlEndpointBase.java:354)
	at org.aksw.jena_sparql_api.web.utils.RunnableAsyncResponseSafe.run(RunnableAsyncResponseSafe.java:26)
	... 1 more
Caused by: java.lang.RuntimeException: No SQL conversion found for NodeValue: "American"^^<http://www.w3.org/1999/02/22-rdf-syntax-ns#langString> ( http://www.w3.org/1999/02/22-rdf-syntax-ns#langString)
	at org.aksw.sparqlify.core.cast.TypeSystemImpl.convertSql(TypeSystemImpl.java:231)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.translate(TypedExprTransformerImpl.java:766)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:268)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:320)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:277)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:320)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:277)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:320)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:277)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:320)
	at org.aksw.sparqlify.core.cast.TypedExprTransformerImpl.rewrite(TypedExprTransformerImpl.java:229)
	at org.aksw.sparqlify.util.SqlTranslatorImpl2.translate(SqlTranslatorImpl2.java:73)
	at org.aksw.sparqlify.core.algorithms.MappingOpsImpl.createExprSqlRewrites(MappingOpsImpl.java:481)
	at org.aksw.sparqlify.core.algorithms.MappingOpsImpl.createSqlConditionItems(MappingOpsImpl.java:500)
	at org.aksw.sparqlify.core.algorithms.MappingOpsImpl.createSqlCondition(MappingOpsImpl.java:522)
	at org.aksw.sparqlify.core.algorithms.MappingOpsImpl.filter(MappingOpsImpl.java:1276)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewrite(OpMappingRewriterImpl.java:161)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewrite(OpMappingRewriterImpl.java:324)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewriteList(OpMappingRewriterImpl.java:91)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewrite(OpMappingRewriterImpl.java:101)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewrite(OpMappingRewriterImpl.java:336)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewrite(OpMappingRewriterImpl.java:207)
	at org.aksw.sparqlify.core.algorithms.OpMappingRewriterImpl.rewrite(OpMappingRewriterImpl.java:320)
	at org.aksw.sparqlify.core.interfaces.SparqlSqlOpRewriterImpl.rewrite(SparqlSqlOpRewriterImpl.java:98)
	at org.aksw.sparqlify.core.algorithms.SparqlSqlStringRewriterImpl.rewrite(SparqlSqlStringRewriterImpl.java:44)
	at net.sansa_stack.query.spark.sparqlify.QueryExecutionSparqlifySpark.executeCoreSelect(QueryExecutionSparqlifySpark.java:39)
	at org.aksw.jena_sparql_api.core.QueryExecutionBaseSelect.execSelect(QueryExecutionBaseSelect.java:378)
	at org.aksw.jena_sparql_api.web.servlets.ProcessQuery.processQuery(ProcessQuery.java:121)
	at org.aksw.jena_sparql_api.web.servlets.ProcessQuery.processQuery(ProcessQuery.java:79)
	at org.aksw.jena_sparql_api.web.servlets.SparqlEndpointBase$3.run(SparqlEndpointBase.java:351)
	... 2 more

This seems to be valid syntax (https://www.w3.org/TR/rdf-sparql-query/#matchingRDFLiterals), so it might be a bug.

[SANSA-Examples-Spark] Exception in thread "main" java.lang.NoSuchMethodError

Hello all,
I'm trying to run the Sparqlify example following the running instructions

./spark-2.2.1-bin-hadoop2.7/bin/spark-submit --class net.sansa_stack.examples.spark.query.Sparqlify --master spark://spark-master:7077 SANSA-Examples/sansa-examples-spark/target/sansa-examples-spark_2.11-2017-12.1-SNAPSHOT.jar -i src/main/resources/rdf.nt

Exception in thread "main" java.lang.NoSuchMethodError: net.sansa_stack.rdf.spark.io.NTripleReader$.load$default$3()Lscala/Enumeration$Value;
	at net.sansa_stack.rdf.spark.io.package$RDFReader$$anonfun$ntriples$4.apply(package.scala:205)
	at net.sansa_stack.rdf.spark.io.package$RDFReader$$anonfun$ntriples$4.apply(package.scala:204)
	at net.sansa_stack.examples.spark.query.Sparqlify$.run(Sparqlify.scala:44)
	at net.sansa_stack.examples.spark.query.Sparqlify$.main(Sparqlify.scala:22)
	at net.sansa_stack.examples.spark.query.Sparqlify.main(Sparqlify.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Thanks in advance

Add example about how to run a SPARQL endpoint from SANSA

Essentially this code has to be added to an example:

        // Assumed imports (class locations taken from stack traces elsewhere in this
        // repository's issues; the QueryExecutionFactory package is an assumption):
        // import org.aksw.jena_sparql_api.core.QueryExecutionFactory;
        // import org.aksw.jena_sparql_api.server.utils.FactoryBeanSparqlServer;
        // import org.eclipse.jetty.server.Server;
        // import java.awt.Desktop;
        // import java.net.URI;

        QueryExecutionFactory qef = ...; // Create qef for spark, flink or whatever

        int port = 7531;
        Server server = FactoryBeanSparqlServer.newInstance()
                .setSparqlServiceFactory(qef)
                .setPort(port)
                .create();

        // Open the endpoint in the default browser when running on a desktop machine
        if (Desktop.isDesktopSupported()) {
            Desktop.getDesktop().browse(new URI("http://localhost:" + port + "/sparql"));
        }

        // Block until the server is shut down
        server.join();

In the inference example the transitive reasoner won't work if no list of properties is provided

Running the transitive reasoner in the RDFGraphInference example, one gets this error message:

Exception in thread "main" java.lang.RuntimeException: A list of properties has to be given for the transitive reasoner!
	at net.sansa_stack.inference.spark.forwardchaining.TransitiveReasoner.apply(TransitiveReasoner.scala:38)
	at net.sansa_stack.examples.spark.inference.RDFGraphInference$.main(RDFGraphInference.scala:81)
	at net.sansa_stack.examples.spark.inference.RDFGraphInference.main(RDFGraphInference.scala)

since the implementation of the TransitiveReasoner checks whether a list of properties was provided and raises an exception if not. Thus, the call interface has to be adjusted accordingly, or the transitive option needs to be dropped.

Issue when parsing large snomed ontology

I was trying to load SNOMED using the Sparqlify example code and there was a parsing error. I think it comes from the underlying Sparqlify library, though. Here is the error:

Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException: 
mismatched input '184298' expecting {'SELECT', 'FROM', 'ADD', 'AS', 'ALL', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'SEMI', 'RIGHT', 'FULL', 'NATURAL', 'ON', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'LAST', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'MACRO', 'IF', 'DIV', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'USING', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 'INPATH', 'CURRENT_DATE', 'CURRENT_TIMESTAMP', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 0)

== SQL ==
184298
^^^

	at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:197)
	at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:99)
	at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
	at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseTableIdentifier(ParseDriver.scala:48)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$createTempViewCommand(Dataset.scala:2657)
	at org.apache.spark.sql.Dataset$$anonfun$createOrReplaceTempView$1.apply(Dataset.scala:2629)
	at org.apache.spark.sql.Dataset$$anonfun$createOrReplaceTempView$1.apply(Dataset.scala:2629)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:2845)
	at org.apache.spark.sql.Dataset.createOrReplaceTempView(Dataset.scala:2628)
	at net.sansa_stack.query.spark.sparqlify.SparqlifyUtils3$$anonfun$1.apply(SparqlifyUtils3.scala:62)
	at net.sansa_stack.query.spark.sparqlify.SparqlifyUtils3$$anonfun$1.apply(SparqlifyUtils3.scala:45)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
	at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at net.sansa_stack.query.spark.sparqlify.SparqlifyUtils3$.createSparqlSqlRewriter(SparqlifyUtils3.scala:45)
	at net.sansa_stack.examples.spark.query.Sparklify$.run(Sparklify.scala:48)
	at net.sansa_stack.examples.spark.query.Sparklify$.main(Sparklify.scala:24)
	at net.sansa_stack.examples.spark.query.Sparklify.main(Sparklify.scala)

Process finished with exit code 1

I deleted a lot of the original ontology and this minimal file produces the same error:

<http://snomedct-20170731T150000Z> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Ontology> .
<http://snomedct-20170731T150000Z> <http://sourceName> "SNOMEDCT" .
<http://snomedct-20170731T150000Z> <http://www.w3.org/2000/01/rdf-schema#comment> "SNOMED Ontology (international)"@en .
<http://snomedct-20170731T150000Z> <http://www.w3.org/2002/07/owl#versionInfo> "20170731T150000Z"@en .
<http://snomedct-20170731T150000Z> <http://www.w3.org/2004/02/skos/core#prefLabel> "SNOMED (International)"@en .
# 
# 
# #################################################################
# #
# #    Annotation properties
# #
# #################################################################
# 
# 
# 
# http://someOnt/184298
<http://someOnt/184298> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#AnnotationProperty> .
<http://someOnt/184298> <http://www.w3.org/2004/02/skos/core#altLabel> "Finding site (attribute)"@en .
<http://someOnt/184298> <http://www.w3.org/2004/02/skos/core#prefLabel> "Finding site"@en .
# 
# 
# 
# #################################################################
# #
# #    Classes
# #
# #################################################################
# 
# 
<http://someOnt/1> <http://someOnt/184298> <http://someOnt/272277> .

It seems to be something to do with using object properties defined in the same file... Judging from the trace, Sparqlify registers a temporary view whose name is derived from the predicate's local name, and the Spark SQL parser rejects the purely numeric name 184298 as an identifier.

TripleOps has hardcoded <input> parameter

app_1  | Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/src/main/resources/rdf.nt
app_1  | 	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
app_1  | 	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
app_1  | 	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
app_1  | 	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
app_1  | 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
app_1  | 	at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
app_1  | 	at net.sansa_stack.examples.spark.rdf.TripleOps$.main(TripleOps.scala:52)
app_1  | 	at net.sansa_stack.examples.spark.rdf.TripleOps.main(TripleOps.scala)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
app_1  | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
app_1  | 	at java.lang.reflect.Method.invoke(Method.java:498)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
app_1  | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Unable to run MineRules example.

I tried running MineRules.scala with the file MineRules_sampledata.tsv provided in the resources folder. After parsing the file, I get an exception: Unable to infer schema for Parquet.
This is the complete traceback.

Exception in thread "main" org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:189)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:189)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$getOrInferFileFormatSchema(DataSource.scala:188)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:387)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
	at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:441)
	at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:425)
	at net.sansa_stack.ml.spark.mining.amieSpark.KBObject$KB$$anonfun$countProjectionQueriesDF$1$$anonfun$apply$7.apply(KBObject.scala:944)
	at net.sansa_stack.ml.spark.mining.amieSpark.KBObject$KB$$anonfun$countProjectionQueriesDF$1$$anonfun$apply$7.apply(KBObject.scala:906)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at net.sansa_stack.ml.spark.mining.amieSpark.KBObject$KB$$anonfun$countProjectionQueriesDF$1.apply(KBObject.scala:906)
	at net.sansa_stack.ml.spark.mining.amieSpark.KBObject$KB$$anonfun$countProjectionQueriesDF$1.apply(KBObject.scala:905)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
	at net.sansa_stack.ml.spark.mining.amieSpark.KBObject$KB.countProjectionQueriesDF(KBObject.scala:905)
	at net.sansa_stack.ml.spark.mining.amieSpark.KBObject$KB.addDanglingAtom(KBObject.scala:1686)
	at net.sansa_stack.ml.spark.mining.amieSpark.MineRules$Algorithm.refine(MineRules.scala:214)
	at net.sansa_stack.ml.spark.mining.amieSpark.MineRules$Algorithm$$anonfun$ruleMining$1$$anonfun$apply$mcVI$sp$1.apply$mcVI$sp(MineRules.scala:158)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
	at net.sansa_stack.ml.spark.mining.amieSpark.MineRules$Algorithm$$anonfun$ruleMining$1.apply$mcVI$sp(MineRules.scala:131)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
	at net.sansa_stack.ml.spark.mining.amieSpark.MineRules$Algorithm.ruleMining(MineRules.scala:73)
	at net.sansa_stack.examples.spark.ml.mining.MineRules$.main(MineRules.scala:60)
	at net.sansa_stack.examples.spark.ml.mining.MineRules.main(MineRules.scala)

Process finished with exit code 1

How should I provide a schema for the Parquet file, or is there some other mechanism?

Error when using Spark v2.2.0 without including Hadoop dependencies.

When we use Spark v2.2.0 without including the Hadoop dependencies, it causes this error:

Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/InputSplitWithLocationInfo
	at org.apache.spark.rdd.HadoopRDD.getPreferredLocations(HadoopRDD.scala:324)
	at org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:274)
	at org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:274)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.preferredLocations(RDD.scala:273)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1615)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1626)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1623)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1623)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1626)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1623)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1623)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1626)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1623)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1623)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1626)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1623)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1623)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1626)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1625)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1623)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1623)
	at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1589)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$15.apply(DAGScheduler.scala:969)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$15.apply(DAGScheduler.scala:969)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:969)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:930)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:933)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:932)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:932)
	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:874)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1677)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.InputSplitWithLocationInfo
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 68 more

This is because SANSA-ML and SANSA-OWL use Hadoop InputFormat definitions when reading OWL/RDF/XML.

The solution is to include Hadoop dependencies here as well.

Error while running SPARQL query with property path.

Hello, I was able to set up and run SANSA, following the instructions from #25.
But I was getting the following error when I tried to run a query with a property path (https://jena.apache.org/documentation/query/property_paths.html):
?x cim:Terminal.ConductingEquipment/cim:Terminal.ConductingEquipment ?y

Can you please help?


val sparqlPropertyPathQ =
  """
    |prefix cim: <http://iec.ch/TC57/2012/CIM-schema-cim16#>
    |prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    |select ?x  ?y where
    |{
    |  ?x rdf:type cim:EnergyConsumer.
    |  ?x cim:Terminal.ConductingEquipment/cim:Terminal.ConductingEquipment ?y .
    |}
  """.stripMargin

Error:
Exception in thread "main" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.aksw.commons.util.reflect.ClassUtils.forceInvoke(ClassUtils.java:38)
at org.aksw.commons.util.reflect.MultiMethod.invokeStatic(MultiMethod.java:92)
at org.aksw.jena_sparql_api.utils.ReplaceConstants.replace(ReplaceConstants.java:88)
at org.aksw.jena_sparql_api.views.CandidateViewSelectorBase.getApplicableViews(CandidateViewSelectorBase.java:478)
at org.aksw.sparqlify.core.interfaces.SparqlSqlOpRewriterImpl.rewrite(SparqlSqlOpRewriterImpl.java:81)
at org.aksw.sparqlify.core.algorithms.SparqlSqlStringRewriterImpl.rewrite(SparqlSqlStringRewriterImpl.java:44)
at net.sansa_stack.query.spark.query.package$SparqlifyAsDefault.sparql(package.scala:36)
at com.satish.scala.SansaGraphExample$.main(SansaGraphExample.scala:168)
at com.satish.scala.SansaGraphExample.main(SansaGraphExample.scala)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.aksw.commons.util.reflect.ClassUtils.forceInvoke(ClassUtils.java:35)
... 8 more
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.aksw.commons.util.reflect.ClassUtils.forceInvoke(ClassUtils.java:38)
at org.aksw.commons.util.reflect.MultiMethod.invokeStatic(MultiMethod.java:92)
at org.aksw.jena_sparql_api.utils.ReplaceConstants.replace(ReplaceConstants.java:88)
at org.aksw.jena_sparql_api.utils.ReplaceConstants._replace(ReplaceConstants.java:137)
... 13 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.aksw.commons.util.reflect.ClassUtils.forceInvoke(ClassUtils.java:35)
... 16 more
Caused by: No method '_replace' found for argument types [class org.apache.jena.sparql.algebra.op.OpGraph]
at org.aksw.commons.util.reflect.MultiMethod.findMethodByParams(MultiMethod.java:187)
at org.aksw.commons.util.reflect.MultiMethod.findMethodByParamsCached(MultiMethod.java:173)
at org.aksw.commons.util.reflect.MultiMethod.findMethodByArgs(MultiMethod.java:206)
at org.aksw.commons.util.reflect.MultiMethod.invokeStatic(MultiMethod.java:90)
at org.aksw.jena_sparql_api.utils.ReplaceConstants.replace(ReplaceConstants.java:88)
at org.aksw.jena_sparql_api.utils.ReplaceConstants._replace(ReplaceConstants.java:157)
... 21 more

Spark query example fails because of missing jetty XmlParser

I followed the instructions for running on Spark, using the following command:

../spark-2.1.1-bin-hadoop2.7/bin/spark-submit  --class net.sansa_stack.examples.spark.query.Sparklify --master spark://spark-master:7077 target/sansa-examples-spark_2.11-2017-06.1-SNAPSHOT.jar ~/all.ntriples

The execution fails because of a NoClassDefFoundError:

17/10/28 17:49:17 INFO WebInfConfiguration: Extract jar:file:/home/tr/sansa-examples/sansa-examples-spark/target/sansa-examples-spark_2.11-2017-06.1-SNAPSHOT.jar!/ to /tmp/jetty-0.0.0.0-7531-sansa-examples-spark_2.11-2017-06.1-SNAPSHOT.jar-_-any-/webapp
17/10/28 17:49:23 WARN AbstractLifeCycle: FAILED o.e.j.w.WebAppContext{/,file:/tmp/jetty-0.0.0.0-7531-sansa-examples-spark_2.11-2017-06.1-SNAPSHOT.jar-_-any-/webapp/},file:/home/tr/sansa-examples/sansa-examples-spark/target/sansa-examples-spark_2.11-2017-06.1-SNAPSHOT.jar: java.lang.NoClassDefFoundError: org/eclipse/jetty/xml/XmlParser
java.lang.NoClassDefFoundError: org/eclipse/jetty/xml/XmlParser
	at org.eclipse.jetty.webapp.WebDescriptor.newParser(WebDescriptor.java:70)
	at org.eclipse.jetty.webapp.WebDescriptor.ensureParser(WebDescriptor.java:61)
	at org.eclipse.jetty.webapp.Descriptor.parse(Descriptor.java:59)
	at org.eclipse.jetty.webapp.WebDescriptor.parse(WebDescriptor.java:148)
	at org.eclipse.jetty.webapp.MetaData.setDefaults(MetaData.java:149)
	at org.eclipse.jetty.webapp.WebXmlConfiguration.preConfigure(WebXmlConfiguration.java:54)
	at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:457)
	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:493)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95)
	at org.eclipse.jetty.server.Server.doStart(Server.java:282)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:56)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:48)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:40)
	at org.aksw.jena_sparql_api.server.utils.FactoryBeanSparqlServer.create(FactoryBeanSparqlServer.java:84)
	at net.sansa_stack.examples.spark.query.Sparklify$.main(Sparklify.scala:60)
	at net.sansa_stack.examples.spark.query.Sparklify.main(Sparklify.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.eclipse.jetty.xml.XmlParser
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 27 more

Is there any easy way to fix it?

RDFByModularityClustering should be able to write to HDFS

This example fails:

import scala.collection.mutable
import org.apache.spark.sql.SparkSession
import org.apache.log4j.{ Level, Logger }
import net.sansa_stack.ml.spark.clustering.RDFByModularityClustering

val graphFile = "hdfs://namenode:8020/data/Clustering_sampledata.nt"
val outputFile = "hdfs://namenode:8020/data/clustering.out"
val numIterations = 10

RDFByModularityClustering(sc, numIterations, graphFile, outputFile)

The stacktrace is:

import scala.collection.mutable
import org.apache.spark.sql.SparkSession
import org.apache.log4j.{Level, Logger}
import net.sansa_stack.ml.spark.clustering.RDFByModularityClustering
graphFile: String = hdfs://namenode:8020/data/Clustering_sampledata.nt
outputFile: String = hdfs://namenode:8020/data/clustering.out
numIterations: Int = 10
The number of nodes in the knowledge graph is 8 and the number of edges is 13.
The first ten edges of the graph look like the following: 
(<http://twitter/user0>,<http://twitter/user1>)
(<http://twitter/user0>,<http://twitter/user2>)
(<http://twitter/user0>,<http://twitter/user3>)
(<http://twitter/user1>,<http://twitter/user2>)
(<http://twitter/user1>,<http://twitter/user3>)
(<http://twitter/user1>,<http://twitter/user6>)
(<http://twitter/user2>,<http://twitter/user3>)
(<http://twitter/user3>,<http://twitter/user4>)
(<http://twitter/user4>,<http://twitter/user5>)
(<http://twitter/user5>,<http://twitter/user6>)
Starting iteration
1
2
3
4
5
6
7
java.io.FileNotFoundException: hdfs:/namenode:8020/data/clustering.out (No such file or directory)
  at java.io.FileOutputStream.open0(Native Method)
  at java.io.FileOutputStream.open(FileOutputStream.java:270)
  at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
  at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
  at java.io.PrintWriter.<init>(PrintWriter.java:263)
  at net.sansa_stack.ml.spark.clustering.RDFByModularityClustering$.apply(RDFByModularityClustering.scala:97)
  ... 52 elided
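
Looking at the trace, the output file is opened with a plain java.io.FileOutputStream/PrintWriter, which only understands local paths, so an hdfs:// URI cannot work. A rough sketch of writing the result through Hadoop's FileSystem API instead (clusters is a placeholder for the computed clusters, not the actual variable name inside RDFByModularityClustering):

import org.apache.hadoop.fs.{FileSystem, Path}
import java.io.PrintWriter

// Resolve the path against the Hadoop configuration of the SparkContext used above,
// so hdfs://, file:// and other supported schemes all work.
val outputPath = new Path(outputFile)
val fs = FileSystem.get(outputPath.toUri, sc.hadoopConfiguration)

// fs.create returns an FSDataOutputStream; the 'true' flag overwrites existing output.
val writer = new PrintWriter(fs.create(outputPath, true))
try {
  clusters.foreach(cluster => writer.println(cluster.mkString(", ")))  // placeholder clusters value
} finally {
  writer.close()
}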

Dependency versions not found in pom.xml

$ mvn install

[ERROR] The project net.sansa-stack:sansa-examples-spark:2016-12 (/home/nilesh/sansa/SANSA-Examples/sansa-examples-spark/pom.xml) has 5 errors
[ERROR] 'dependencies.dependency.version' for net.sansa-stack:sansa-rdf-spark-bundle:jar is missing. @ line 46, column 15
[ERROR] 'dependencies.dependency.version' for net.sansa-stack:sansa-owl-spark:jar is missing. @ line 54, column 15
[ERROR] 'dependencies.dependency.version' for net.sansa-stack:sansa-inference-spark_2.11:jar is missing. @ line 60, column 15
[ERROR] 'dependencies.dependency.version' for net.sansa-stack:sansa-query-spark-bundle:jar is missing. @ line 66, column 15
[ERROR] 'dependencies.dependency.version' for net.sansa-stack:sansa-ml-spark:jar is missing. @ line 72, column 15

[SANSA-Query example] : ERROR DispatcherServlet: Context initialization failed

Hi,

This issue happens when we try to run the Sparqlify example.

ERROR SpringComponentProvider: None or multiple beans found in Spring context for type class org.aksw.jena_sparql_api.web.servlets.ServletSparqlServiceImpl, skipping the type.

For more details, see the full stack trace:

Stack trace

18/04/20 13:11:07 ERROR SpringComponentProvider: None or multiple beans found in Spring context for type class org.aksw.jena_sparql_api.web.servlets.ServletSparqlServiceImpl, skipping the type.
18/04/20 13:11:07 INFO ROOT: Initializing Spring FrameworkServlet 'dispatcherServlet'
18/04/20 13:11:07 INFO DispatcherServlet: FrameworkServlet 'dispatcherServlet': initialization started
18/04/20 13:11:07 INFO AnnotationConfigWebApplicationContext: Refreshing WebApplicationContext for namespace 'dispatcherServlet-servlet': startup date [Fri Apr 20 13:11:07 CEST 2018]; parent: org.springframework.web.context.support.GenericWebApplicationContext@320fc4b0
18/04/20 13:11:07 INFO AnnotationConfigWebApplicationContext: Registering annotated classes: [class org.aksw.jena_sparql_api.web.server.WebMvcConfigSnorql]
18/04/20 13:11:07 INFO AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
18/04/20 13:11:07 WARN AnnotationConfigWebApplicationContext: Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.context.event.internalEventListenerProcessor': Post-processing of merged bean definition failed; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
18/04/20 13:11:07 ERROR DispatcherServlet: Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.context.event.internalEventListenerProcessor': Post-processing of merged bean definition failed; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:526)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
	at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:668)
	at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:540)
	at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:494)
	at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:171)
	at javax.servlet.GenericServlet.init(GenericServlet.java:244)
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:665)
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:423)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:760)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
	at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1515)
	at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1477)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:785)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:133)
	at org.eclipse.jetty.server.Server.start(Server.java:418)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
	at org.eclipse.jetty.server.Server.doStart(Server.java:385)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:56)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:48)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:40)
	at org.aksw.jena_sparql_api.server.utils.FactoryBeanSparqlServer.create(FactoryBeanSparqlServer.java:84)
	at net.sansa_stack.examples.spark.query.Sparqlify$.run(Sparqlify.scala:54)
	at net.sansa_stack.examples.spark.query.Sparqlify$.main(Sparqlify.scala:22)
	at net.sansa_stack.examples.spark.query.Sparqlify.main(Sparqlify.scala)
Caused by: java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
	at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:350)
	at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:992)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:523)
	... 36 more
18/04/20 13:11:07 WARN ROOT: unavailable
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.context.event.internalEventListenerProcessor': Post-processing of merged bean definition failed; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:526)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
	at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:668)
	at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:540)
	at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:494)
	at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:171)
	at javax.servlet.GenericServlet.init(GenericServlet.java:244)
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:665)
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:423)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:760)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
	at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1515)
	at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1477)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:785)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:133)
	at org.eclipse.jetty.server.Server.start(Server.java:418)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
	at org.eclipse.jetty.server.Server.doStart(Server.java:385)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:56)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:48)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:40)
	at org.aksw.jena_sparql_api.server.utils.FactoryBeanSparqlServer.create(FactoryBeanSparqlServer.java:84)
	at net.sansa_stack.examples.spark.query.Sparqlify$.run(Sparqlify.scala:54)
	at net.sansa_stack.examples.spark.query.Sparqlify$.main(Sparqlify.scala:22)
	at net.sansa_stack.examples.spark.query.Sparqlify.main(Sparqlify.scala)
Caused by: java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
	at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:350)
	at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:992)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:523)
	... 36 more
18/04/20 13:11:07 WARN WebAppContext: Failed startup of context o.e.j.w.WebAppContext@28200d43{/,file:///tmp/jetty-0.0.0.0-7531-jena-sparql-api-server-utils-3.5.0-2.jar-_-any-8595261063220975453.dir/webapp/,UNAVAILABLE}{file:/home/gezim/.m2/repository/org/aksw/jena-sparql-api/jena-sparql-api-server-utils/3.5.0-2/jena-sparql-api-server-utils-3.5.0-2.jar}
javax.servlet.ServletException: dispatcherServlet@7ef5559e==org.springframework.web.servlet.DispatcherServlet,jsp=null,order=1,inst=false
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:686)
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:423)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:760)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
	at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1515)
	at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1477)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:785)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:133)
	at org.eclipse.jetty.server.Server.start(Server.java:418)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
	at org.eclipse.jetty.server.Server.doStart(Server.java:385)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:56)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:48)
	at org.aksw.jena_sparql_api.web.server.ServerUtils.startServer(ServerUtils.java:40)
	at org.aksw.jena_sparql_api.server.utils.FactoryBeanSparqlServer.create(FactoryBeanSparqlServer.java:84)
	at net.sansa_stack.examples.spark.query.Sparqlify$.run(Sparqlify.scala:54)
	at net.sansa_stack.examples.spark.query.Sparqlify$.main(Sparqlify.scala:22)
	at net.sansa_stack.examples.spark.query.Sparqlify.main(Sparqlify.scala)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.context.event.internalEventListenerProcessor': Post-processing of merged bean definition failed; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:526)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
	at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:668)
	at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:540)
	at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:494)
	at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:171)
	at javax.servlet.GenericServlet.init(GenericServlet.java:244)
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:665)
	... 22 more
Caused by: java.lang.NoSuchMethodError: org.springframework.beans.factory.annotation.InjectionMetadata.<init>(Ljava/lang/Class;)V
	at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:350)
	at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:992)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:523)
	... 36 more
18/04/20 13:11:07 INFO AbstractConnector: Started ServerConnector@3a36da5e{HTTP/1.1,[http/1.1]}{0.0.0.0:7531}
18/04/20 13:11:07 INFO Server: Started @9377ms

It seems to be related to Sparqlify's servlet engine. @Aklakan, could you have a look and provide more details on how to debug this issue?

Best regards,
Gezim

Would it be possible to use SANSA-RDF & SANSA-Query in PySpark?

Hello,

I am interested in using the SANSA API and I hope you can help me.

I am planning to use SANSA-RDF to read Turtle and RDF/XML files into Spark and to query them with the SANSA-Query library. Would it be possible to develop PySpark (Python) code using these libraries? Are there any PySpark (Python) examples?

Thanks

PageRank example -- object not serializable

app_1  | Exception in thread "main" org.apache.spark.SparkException: Task not serializable
app_1  | 	at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
app_1  | 	at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
app_1  | 	at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
app_1  | 	at org.apache.spark.SparkContext.clean(SparkContext.scala:2101)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
app_1  | 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
app_1  | 	at org.apache.spark.rdd.RDD.map(RDD.scala:369)
app_1  | 	at net.sansa_stack.rdf.spark.model.GraphXGraphOps$class.makeGraph(GraphXGraphOps.scala:58)
app_1  | 	at net.sansa_stack.rdf.spark.model.JenaSparkGraphXOps$$anon$2.makeGraph(JenaSparkRDD.scala:43)
app_1  | 	at net.sansa_stack.examples.spark.rdf.PageRank$.main(PageRank.scala:48)
app_1  | 	at net.sansa_stack.examples.spark.rdf.PageRank.main(PageRank.scala)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
app_1  | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
app_1  | 	at java.lang.reflect.Method.invoke(Method.java:498)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
app_1  | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
app_1  | Caused by: java.io.NotSerializableException: net.sansa_stack.rdf.spark.model.JenaSparkGraphXOps$$anon$2
app_1  | Serialization stack:
app_1  | 	- object not serializable (class: net.sansa_stack.rdf.spark.model.JenaSparkGraphXOps$$anon$2, value: net.sansa_stack.rdf.spark.model.JenaSparkGraphXOps$$anon$2@1e1b061)
app_1  | 	- field (class: net.sansa_stack.rdf.spark.model.GraphXGraphOps$$anonfun$3, name: $outer, type: interface net.sansa_stack.rdf.spark.model.GraphXGraphOps)
app_1  | 	- object (class net.sansa_stack.rdf.spark.model.GraphXGraphOps$$anonfun$3, <function1>)
app_1  | 	at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
app_1  | 	at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
app_1  | 	at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
app_1  | 	at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
app_1  | 	... 22 more
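
The serialization stack above shows the closure's $outer field pointing at the non-serializable JenaSparkGraphXOps instance, which is the classic Spark "Task not serializable" pattern: a lambda inside a method reads a field and therefore captures `this`. The following sketch is a generic, hypothetical illustration of that pattern and of the usual remedies; UnsafeOps, SafeOps and the field names are invented for this sketch and are not SANSA's actual classes.

import org.apache.spark.sql.SparkSession

// Hypothetical illustration: the lambda reads a field, so it captures `this` ($outer).
// If the enclosing class is not serializable, Spark fails with "Task not serializable"
// when the closure is shipped to executors.
class UnsafeOps {                                   // not Serializable
  val prefix = "http://example.org/"
  def tag(spark: SparkSession) =
    spark.sparkContext.parallelize(Seq("a", "b")).map(s => prefix + s)  // captures `this`
}

// Usual remedies: make the enclosing class Serializable and/or copy the needed field
// into a local val so the closure no longer references `this` at all.
class SafeOps extends Serializable {
  val prefix = "http://example.org/"
  def tag(spark: SparkSession) = {
    val p = prefix                                  // local copy; closure captures only `p`
    spark.sparkContext.parallelize(Seq("a", "b")).map(s => p + s)
  }
}

object ClosureCaptureDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("closure-capture-demo").master("local[*]").getOrCreate()
    println(new SafeOps().tag(spark).collect().mkString(", "))
    spark.stop()
  }
}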

NTripleReader cannot load a file from HDFS

input: String = hdfs://namenode:8020/data/rdf.nt
<console>:120: error: type mismatch;
 found   : java.net.URI
 required: java.io.File
       val triplesRDD = NTripleReader.load(spark, JavaURI.create(input))
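
The type mismatch shows that the NTripleReader.load overload used here expects a java.io.File, which cannot point at an hdfs:// location. One possible workaround is to read the file as plain text and parse each line with Jena directly; the sketch below is illustrative only. The HDFS path is taken from the report, while the session setup and the line-by-line parsing are assumptions of this sketch, not the library's internal code.

import java.io.ByteArrayInputStream

import org.apache.jena.riot.{Lang, RDFDataMgr}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("nt-from-hdfs").getOrCreate()
val input = "hdfs://namenode:8020/data/rdf.nt"

// Read the N-Triples file as plain text (textFile understands hdfs:// paths) and
// parse every non-empty, non-comment line into a Jena Triple.
val triplesRDD = spark.sparkContext
  .textFile(input)
  .filter(line => line.trim.nonEmpty && !line.startsWith("#"))
  .map { line =>
    RDFDataMgr.createIteratorTriples(
      new ByteArrayInputStream(line.getBytes("UTF-8")), Lang.NTRIPLES, null).next()
  }

println(triplesRDD.count())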

Sparklify cannot read the example rdf.nt file

sparklify_1                      | 17/05/11 13:19:36 WARN function.FunctionRegistry: Class org.aksw.sparqlify.core.RdfTerm is not a Function
sparklify_1                      | 17/05/11 13:19:36 WARN cast.TypeSystemImpl: Skipping: date, date
sparklify_1                      | 17/05/11 13:19:36 WARN cast.TypeSystemImpl: Skipping: integer, integer
sparklify_1                      | 17/05/11 13:19:36 WARN cast.TypeSystemImpl: Skipping: float, float
sparklify_1                      | 17/05/11 13:19:36 WARN cast.TypeSystemImpl: Skipping: geography, geography
sparklify_1                      | 17/05/11 13:19:36 WARN cast.TypeSystemImpl: Skipping: geometry, geometry
sparklify_1                      | 17/05/11 13:19:36 WARN cast.TypeSystemImpl: Skipping: timestamp, timestamp
sparklify_1                      | 17/05/11 13:19:36 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on 172.28.0.8:34731 in memory (size: 3.0 KB, free: 366.3 MB)
sparklify_1                      | 17/05/11 13:19:36 INFO storage.BlockManagerInfo: Removed broadcast_2_piece0 on 172.28.0.8:34731 in memory (size: 2.4 KB, free: 366.3 MB)
sparklify_1                      | 17/05/11 13:19:36 INFO spark.ContextCleaner: Cleaned shuffle 0
sparklify_1                      | Processing: RdfPartitionDefault(1,http://commons.dbpedia.org/property/source,2,http://www.w3.org/2001/XMLSchema#string,true)
sparklify_1                      | 17/05/11 13:19:40 INFO execution.SparkSqlParser: Parsing command: source
sparklify_1                      | Processing: RdfPartitionDefault(1,http://commons.dbpedia.org/property/otherVersions,2,http://www.w3.org/2001/XMLSchema#string,true)
sparklify_1                      | 17/05/11 13:19:40 INFO execution.SparkSqlParser: Parsing command: otherVersions
sparklify_1                      | Processing: RdfPartitionDefault(1,http://commons.dbpedia.org/property/eo,2,http://www.w3.org/2001/XMLSchema#string,true)
sparklify_1                      | 17/05/11 13:19:40 INFO execution.SparkSqlParser: Parsing command: eo
sparklify_1                      | Processing: RdfPartitionDefault(1,http://commons.dbpedia.org/property/width,2,http://dbpedia.org/datatype/perCent,false)
sparklify_1                      | Exception in thread "main" java.lang.RuntimeException: Unsupported object type: http://dbpedia.org/datatype/perCent
sparklify_1                      | 	at net.sansa_stack.rdf.partition.core.RdfPartitionerDefault$.determineLayoutDatatype(RdfPartitionerDefault.scala:103)
sparklify_1                      | 	at net.sansa_stack.rdf.partition.core.RdfPartitionerDefault$.determineLayout(RdfPartitionerDefault.scala:84)
sparklify_1                      | 	at net.sansa_stack.rdf.partition.core.RdfPartitionDefault.layout(RdfPartitionDefault.scala:12)
sparklify_1                      | 	at net.sansa_stack.query.spark.server.SparqlifyUtils3$$anonfun$1.apply(SparqlifyUtils3.scala:53)
sparklify_1                      | 	at net.sansa_stack.query.spark.server.SparqlifyUtils3$$anonfun$1.apply(SparqlifyUtils3.scala:42)
sparklify_1                      | 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
sparklify_1                      | 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
sparklify_1                      | 	at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
sparklify_1                      | 	at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
sparklify_1                      | 	at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
sparklify_1                      | 	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
sparklify_1                      | 	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
sparklify_1                      | 	at net.sansa_stack.query.spark.server.SparqlifyUtils3$.createSparqlSqlRewriter(SparqlifyUtils3.scala:42)
sparklify_1                      | 	at net.sansa_stack.examples.spark.query.Sparklify$.main(Sparklify.scala:53)
sparklify_1                      | 	at net.sansa_stack.examples.spark.query.Sparklify.main(Sparklify.scala)
sparklify_1                      | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sparklify_1                      | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sparklify_1                      | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
sparklify_1                      | 	at java.lang.reflect.Method.invoke(Method.java:498)
sparklify_1                      | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
sparklify_1                      | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
sparklify_1                      | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
sparklify_1                      | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
sparklify_1                      | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
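
The partitioner aborts as soon as it meets a literal typed with a datatype it does not know, here the custom http://dbpedia.org/datatype/perCent. A pragmatic workaround is to drop such triples before partitioning. The snippet below is a minimal sketch of that filter on two toy triples; the `supported` set is an illustrative assumption rather than the partitioner's exact list, and the same `keep` predicate would be applied to the RDD[Triple] before it is handed to the partitioner.

import org.apache.jena.datatypes.TypeMapper
import org.apache.jena.graph.{NodeFactory, Triple}

// Two toy triples: a plain string literal (handled) and a literal typed with the
// custom datatype from the log above, http://dbpedia.org/datatype/perCent (not handled).
val tm = TypeMapper.getInstance()
val ok = Triple.create(
  NodeFactory.createURI("http://example.org/s1"),
  NodeFactory.createURI("http://commons.dbpedia.org/property/source"),
  NodeFactory.createLiteral("text"))
val bad = Triple.create(
  NodeFactory.createURI("http://example.org/s2"),
  NodeFactory.createURI("http://commons.dbpedia.org/property/width"),
  NodeFactory.createLiteral("50", tm.getSafeTypeByName("http://dbpedia.org/datatype/perCent")))

// Illustrative subset of datatypes assumed to be handled by the default partitioner.
val supported = Set(
  "http://www.w3.org/2001/XMLSchema#string",
  "http://www.w3.org/2001/XMLSchema#int",
  "http://www.w3.org/2001/XMLSchema#integer",
  "http://www.w3.org/2001/XMLSchema#double",
  "http://www.w3.org/2001/XMLSchema#boolean")

// Keep URIs/blank nodes and literals whose datatype is in the supported set.
def keep(t: Triple): Boolean =
  !t.getObject.isLiteral ||
    t.getObject.getLiteralDatatypeURI == null ||
    supported.contains(t.getObject.getLiteralDatatypeURI)

val filtered = Seq(ok, bad).filter(keep)   // keeps only `ok`
println(filtered)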

TripleReader/TripleWriter example fails

When I have hdfs://namenode:8020/usr/hue/rdf.nt as input:

app_1  | Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 5: hdfs:
app_1  | 	at org.apache.hadoop.fs.Path.initialize(Path.java:205)
app_1  | 	at org.apache.hadoop.fs.Path.<init>(Path.java:171)
app_1  | 	at org.apache.hadoop.fs.Path.<init>(Path.java:93)
app_1  | 	at org.apache.hadoop.fs.Globber.glob(Globber.java:211)
app_1  | 	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1676)
app_1  | 	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
app_1  | 	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
app_1  | 	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
app_1  | 	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
app_1  | 	at scala.Option.getOrElse(Option.scala:121)
app_1  | 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1333)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
app_1  | 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
app_1  | 	at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
app_1  | 	at net.sansa_stack.examples.spark.rdf.TripleReader$.main(TripleReader.scala:42)
app_1  | 	at net.sansa_stack.examples.spark.rdf.TripleReader.main(TripleReader.scala)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
app_1  | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
app_1  | 	at java.lang.reflect.Method.invoke(Method.java:498)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
app_1  | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
app_1  | Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 5: hdfs:
app_1  | 	at java.net.URI$Parser.fail(URI.java:2848)
app_1  | 	at java.net.URI$Parser.failExpecting(URI.java:2854)
app_1  | 	at java.net.URI$Parser.parse(URI.java:3057)
app_1  | 	at java.net.URI.<init>(URI.java:746)
app_1  | 	at org.apache.hadoop.fs.Path.initialize(Path.java:202)
app_1  | 	... 38 more
app_1  | 17/05/09 13:25:31 INFO spark.SparkContext: Invoking stop() from shutdown hook

Using a local file as input gets me a bit further:

app_1  | 17/05/09 13:29:53 INFO executor.Executor: Adding file:/tmp/spark-e183856c-413e-4154-a887-810cb784a3ba/userFiles-e91abb7b-a606-4f99-9219-002749c9c078/sansa-examples-spark-2016-12.jar to class loader
app_1  | 17/05/09 13:29:53 INFO rdd.HadoopRDD: Input split: file:/rdf/rdf.nt:0+8392
app_1  | 17/05/09 13:29:53 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
app_1  | 17/05/09 13:29:53 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
app_1  | 17/05/09 13:29:53 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
app_1  | 17/05/09 13:29:53 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
app_1  | 17/05/09 13:29:53 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
app_1  | 17/05/09 13:29:54 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
app_1  | java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
app_1  | 	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
app_1  | 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
app_1  | 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
app_1  | 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
app_1  | 	at scala.collection.AbstractIterator.to(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
app_1  | 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
app_1  | 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 25 more
app_1  | 17/05/09 13:29:54 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
app_1  | 	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
app_1  | 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
app_1  | 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
app_1  | 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
app_1  | 	at scala.collection.AbstractIterator.to(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
app_1  | 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
app_1  | 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 25 more
app_1  | 
app_1  | 17/05/09 13:29:54 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
app_1  | 17/05/09 13:29:54 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
app_1  | 17/05/09 13:29:54 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
app_1  | 17/05/09 13:29:54 INFO scheduler.DAGScheduler: ResultStage 0 (take at TripleReader.scala:40) failed in 3.923 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
app_1  | 	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
app_1  | 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
app_1  | 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
app_1  | 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
app_1  | 	at scala.collection.AbstractIterator.to(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
app_1  | 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
app_1  | 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 25 more
app_1  | 
app_1  | Driver stacktrace:
app_1  | 17/05/09 13:29:54 INFO scheduler.DAGScheduler: Job 0 failed: take at TripleReader.scala:40, took 3.991583 s
app_1  | Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
app_1  | 	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
app_1  | 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
app_1  | 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
app_1  | 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
app_1  | 	at scala.collection.AbstractIterator.to(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
app_1  | 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
app_1  | 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 25 more
app_1  | 
app_1  | Driver stacktrace:
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
app_1  | 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
app_1  | 	at scala.Option.foreach(Option.scala:257)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
app_1  | 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
app_1  | 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
app_1  | 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
app_1  | 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
app_1  | 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
app_1  | 	at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
app_1  | 	at net.sansa_stack.examples.spark.rdf.TripleReader$.main(TripleReader.scala:40)
app_1  | 	at net.sansa_stack.examples.spark.rdf.TripleReader.main(TripleReader.scala)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
app_1  | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
app_1  | 	at java.lang.reflect.Method.invoke(Method.java:498)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
app_1  | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
app_1  | Caused by: java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
app_1  | 	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
app_1  | 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
app_1  | 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
app_1  | 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
app_1  | 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
app_1  | 	at scala.collection.AbstractIterator.to(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
app_1  | 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
app_1  | 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
app_1  | 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1354)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 25 more
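
The NullPointerException thrown while JenaSystem.init() runs on the executors is commonly traced to Jena's subsystems initializing in the wrong order, for example when a fat jar merges the Jena artifacts without preserving their META-INF/services entries, so checking how the application jar is assembled is usually the first step. As a defensive measure one can also force Jena initialization explicitly inside each task before the first Jena call. The helper below is a hypothetical sketch of that pattern; withJenaInit is not a SANSA API.

import org.apache.jena.system.JenaSystem
import org.apache.spark.rdd.RDD

// Hypothetical helper: run a partition-level function only after Jena has been
// initialised inside the executor JVM. JenaSystem.init() is idempotent, so calling
// it once per partition is cheap.
def withJenaInit[T, U: scala.reflect.ClassTag](rdd: RDD[T])(f: Iterator[T] => Iterator[U]): RDD[U] =
  rdd.mapPartitions { it =>
    JenaSystem.init()
    f(it)
  }

In the reader example this would wrap the partition that performs the Jena-based parsing, e.g. withJenaInit(lines)(_.map(parseLine)), where parseLine stands in for that parsing step and is likewise an assumption of this sketch.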

ParseException when trying to run Sparklify example

I tried to run the net.sansa_stack.examples.spark.query.Sparklify example class with my own file in N-Triples (.nt) format, and it gives back the following stack trace:

Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '-' expecting (line 1, pos 3)

== SQL ==
rdf-schema#subClassOf
---^^^

at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:197)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:99)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:46)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseTableIdentifier(ParseDriver.scala:48)
at org.apache.spark.sql.Dataset$$anonfun$createOrReplaceTempView$1.apply(Dataset.scala:2407)
at org.apache.spark.sql.Dataset$$anonfun$createOrReplaceTempView$1.apply(Dataset.scala:2405)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:2589)
at org.apache.spark.sql.Dataset.createOrReplaceTempView(Dataset.scala:2405)
at net.sansa_stack.query.spark.sparqlify.SparqlifyUtils3$$anonfun$1.apply(SparqlifyUtils3.scala:62)
at net.sansa_stack.query.spark.sparqlify.SparqlifyUtils3$$anonfun$1.apply(SparqlifyUtils3.scala:45)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at net.sansa_stack.query.spark.sparqlify.SparqlifyUtils3$.createSparqlSqlRewriter(SparqlifyUtils3.scala:45)
at net.sansa_stack.examples.spark.query.Sparklify$.main(Sparklify.scala:53)
at net.sansa_stack.examples.spark.query.Sparklify.main(Sparklify.scala)
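
The parse error occurs because the local name of a predicate, here "rdf-schema#subClassOf", ends up being used as a Spark SQL view name, and Spark's SQL parser rejects '-' and '#'. Whatever the eventual fix inside Sparklify, the underlying Spark behaviour and the usual sanitisation idea can be shown with a self-contained sketch; the DataFrame, the view names and the replacement rule below are assumptions for illustration only.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("view-name-sanitizing").master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("s1", "o1"), ("s2", "o2")).toDF("s", "o")

val rawName  = "rdf-schema#subClassOf"                    // rejected: mismatched input '-'
val safeName = rawName.replaceAll("[^A-Za-z0-9_]", "_")   // "rdf_schema_subClassOf"

df.createOrReplaceTempView(safeName)                      // works with the sanitised name
spark.sql(s"SELECT COUNT(*) FROM $safeName").show()
spark.stop()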

MineRules example throws an Exception: java.lang.IncompatibleClassChangeError

app_1  | Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class
app_1  | 	at java.lang.ClassLoader.defineClass1(Native Method)
app_1  | 	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
app_1  | 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
app_1  | 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
app_1  | 	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
app_1  | 	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
app_1  | 	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
app_1  | 	at java.security.AccessController.doPrivileged(Native Method)
app_1  | 	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
app_1  | 	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
app_1  | 	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
app_1  | 	at java.lang.ClassLoader.defineClass1(Native Method)
app_1  | 	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
app_1  | 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
app_1  | 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
app_1  | 	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
app_1  | 	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
app_1  | 	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
app_1  | 	at java.security.AccessController.doPrivileged(Native Method)
app_1  | 	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
app_1  | 	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
app_1  | 	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
app_1  | 	at com.typesafe.scalalogging.slf4j.Logger$.apply(Logger.scala:31)
app_1  | 	at net.sansa_stack.ml.spark.mining.amieSpark.RDFGraphLoader$.<init>(RDFGraphLoader.scala:17)
app_1  | 	at net.sansa_stack.ml.spark.mining.amieSpark.RDFGraphLoader$.<clinit>(RDFGraphLoader.scala)
app_1  | 	at net.sansa_stack.examples.spark.ml.mining.MineRules$.main(MineRules.scala:54)
app_1  | 	at net.sansa_stack.examples.spark.ml.mining.MineRules.main(MineRules.scala)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
app_1  | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
app_1  | 	at java.lang.reflect.Method.invoke(Method.java:498)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
app_1  | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

query.Sparklify example fails (NullPointerException)

app_1  | 17/05/09 13:49:32 INFO rdd.HadoopRDD: Input split: file:/rdf/rdf.nt:0+8392
app_1  | 17/05/09 13:49:32 INFO rdd.HadoopRDD: Input split: file:/rdf/rdf.nt:8392+8392
app_1  | 17/05/09 13:49:32 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
app_1  | 17/05/09 13:49:32 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
app_1  | 17/05/09 13:49:32 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
app_1  | 17/05/09 13:49:32 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
app_1  | 17/05/09 13:49:32 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
app_1  | 17/05/09 13:49:32 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
app_1  | java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 14 more
app_1  | 17/05/09 13:49:32 ERROR executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
app_1  | java.lang.NoClassDefFoundError: Could not initialize class org.apache.jena.riot.RDFDataMgr
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | 17/05/09 13:49:32 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 14 more
app_1  | 
app_1  | 17/05/09 13:49:32 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
app_1  | 17/05/09 13:49:32 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
app_1  | 17/05/09 13:49:32 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost, executor driver): java.lang.NoClassDefFoundError: Could not initialize class org.apache.jena.riot.RDFDataMgr
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | 
app_1  | 17/05/09 13:49:32 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
app_1  | 17/05/09 13:49:32 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
app_1  | 17/05/09 13:49:32 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (distinct at RdfPartitionUtilsSpark.scala:22) failed in 0.834 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 14 more
app_1  | 
app_1  | Driver stacktrace:
app_1  | 17/05/09 13:49:32 INFO scheduler.DAGScheduler: Job 0 failed: collect at RdfPartitionUtilsSpark.scala:22, took 0.939503 s
app_1  | Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 14 more
app_1  | 
app_1  | Driver stacktrace:
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
app_1  | 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
app_1  | 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
app_1  | 	at scala.Option.foreach(Option.scala:257)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
app_1  | 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
app_1  | 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
app_1  | 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
app_1  | 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
app_1  | 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
app_1  | 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
app_1  | 	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
app_1  | 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
app_1  | 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
app_1  | 	at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
app_1  | 	at net.sansa_stack.rdf.spark.partition.core.RdfPartitionUtilsSpark$.partitionGraphArray(RdfPartitionUtilsSpark.scala:22)
app_1  | 	at net.sansa_stack.rdf.spark.partition.core.RdfPartitionUtilsSpark$.partitionGraph(RdfPartitionUtilsSpark.scala:17)
app_1  | 	at net.sansa_stack.examples.spark.query.Sparklify$.main(Sparklify.scala:54)
app_1  | 	at net.sansa_stack.examples.spark.query.Sparklify.main(Sparklify.scala)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
app_1  | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
app_1  | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
app_1  | 	at java.lang.reflect.Method.invoke(Method.java:498)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
app_1  | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
app_1  | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
app_1  | Caused by: java.lang.ExceptionInInitializerError
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:27)
app_1  | 	at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:26)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
app_1  | 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
app_1  | 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
app_1  | 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
app_1  | 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
app_1  | 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
app_1  | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
app_1  | 	at java.lang.Thread.run(Thread.java:745)
app_1  | Caused by: java.lang.NullPointerException
app_1  | 	at org.apache.jena.tdb.sys.EnvTDB.processGlobalSystemProperties(EnvTDB.java:33)
app_1  | 	at org.apache.jena.tdb.TDB.init(TDB.java:248)
app_1  | 	at org.apache.jena.tdb.sys.InitTDB.start(InitTDB.java:29)
app_1  | 	at org.apache.jena.system.JenaSystem.lambda$init$1(JenaSystem.java:111)
app_1  | 	at java.util.ArrayList.forEach(ArrayList.java:1249)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:186)
app_1  | 	at org.apache.jena.system.JenaSystem.forEach(JenaSystem.java:163)
app_1  | 	at org.apache.jena.system.JenaSystem.init(JenaSystem.java:109)
app_1  | 	at org.apache.jena.riot.RDFDataMgr.<clinit>(RDFDataMgr.java:81)
app_1  | 	... 14 more
app_1  | 17/05/09 13:49:32 INFO spark.SparkContext: Invoking stop() from shutdown hook
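
The failure above bottoms out in a NullPointerException thrown from org.apache.jena.tdb.sys.EnvTDB while Jena TDB is initialized during the static initialization of RDFDataMgr on the executors (the same Caused-by chain appears twice: once in the task failure and once in the driver-side summary). This pattern typically points to Jena's subsystem initialization running in an unexpected order, for example when classes are loaded concurrently on executors or from a shaded fat jar that lost the META-INF service files. A workaround that is often suggested for Spark + Jena setups (not verified against this example) is to force Jena's initialization explicitly before any RDF parsing happens, on the driver and, best effort, on the executors:

import org.apache.jena.system.JenaSystem
import org.apache.spark.SparkContext

// Hedged workaround sketch: JenaSystem.init() is idempotent, so calling it eagerly
// avoids the lazy, possibly concurrent initialization triggered by RDFDataMgr.
def initJena(sc: SparkContext): Unit = {
  JenaSystem.init() // driver side
  // Best-effort executor-side initialization: run init() once per partition of a dummy RDD.
  sc.parallelize(0 until sc.defaultParallelism, sc.defaultParallelism)
    .foreachPartition(_ => JenaSystem.init())
}

If the example is packaged as a fat jar for spark-submit, making sure the maven-shade-plugin keeps Jena's service files (for example via the ServicesResourceTransformer) is another commonly recommended step.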

Inferencing example cannot read/write to HDFS

Running this code sample results in failure:

import java.io.File
import scala.collection.mutable
import org.apache.spark.sql.SparkSession
import net.sansa_stack.rdf.spark.model.JenaSparkRDDOps
import net.sansa_stack.inference.spark.RDFGraphMaterializer
import net.sansa_stack.inference.spark.data.RDFGraphLoader
import net.sansa_stack.inference.spark.forwardchaining.ForwardRuleReasonerRDFS
import net.sansa_stack.inference.rules.ReasoningProfile
import net.sansa_stack.inference.spark.forwardchaining.ForwardRuleReasonerOWLHorst
import net.sansa_stack.inference.rules.ReasoningProfile._
import net.sansa_stack.inference.spark.data.RDFGraphWriter

// load triples from disk
val input = "hdfs://namenode:8020/data/rdf.nt"
val output = "hdfs://namenode:8020/data/output/"
val argprofile = "rdfs"
val profile = argprofile match {
  case "rdfs"      => ReasoningProfile.RDFS
  case "owl-horst" => ReasoningProfile.OWL_HORST
}

val graph = RDFGraphLoader.loadFromFile(new File(input).getAbsolutePath, sc, 4)
println(s"|G|=${graph.size()}")

// create reasoner
val reasoner = profile match {
  case RDFS      => new ForwardRuleReasonerRDFS(sc)
  case OWL_HORST => new ForwardRuleReasonerOWLHorst(sc)
}

// compute inferred graph
val inferredGraph = reasoner.apply(graph)
println(s"|G_inferred|=${inferredGraph.size()}")

// write triples to disk
RDFGraphWriter.writeGraphToFile(inferredGraph, new File(output).getAbsolutePath)

Here is the stacktrace:

import java.io.File
import scala.collection.mutable
import org.apache.spark.sql.SparkSession
import net.sansa_stack.rdf.spark.model.JenaSparkRDDOps
import net.sansa_stack.inference.spark.RDFGraphMaterializer
import net.sansa_stack.inference.spark.data.RDFGraphLoader
import net.sansa_stack.inference.spark.forwardchaining.ForwardRuleReasonerRDFS
import net.sansa_stack.inference.rules.ReasoningProfile
import net.sansa_stack.inference.spark.forwardchaining.ForwardRuleReasonerOWLHorst
import net.sansa_stack.inference.rules.ReasoningProfile._
import net.sansa_stack.inference.spark.data.RDFGraphWriter
input: String = hdfs://namenode:8020/data/rdf.nt
output: String = hdfs://namenode:8020/data/output/
argprofile: String = rdfs
profile: net.sansa_stack.inference.rules.ReasoningProfile.Value = RDFS
graph: net.sansa_stack.inference.spark.data.RDFGraph = RDFGraph(MapPartitionsRDD[25] at map at RDFGraphLoader.scala:28)
java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 5: hdfs:
  at org.apache.hadoop.fs.Path.initialize(Path.java:205)
  at org.apache.hadoop.fs.Path.<init>(Path.java:171)
  at org.apache.hadoop.fs.Path.<init>(Path.java:93)
  at org.apache.hadoop.fs.Globber.glob(Globber.java:211)
  at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1676)
  at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
  at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
  at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
  at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
  at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
  at net.sansa_stack.inference.spark.data.RDFGraph.size(RDFGraph.scala:68)
  ... 52 elided
Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 5: hdfs:
  at java.net.URI$Parser.fail(URI.java:2848)
  at java.net.URI$Parser.failExpecting(URI.java:2854)
  at java.net.URI$Parser.parse(URI.java:3057)
  at java.net.URI.<init>(URI.java:746)
  at org.apache.hadoop.fs.Path.initialize(Path.java:202)
  ... 82 more
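
The URISyntaxException ("Expected scheme-specific part at index 5: hdfs:") indicates that the HDFS URI is mangled before it ever reaches Hadoop: wrapping hdfs://namenode:8020/... in java.io.File collapses the double slash, and getAbsolutePath prepends the driver's local working directory, so Hadoop's globber ends up trying to parse the bare segment "hdfs:" as a URI. A likely fix (not verified against this SANSA version, but type-compatible, since loadFromFile already accepts a path string) is to pass the URI untouched to the loader and writer:

// Hedged sketch: keep the HDFS URI intact instead of routing it through java.io.File.
// Assumes RDFGraphLoader.loadFromFile and RDFGraphWriter.writeGraphToFile accept any
// Hadoop-compatible path string (local path or hdfs:// URI).
val input  = "hdfs://namenode:8020/data/rdf.nt"
val output = "hdfs://namenode:8020/data/output/"

val graph = RDFGraphLoader.loadFromFile(input, sc, 4)   // pass the URI string as-is
val inferredGraph = reasoner.apply(graph)
RDFGraphWriter.writeGraphToFile(inferredGraph, output)  // write back to HDFS directly

The same applies to any other example that builds paths with java.io.File: constructing Hadoop paths as plain strings (or org.apache.hadoop.fs.Path instances) keeps the scheme and authority intact.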
