sparkProjectTemplate.g8's Introduction

sparkProjectTemplate

A Giter8 template for Scala Spark Projects.

What this gives you

This template bootstraps a new Spark project with everyone's "favourite" wordcount example (modified to filter stop words). You can then replace the wordcount example as desired and customize the Spark components your project needs.
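
For orientation, here is a minimal sketch of what that generated word count looks like. The WordCount.withStopWordsFiltered name matches the generated code (it shows up in the stack traces later on this page); the exact signature, defaults, and body here are assumptions:

import org.apache.spark.rdd.RDD

object WordCount {
  // Sketch of the template's stop-word-filtered word count; the real
  // WordCount.scala in a generated project may differ in its details.
  def withStopWordsFiltered(
      rdd: RDD[String],
      separators: Array[Char] = " ".toCharArray,
      stopWords: Set[String] = Set("the")): RDD[(String, Int)] = {
    val tokens = rdd.flatMap(_.split(separators).map(_.trim.toLowerCase))
    tokens
      .filter(token => token.nonEmpty && !stopWords.contains(token))
      .map(token => (token, 1))
      .reduceByKey(_ + _) // pair-RDD ops come in via Spark's built-in implicits
  }
}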

To encourage good software development practice, the generated project starts at 100% code coverage (i.e. one test :p). While coverage is expected to decrease as you add code, we hope you keep testing with the provided spark-testing-base library or a similar option.
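
As a sketch of what that single test can look like with spark-testing-base (WordCountTest and SharedSparkContext match the generated project; the test body is an assumption, and on older ScalaTest releases the import is org.scalatest.FunSuite rather than org.scalatest.funsuite.AnyFunSuite):

import com.holdenkarau.spark.testing.SharedSparkContext
import org.scalatest.funsuite.AnyFunSuite

class WordCountTest extends AnyFunSuite with SharedSparkContext {
  test("word count filters stop words") {
    // sc is the SparkContext provided by SharedSparkContext
    val input = sc.parallelize(Seq("the cat sat", "the cat"))
    val counts = WordCount.withStopWordsFiltered(input).collect().toMap
    assert(counts("cat") === 2)     // counted across both lines
    assert(!counts.contains("the")) // "the" is filtered as a stop word
  }
}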

Creating a new project from this template

Have g8 installed? You can run it with:

g8 holdenk/sparkProjectTemplate --name=projectname --organization=com.my.org --sparkVersion=2.2.0

Using sbt (0.13.13+), just run:

sbt new holdenk/sparkProjectTemplate.g8

Executing the created project

First go to the project you created:

cd projectname

You can test the example Spark job included in this template locally, directly from sbt:

sbt "run inputFile.txt outputFile.txt"

then choose CountingLocalApp when prompted. (Create inputFile.txt with some sample text first; as the issues below show, the job fails if the input path does not exist.)
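
For context, the template generates two entry points that share a common Runner. Roughly, and only as a sketch (the object names and app names match the generated CountingApp.scala and the logs below; the bodies are assumptions):

import org.apache.spark.{SparkConf, SparkContext}

// CountingLocalApp hard-codes a local master so `sbt run` works on its own;
// CountingApp leaves the master unset so spark-submit's --master flag decides.
object CountingLocalApp extends App {
  val (inputFile, outputFile) = (args(0), args(1))
  val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("my awesome app")
  Runner.run(conf, inputFile, outputFile)
}

object CountingApp extends App {
  val (inputFile, outputFile) = (args(0), args(1))
  val conf = new SparkConf().setAppName("the_awesome_app")
  Runner.run(conf, inputFile, outputFile)
}

object Runner {
  def run(conf: SparkConf, inputFile: String, outputFile: String): Unit = {
    val sc = new SparkContext(conf)
    val rdd = sc.textFile(inputFile)
    WordCount.withStopWordsFiltered(rdd).saveAsTextFile(outputFile)
  }
}

This split is why `sbt run` works without a cluster, while the assembled jar defers the choice of master to spark-submit.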

You can also assemble a fat jar (see sbt-assembly for configuration details):

sbt assembly

then submit it as usual to your Spark cluster:

/path/to/spark-home/bin/spark-submit \
  --class <package-name>.CountingApp \
  --name the_awesome_app \
  --master <master url> \
  ./target/scala-2.11/<jar name> \
  <input file> <output file>

Related

Want to build your application using the Spark Job Server? The spark-jobserver.g8 template can help you get started too.

License

This project is available under your choice of Apache 2 or CC0 1.0. See https://www.apache.org/licenses/LICENSE-2.0 or https://creativecommons.org/publicdomain/zero/1.0/ respectively. This template is distributed without any warranty.

sparkProjectTemplate.g8's People

Contributors

aaabramov, abhijitsingh86, ericksepulveda, holdenk, nielsreijers, oripwk, oschrenk, sv3ndk, veinhorn

sparkProjectTemplate.g8's Issues

Correction in build.sbt

Hi Holden,

I had a little problem when I tried to import the generated project into IntelliJ.
In the file sparkProjectTemplate.g8/src/main/g8/build.sbt (the libraryDependencies part) there is a parenthesis in the wrong position.

Original:

libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "3.0.1" % "test",
  "org.scalacheck" %% "scalacheck" % "1.13.4" % "test",
  "com.holdenkarau" %% "spark-testing-base" ) <== here % "2.2.0_0.7.1" % "test",

Correction:

libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "3.0.1" % "test",
  "org.scalacheck" %% "scalacheck" % "1.13.4" % "test",
  "com.holdenkarau" %% "spark-testing-base" % "2.2.0_0.7.1" % "test") <== here

Let me know if my correction is right.
I tried this and it works for me when I import from IntelliJ.

Thank you very much for your Template :)

New project doesn't build.

I just created a new project using this template, ran sbt build, and got the following error:

[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn] 	::          UNRESOLVED DEPENDENCIES         ::
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn] 	:: org.spark-packages#sbt-spark-package;0.2.5: not found
[warn] 	:: com.jsuereth#sbt-pgp;1.0.0: not found
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] 	Note: Some unresolved dependencies have extra attributes.  Check that these dependencies exist with the requested attributes.
[warn] 		org.spark-packages:sbt-spark-package:0.2.5 (scalaVersion=2.10, sbtVersion=0.13)
[warn] 		com.jsuereth:sbt-pgp:1.0.0 (scalaVersion=2.10, sbtVersion=0.13)
[warn]
[warn] 	Note: Unresolved dependencies path:
[warn] 		org.spark-packages:sbt-spark-package:0.2.5 (scalaVersion=2.10, sbtVersion=0.13) (/Users/thomas/work/health-gen/project/plugins.sbt#L7-8)
[warn] 		  +- default:health-gen-build:0.1-SNAPSHOT (scalaVersion=2.10, sbtVersion=0.13)
[warn] 		com.jsuereth:sbt-pgp:1.0.0 (scalaVersion=2.10, sbtVersion=0.13) (/Users/thomas/work/health-gen/project/plugins.sbt#L9-10)
[warn] 		  +- default:health-gen-build:0.1-SNAPSHOT (scalaVersion=2.10, sbtVersion=0.13)
sbt.ResolveException: unresolved dependency: org.spark-packages#sbt-spark-package;0.2.5: not found
unresolved dependency: com.jsuereth#sbt-pgp;1.0.0: not found
	at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:313)
	at sbt.IvyActions$$anonfun$updateEither$1.apply(IvyActions.scala:191)
	at sbt.IvyActions$$anonfun$updateEither$1.apply(IvyActions.scala:168)
	at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:156)
	at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:156)
	at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:133)
	at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:57)
	at sbt.IvySbt$$anon$4.call(Ivy.scala:65)
	at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93)
	at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78)
	at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97)
	at xsbt.boot.Using$.withResource(Using.scala:10)
	at xsbt.boot.Using$.apply(Using.scala:9)
	at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
	at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
	at xsbt.boot.Locks$.apply0(Locks.scala:31)
	at xsbt.boot.Locks$.apply(Locks.scala:28)
	at sbt.IvySbt.withDefaultLogger(Ivy.scala:65)
	at sbt.IvySbt.withIvy(Ivy.scala:128)
	at sbt.IvySbt.withIvy(Ivy.scala:125)
	at sbt.IvySbt$Module.withModule(Ivy.scala:156)
	at sbt.IvyActions$.updateEither(IvyActions.scala:168)
	at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1488)
	at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1484)
	at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$121.apply(Defaults.scala:1519)
	at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$121.apply(Defaults.scala:1517)
	at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:37)
	at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1522)
	at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1516)
	at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:60)
	at sbt.Classpaths$.cachedUpdate(Defaults.scala:1539)
	at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1466)
	at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1418)
	at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
	at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
	at sbt.std.Transform$$anon$4.work(System.scala:63)
	at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
	at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
	at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
	at sbt.Execute.work(Execute.scala:237)
	at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
	at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
	at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
	at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[error] (*:update) sbt.ResolveException: unresolved dependency: org.spark-packages#sbt-spark-package;0.2.5: not found
[error] unresolved dependency: com.jsuereth#sbt-pgp;1.0.0: not found

Looks like it's using Scala 2.10 to resolve dependencies, even though the Scala version is set to 2.11. (Note: sbt 0.13 resolves plugins against Scala 2.10 by design, so the version itself is expected; the unresolved plugins point at a repository problem instead, see the bintray resolver issue below.)

-XX:MaxPermSize=size produces an error in JDK 17

From https://docs.oracle.com/en/java/javase/17/docs/specs/man/java.html#removed-java-options:

"-XX:MaxPermSize=size
Sets the maximum permanent generation space size (in bytes). This option was deprecated in JDK 8 and superseded by the -XX:MaxMetaspaceSize option."

The javaOptions in the build.sbt generated by this template set -XX:MaxPermSize to 2048M, which causes an "[error] Unrecognized VM option 'MaxPermSize=256m'" failure when running the test project on JDK 17.

Since I'm relatively new to Scala and Spark, I'm unsure whether the option should simply be replaced by -XX:MaxMetaspaceSize or whether it's better to remove it altogether.
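
For reference, a minimal sketch of the kind of change being discussed, assuming the generated build.sbt forks and passes javaOptions roughly like this (the exact flags and values in the template may differ):

// build.sbt sketch; the flags shown are illustrative, not the template's exact values
fork := true
javaOptions ++= Seq(
  "-Xms512M",
  "-Xmx2048M",
  // "-XX:MaxPermSize=2048M",   // deprecated in JDK 8, rejected outright by JDK 17
  "-XX:MaxMetaspaceSize=2048M"  // the JDK 8+ replacement, if a cap is wanted at all
)

Since the permanent generation is gone as of JDK 8 (the JVM there merely warns and ignores the flag), simply deleting the option is also safe.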

Update the testing suite from FunSuite to AnyFunSuite

Hello,

The current version of the template's test class extends a test suite class that is no longer available, causing the project generated from the template to fail during compilation.

In the current version of the template, this is the code for the test:

package $organization$.$name$

/**
 * A simple test for everyone's favourite wordcount example.
 */

import com.holdenkarau.spark.testing.SharedSparkContext
import org.scalatest.FunSuite

class WordCountTest extends FunSuite with SharedSparkContext {

Could it be updated to this?

package $organization$.$name$

/**
 * A simple test for everyone's favourite wordcount example.
 */

import com.holdenkarau.spark.testing.SharedSparkContext
import org.scalatest.funsuite.AnyFunSuite

class WordCountTest extends AnyFunSuite with SharedSparkContext {

Thank you.

problem with bintray resolvers

I created a project with the template, but there were problems resolving the dependencies.

It was necessary to modify project/plugins.sbt, replacing

resolvers += "Spark Package Main Repo" at "https://dl.bintray.com/spark-packages/maven"

with this

resolvers += "Spark Package Main Repo" at "https://repos.spark-packages.org/"

Possibly related to https://spark.apache.org/news/new-repository-service.html

Can't run the template project

I followed the instructions at https://github.com/holdenk/sparkProjectTemplate.g8, but got the following errors when running it.

Do I need to prepare inputFile.txt? What should I write in it?

Thanks.

$ sbt "run inputFile.txt outputFile.txt"
[info] Loading project definition from /home/t/Spark/example/templateproject/sparkproject/project
[info] Set current project to sparkProject (in build file:/home/t/Spark/example/templateproject/sparkproject/)
[info] Compiling 2 Scala sources to /home/t/Spark/example/templateproject/sparkproject/target/scala-2.11/classes...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list

Multiple main classes detected, select one to run:

 [1] com.example.sparkProject.CountingApp
 [2] com.example.sparkProject.CountingLocalApp

Enter number: 2

[info] Running com.example.sparkProject.CountingLocalApp inputFile.txt outputFile.txt
[error] OpenJDK 64-Bit Server VM warning: Ignoring option MaxPermSize; support was removed in 8.0
[error] Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
[error] 20/03/18 14:05:24 WARN Utils: Your hostname, ocean resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
[error] 20/03/18 14:05:24 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[error] 20/03/18 14:05:24 INFO SparkContext: Running Spark version 2.3.0
[error] WARNING: An illegal reflective access operation has occurred
[error] WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/t/.ivy2/cache/org.apache.hadoop/hadoop-auth/jars/hadoop-auth-2.6.5.jar) to method sun.security.krb5.Config.getInstance()
[error] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
[error] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[error] WARNING: All illegal access operations will be denied in a future release
[error] 20/03/18 14:05:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[error] 20/03/18 14:05:25 INFO SparkContext: Submitted application: my awesome app
[error] 20/03/18 14:05:26 INFO SecurityManager: Changing view acls to: t
[error] 20/03/18 14:05:26 INFO SecurityManager: Changing modify acls to: t
[error] 20/03/18 14:05:26 INFO SecurityManager: Changing view acls groups to: 
[error] 20/03/18 14:05:26 INFO SecurityManager: Changing modify acls groups to: 
[error] 20/03/18 14:05:26 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(t); groups with view permissions: Set(); users  with modify permissions: Set(t); groups with modify permissions: Set()
[error] 20/03/18 14:05:26 INFO Utils: Successfully started service 'sparkDriver' on port 44727.
[error] 20/03/18 14:05:26 INFO SparkEnv: Registering MapOutputTracker
[error] 20/03/18 14:05:26 INFO SparkEnv: Registering BlockManagerMaster
[error] 20/03/18 14:05:26 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
[error] 20/03/18 14:05:26 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
[error] 20/03/18 14:05:26 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d6445547-75ed-4dd3-a0b9-0cf99d75e01e
[error] 20/03/18 14:05:26 INFO MemoryStore: MemoryStore started with capacity 1048.8 MB
[error] 20/03/18 14:05:26 INFO SparkEnv: Registering OutputCommitCoordinator
[error] 20/03/18 14:05:27 INFO Utils: Successfully started service 'SparkUI' on port 4040.
[error] 20/03/18 14:05:27 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.122.1:4040
[error] 20/03/18 14:05:27 INFO Executor: Starting executor ID driver on host localhost
[error] 20/03/18 14:05:27 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36907.
[error] 20/03/18 14:05:27 INFO NettyBlockTransferService: Server created on 192.168.122.1:36907
[error] 20/03/18 14:05:27 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
[error] 20/03/18 14:05:27 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.122.1, 36907, None)
[error] 20/03/18 14:05:27 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.122.1:36907 with 1048.8 MB RAM, BlockManagerId(driver, 192.168.122.1, 36907, None)
[error] 20/03/18 14:05:27 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.122.1, 36907, None)
[error] 20/03/18 14:05:27 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.122.1, 36907, None)
[error] 20/03/18 14:05:29 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.1 KB, free 1048.7 MB)
[error] 20/03/18 14:05:29 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.4 KB, free 1048.7 MB)
[error] 20/03/18 14:05:29 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.122.1:36907 (size: 20.4 KB, free: 1048.8 MB)
[error] 20/03/18 14:05:29 INFO SparkContext: Created broadcast 0 from textFile at CountingApp.scala:32
[error] Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/t/Spark/example/templateproject/sparkproject/inputFile.txt
[error] 	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
[error] 	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
[error] 	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
[error] 	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
[error] 	at scala.Option.getOrElse(Option.scala:121)
[error] 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
[error] 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
[error] 	at scala.Option.getOrElse(Option.scala:121)
[error] 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
[error] 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
[error] 	at scala.Option.getOrElse(Option.scala:121)
[error] 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
[error] 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
[error] 	at scala.Option.getOrElse(Option.scala:121)
[error] 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
[error] 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
[error] 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
[error] 	at scala.Option.getOrElse(Option.scala:121)
[error] 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
[error] 	at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:75)
[error] 	at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:75)
[error] 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
[error] 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
[error] 	at scala.collection.immutable.List.foreach(List.scala:381)
[error] 	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
[error] 	at scala.collection.immutable.List.map(List.scala:285)
[error] 	at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:75)
[error] 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
[error] 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
[error] 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[error] 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
[error] 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
[error] 	at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:325)
[error] 	at com.example.sparkProject.WordCount$.withStopWordsFiltered(WordCount.scala:25)
[error] 	at com.example.sparkProject.Runner$.run(CountingApp.scala:33)
[error] 	at com.example.sparkProject.CountingLocalApp$.delayedEndpoint$com$example$sparkProject$CountingLocalApp$1(CountingApp.scala:16)
[error] 	at com.example.sparkProject.CountingLocalApp$delayedInit$body.apply(CountingApp.scala:10)
[error] 	at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
[error] 	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
[error] 	at scala.App$$anonfun$main$1.apply(App.scala:76)
[error] 	at scala.App$$anonfun$main$1.apply(App.scala:76)
[error] 	at scala.collection.immutable.List.foreach(List.scala:381)
[error] 	at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
[error] 	at scala.App$class.main(App.scala:76)
[error] 	at com.example.sparkProject.CountingLocalApp$.main(CountingApp.scala:10)
[error] 	at com.example.sparkProject.CountingLocalApp.main(CountingApp.scala)
[error] 20/03/18 14:05:30 INFO SparkContext: Invoking stop() from shutdown hook
[error] 20/03/18 14:05:30 INFO SparkUI: Stopped Spark web UI at http://192.168.122.1:4040
[error] 20/03/18 14:05:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
[error] 20/03/18 14:05:30 INFO MemoryStore: MemoryStore cleared
[error] 20/03/18 14:05:30 INFO BlockManager: BlockManager stopped
[error] 20/03/18 14:05:30 INFO BlockManagerMaster: BlockManagerMaster stopped
[error] 20/03/18 14:05:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
[error] 20/03/18 14:05:30 INFO SparkContext: Successfully stopped SparkContext
[error] 20/03/18 14:05:30 INFO ShutdownHookManager: Shutdown hook called
[error] 20/03/18 14:05:30 INFO ShutdownHookManager: Deleting directory /tmp/spark-edcae93e-feba-4204-91ca-8aaa4519e06a
java.lang.RuntimeException: Nonzero exit code returned from runner: 1
	at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code returned from runner: 1
[error] Total time: 72 s, completed Mar 18, 2020, 2:05:30 PM

Error on readme

sbt new holdenk/sparkProjectTemplate

should be

sbt new holdenk/sparkProjectTemplate.g8

Otherwise you get:
Template not found for: holdenk/sparkProjectTemplate
