spark-sql-on-hbase's People

Contributors

bomeng, jackylk, scwf, xinyunh

spark-sql-on-hbase's Issues

toDouble and toLong errors in BytesUtils

When a column's data type is defined as double or long, running:

select * from t limit 3;

throws:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0 (TID 7, localhost): java.lang.IllegalArgumentException: offset (70) + length (8) exceed the capacity of the array: 71
        at org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:600)
        at org.apache.hadoop.hbase.util.Bytes.toLong(Bytes.java:578)
        at org.apache.spark.sql.hbase.util.BytesUtils$.toDouble(BytesUtils.scala:52)
        at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:92)
        at org.apache.spark.sql.hbase.HBaseRelation.org$apache$spark$sql$hbase$HBaseRelation$$setColumn(HBaseRelation.scala:885)
        at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:969)

or:

15/07/30 17:35:34 ERROR Executor: Exception in task 0.0 in stage 69.0 (TID 79)
java.lang.ArrayIndexOutOfBoundsException: 71
        at org.apache.spark.sql.hbase.util.BytesUtils$$anonfun$toLong$1.apply$mcVI$sp(BytesUtils.scala:85)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at org.apache.spark.sql.hbase.util.BytesUtils$.toLong(BytesUtils.scala:84)
        at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:95)
        at org.apache.spark.sql.hbase.HBaseRelation.org$apache$spark$sql$hbase$HBaseRelation$$setColumn(HBaseRelation.scala:885)
        at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:969)
        at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:965)

Is this a bug?
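
For context, here is a minimal sketch (not from the project) of why HBase's Bytes.toLong fails with this message; it assumes the cell value was written in a non-binary encoding, for example as a plain string:

    import org.apache.hadoop.hbase.util.Bytes

    // Bytes.toLong(bytes, offset) always reads Bytes.SIZEOF_LONG (8) bytes from
    // `offset`, so a shorter stored value makes offset + 8 run past the end of
    // the backing array, which is exactly the error reported above.
    val stringEncoded = Bytes.toBytes("3.14")   // 4 bytes: not a binary double/long
    val binaryEncoded = Bytes.toBytes(3.14d)    // 8 bytes: what toDouble/toLong expect

    // A defensive wrapper (hypothetical, not part of BytesUtils) would check the
    // remaining length before decoding:
    def safeToLong(bytes: Array[Byte], offset: Int): Option[Long] =
      if (bytes.length - offset >= Bytes.SIZEOF_LONG) Some(Bytes.toLong(bytes, offset))
      else None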

MR job fails caused by exported SPARK_CLASSPATH

Hi, this is a cool project for Spark SQL with HBase; however, I have run into the following problem:

I installed the project following the documentation and everything works well so far. But when I wrote a very simple Spark application that queries an HBase table using newAPIHadoopRDD, I got these errors:

Application application_1439169262151_0037 failed 2 times due to AM Container for appattempt_1439169262151_0037_000002 exited with  exitCode: 127 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:114)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
    at org.apache.spark.SparkContext.(SparkContext.scala:497)
    at com.hbase.HBaseQueryWithRDD$.main(HBaseQueryWithRDD.scala:18)
    at com.hbase.HBaseQueryWithRDD.main(HBaseQueryWithRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2015-08-12 10:15:53,360 INFO  [main] scheduler.DAGScheduler (Logging.scala:logInfo(59)) - Stopping DAGScheduler
2015-08-12 10:15:53,362 ERROR [main] spark.SparkContext (Logging.scala:logError(96)) - Error stopping SparkContext after init error.
java.lang.NullPointerException
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:150)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1404)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1642)
    at org.apache.spark.SparkContext.(SparkContext.scala:565)
    at com.hbase.HBaseQueryWithRDD$.main(HBaseQueryWithRDD.scala:18)
    at com.hbase.HBaseQueryWithRDD.main(HBaseQueryWithRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:114)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
    at org.apache.spark.SparkContext.(SparkContext.scala:497)
    at com.hbase.HBaseQueryWithRDD$.main(HBaseQueryWithRDD.scala:18)
    at com.hbase.HBaseQueryWithRDD.main(HBaseQueryWithRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

But if I remove spark-sql-on-hbase-1.0.0.jar from SPARK_CLASSPATH, the job passes.

My Spark version is 1.4.0 and Hadoop is 2.3.
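
One possible workaround, sketched below under the assumption that the conflict comes from the globally exported SPARK_CLASSPATH (the jar path is hypothetical): ship the jar per application instead, which is equivalent to passing --jars to spark-submit and leaves plain newAPIHadoopRDD jobs untouched.

    import org.apache.spark.{SparkConf, SparkContext}

    // Instead of exporting spark-sql-on-hbase-1.0.0.jar through the (deprecated)
    // SPARK_CLASSPATH, which affects every Spark job on the machine, attach it
    // only to the application that needs it.
    val conf = new SparkConf()
      .setAppName("HBaseQueryWithRDD")
      .setJars(Seq("/path/to/spark-sql-on-hbase-1.0.0.jar"))  // hypothetical path
    val sc = new SparkContext(conf)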

Some code that I can't understand

While reading the HBaseSQLParser code, I came across the following and can't work out what it means. What do these operators do, and what are they used for?

protected lazy val insertValues: Parser[LogicalPlan] =
  INSERT ~> INTO ~> TABLE ~> ident ~ (VALUES ~> "(" ~> values <~ ")") ^^ {
    case tableName ~ valueSeq =>
      val valueStringSeq = valueSeq.map { case v =>
        if (v.value == null) null
        else v.value.toString
      }
      InsertValueIntoTableCommand(tableName, valueStringSeq)
  }

protected lazy val create: Parser[LogicalPlan] =
  CREATE ~> TABLE ~> ident ~
    ("(" ~> tableCols <~ ",") ~
    (PRIMARY ~> KEY ~> "(" ~> keys <~ ")" <~ ")") ~
    (MAPPED ~> BY ~> "(" ~> opt(nameSpace)) ~
    (ident <~ ",") ~
    (COLS ~> "=" ~> "[" ~> expressions <~ "]" <~ ")") ~
    (IN ~> ident).? <~ opt(";") ^^ {
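
For readers unfamiliar with Scala's parser combinators, here is a minimal, self-contained sketch (not the project's grammar) of what those operators do:

    import scala.util.parsing.combinator.JavaTokenParsers

    // p ~ q   run p then q, keep both results as a pair
    // p ~> q  run p then q, keep only q's result (drops keywords and punctuation)
    // p <~ q  run p then q, keep only p's result
    // p ^^ f  transform the parse result with f, e.g. into a command or plan node
    object TinyParser extends JavaTokenParsers {
      case class Insert(table: String, values: Seq[String])

      lazy val insertValues: Parser[Insert] =
        "INSERT" ~> "INTO" ~> ident ~ ("(" ~> repsep(wholeNumber, ",") <~ ")") ^^ {
          case tableName ~ vals => Insert(tableName, vals)
        }
    }

    // TinyParser.parseAll(TinyParser.insertValues, "INSERT INTO t (1, 2, 3)")
    // succeeds with Insert("t", List("1", "2", "3"))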

"SparkSQLOnHBase" support for secure HBase

In the documentation "SparkSQLOnHBase_v2.2.docx", under the Limitations section, it is mentioned that
"No secure HBase support is in schedule either."

So does this mean that Spark-SQL-on-HBase will not work on an HBase cluster that is Kerberized (secured)? Please confirm.
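
For reference, the login a client normally has to perform against a Kerberized HBase looks roughly like the sketch below (principal and keytab paths are hypothetical); the quoted limitation suggests Astro does not do this authentication or token handling for you.

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.security.UserGroupInformation

    // Generic client-side Kerberos login, independent of this project.
    val conf = HBaseConfiguration.create()
    conf.set("hadoop.security.authentication", "kerberos")
    conf.set("hbase.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(conf)
    UserGroupInformation.loginUserFromKeytab(
      "hbaseuser@EXAMPLE.COM",                    // hypothetical principal
      "/etc/security/keytabs/hbaseuser.keytab")   // hypothetical keytab path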

coprocessor CheckDirService not found

Every time I start hbase-sql, the first SQL query hangs for about 10 minutes and then prints:

15/07/30 17:23:20 WARN CoprocessorRpcChannel: Call failed on IOException
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Thu Jul 30 17:14:07 CST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@6d9c7e9b, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.UnknownProtocolException): org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name CheckDirService in region metadata,,1438221371472.33a2b7cbaab1f126dbff444b6c11e4da.
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5884)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3464)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3446)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30950)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

CheckDirService has a proto definition in the source code, but no separate jar is generated for it.

Do I really need to upload spark-sql-on-hbase-1.0.0.jar to the HBase master and every region server? The jar is quite large, so if that is the case it is rather inconvenient.

Thanks.
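
If the root cause is simply that the endpoint is not deployed, one way to make it visible to the region servers without restarting HBase is to register it as a table-level coprocessor loaded from a jar in HDFS, sketched below against the 0.98-era API. The coprocessor class name and the paths are assumptions, not taken from the project; the heavier alternative is indeed to drop the jar into $HBASE_HOME/lib on the master and every region server and restart.

    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.{Coprocessor, HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.HBaseAdmin

    // Register an endpoint coprocessor on the 'metadata' table from a jar in HDFS,
    // so region servers load it dynamically instead of from their local classpath.
    val conf = HBaseConfiguration.create()
    val admin = new HBaseAdmin(conf)
    val table = TableName.valueOf("metadata")
    val desc = admin.getTableDescriptor(table)
    desc.addCoprocessor(
      "org.apache.spark.sql.hbase.CheckDirEndPointImpl",          // assumed class name
      new Path("hdfs:///hbase/lib/spark-sql-on-hbase-1.0.0.jar"),  // assumed jar location
      Coprocessor.PRIORITY_USER,
      null)
    admin.disableTable(table)
    admin.modifyTable(table, desc)
    admin.enableTable(table)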

toInt Error

When I create a table and define one column as Int, executing SQL against it fails.
The error information:
astro> select * from test3;
15/09/29 01:09:50 INFO HBaseSQLCliDriver: Processing select * from test3
15/09/29 01:09:50 INFO ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
15/09/29 01:09:50 INFO ZooKeeper: Client environment:host.name=localhost
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.version=1.7.0_25
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.home=/cloudera/jdk1.7/jre
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.class.path=/cloudera/spark/lib/datanucleus-api-jdo-3.2.6.jar:/cloudera/spark/lib/hbase-it-0.98.6-cdh5.3.1-tests.jar:/cloudera/spark/lib/datanucleus-core-3.2.10.jar:/cloudera/spark/lib/spark-sql-on-hbase-1.0.0.jar:/cloudera/spark/lib/original-spark-sql-on-hbase-1.0.0.jar:/cloudera/spark/lib/spark-examples-1.4.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/cloudera/spark/lib/hbase-prefix-tree-0.98.6-cdh5.3.1.jar:/cloudera/spark/lib/hbase-server-0.98.6-cdh5.3.1.jar:/cloudera/spark/lib/hbase-testing-util-0.98.6-cdh5.3.1.jar:/cloudera/spark/lib/hbase-shell-0.98.6-cdh5.3.1.jar:/cloudera/spark/lib/hbase-thrift-0.98.6-cdh5.3.1.jar:/cloudera/spark/lib/hbase-server-0.98.6-cdh5.3.1-tests.jar:/cloudera/spark/lib/datanucleus-rdbms-3.2.9.jar:/cloudera/spark/lib/hbase-protocol-0.98.6-cdh5.3.1.jar:/cloudera/spark/lib/protobuf-java-2.5.0.jar:/cloudera/spark/lib/spark-assembly-1.4.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/cloudera/spark/conf/:/cloudera/spark/lib/spark-assembly-1.4.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/cloudera/spark/lib/datanucleus-api-jdo-3.2.6.jar:/cloudera/spark/lib/datanucleus-core-3.2.10.jar:/cloudera/spark/lib/datanucleus-rdbms-3.2.9.jar
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/09/29 01:09:50 INFO ZooKeeper: Client environment:java.compiler=
15/09/29 01:09:50 INFO ZooKeeper: Client environment:os.name=Linux
15/09/29 01:09:50 INFO ZooKeeper: Client environment:os.arch=amd64
15/09/29 01:09:50 INFO ZooKeeper: Client environment:os.version=2.6.32-504.el6.x86_64
15/09/29 01:09:50 INFO ZooKeeper: Client environment:user.name=master
15/09/29 01:09:50 INFO ZooKeeper: Client environment:user.home=/home/master
15/09/29 01:09:50 INFO ZooKeeper: Client environment:user.dir=/cloudera/spark-hbase/bin
15/09/29 01:09:50 INFO ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x526ebdba, quorum=localhost:2181, baseZNode=/hbase
15/09/29 01:09:50 INFO RecoverableZooKeeper: Process identifier=hconnection-0x526ebdba connecting to ZooKeeper ensemble=localhost:2181
15/09/29 01:09:50 INFO ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
15/09/29 01:09:50 INFO ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
15/09/29 01:09:50 INFO ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15014b5f5040037, negotiated timeout = 90000
15/09/29 01:09:51 INFO ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x526ebdba, quorum=localhost:2181, baseZNode=/hbase
15/09/29 01:09:51 INFO RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x526ebdba connecting to ZooKeeper ensemble=localhost:2181
15/09/29 01:09:51 INFO ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
15/09/29 01:09:51 INFO ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
15/09/29 01:09:51 INFO ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15014b5f5040038, negotiated timeout = 90000
15/09/29 01:09:51 INFO deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
15/09/29 01:09:51 INFO ZooKeeper: Session: 0x15014b5f5040038 closed
15/09/29 01:09:51 INFO ClientCnxn: EventThread shut down
15/09/29 01:09:52 INFO HBaseRelation: Number of HBase regions for table test: 1
15/09/29 01:09:52 INFO ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x526ebdba, quorum=localhost:2181, baseZNode=/hbase
15/09/29 01:09:52 INFO ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/09/29 01:09:52 INFO ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/09/29 01:09:52 INFO RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x526ebdba connecting to ZooKeeper ensemble=localhost:2181
15/09/29 01:09:52 INFO ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15014b5f5040039, negotiated timeout = 90000
15/09/29 01:09:52 INFO ZooKeeper: Session: 0x15014b5f5040039 closed
15/09/29 01:09:52 INFO ClientCnxn: EventThread shut down
15/09/29 01:09:52 INFO SparkContext: Starting job: main at NativeMethodAccessorImpl.java:-2
15/09/29 01:09:52 INFO DAGScheduler: Got job 0 (main at NativeMethodAccessorImpl.java:-2) with 1 output partitions (allowLocal=false)
15/09/29 01:09:52 INFO DAGScheduler: Final stage: ResultStage 0(main at NativeMethodAccessorImpl.java:-2)
15/09/29 01:09:52 INFO DAGScheduler: Parents of final stage: List()
15/09/29 01:09:52 INFO DAGScheduler: Missing parents: List()
15/09/29 01:09:52 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at main at NativeMethodAccessorImpl.java:-2), which has no missing parents
15/09/29 01:09:52 INFO MemoryStore: ensureFreeSpace(14784) called with curMem=0, maxMem=277842493
15/09/29 01:09:52 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 14.4 KB, free 265.0 MB)
15/09/29 01:09:52 INFO MemoryStore: ensureFreeSpace(13323) called with curMem=14784, maxMem=277842493
15/09/29 01:09:52 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.0 KB, free 264.9 MB)
15/09/29 01:09:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:44603 (size: 13.0 KB, free: 265.0 MB)
15/09/29 01:09:52 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:874
15/09/29 01:09:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at main at NativeMethodAccessorImpl.java:-2)
15/09/29 01:09:52 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/09/29 01:09:52 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, NODE_LOCAL, 1688 bytes)
15/09/29 01:09:52 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/09/29 01:09:52 INFO Executor: Fetching http://127.0.0.1:41302/jars/spark-sql-on-hbase-1.0.0.jar with timestamp 1443460180930
15/09/29 01:09:52 INFO Utils: Fetching http://127.0.0.1:41302/jars/spark-sql-on-hbase-1.0.0.jar to /tmp/spark-7ce18148-9a8f-4dda-9910-4de00e33c41f/userFiles-adc950d7-749a-4fd7-b542-09871a7e0686/fetchFileTemp1293095241738742836.tmp
15/09/29 01:09:53 INFO Executor: Adding file:/tmp/spark-7ce18148-9a8f-4dda-9910-4de00e33c41f/userFiles-adc950d7-749a-4fd7-b542-09871a7e0686/spark-sql-on-hbase-1.0.0.jar to class loader
15/09/29 01:09:53 INFO HBasePartition: None
15/09/29 01:09:53 INFO deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
15/09/29 01:09:53 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ArrayIndexOutOfBoundsException: 26
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$$anonfun$toInt$1.apply$mcVI$sp(bytesUtils.scala:156)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$.toInt(bytesUtils.scala:155)
at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:97)
at org.apache.spark.sql.hbase.HBaseRelation.org$apache$spark$sql$hbase$HBaseRelation$$setColumn(HBaseRelation.scala:892)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:976)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:972)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.hbase.HBaseRelation.buildRow(HBaseRelation.scala:971)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:188)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:170)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
15/09/29 01:09:53 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ArrayIndexOutOfBoundsException: 26
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$$anonfun$toInt$1.apply$mcVI$sp(bytesUtils.scala:156)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$.toInt(bytesUtils.scala:155)
at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:97)
at org.apache.spark.sql.hbase.HBaseRelation.org$apache$spark$sql$hbase$HBaseRelation$$setColumn(HBaseRelation.scala:892)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:976)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:972)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.hbase.HBaseRelation.buildRow(HBaseRelation.scala:971)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:188)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:170)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

15/09/29 01:09:53 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
15/09/29 01:09:53 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/09/29 01:09:53 INFO TaskSchedulerImpl: Cancelling stage 0
15/09/29 01:09:53 INFO DAGScheduler: ResultStage 0 (main at NativeMethodAccessorImpl.java:-2) failed in 0.434 s
15/09/29 01:09:53 INFO DAGScheduler: Job 0 failed: main at NativeMethodAccessorImpl.java:-2, took 0.603352 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ArrayIndexOutOfBoundsException: 26
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$$anonfun$toInt$1.apply$mcVI$sp(bytesUtils.scala:156)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$.toInt(bytesUtils.scala:155)
at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:97)
at org.apache.spark.sql.hbase.HBaseRelation.org$apache$spark$sql$hbase$HBaseRelation$$setColumn(HBaseRelation.scala:892)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:976)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:972)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.hbase.HBaseRelation.buildRow(HBaseRelation.scala:971)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:188)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:170)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Something about how HBaseSQLParser is implemented

While reading HBaseSQLCliDriver, I didn't understand how HBaseSQLParser works.

// I only know that this block collects the SQL statement keywords; I don't fully understand how the parsing itself is done

def getCompletors: Seq[Completor] = {
    val sc: SimpleCompletor = new SimpleCompletor(new Array[String](0))
    // add keywords, including lower-cased versions
    HBaseSQLParser.getKeywords.foreach { kw =>  // what operation does iterating over each SQL keyword perform?
      sc.addCandidateString(kw)
      sc.addCandidateString(kw.toLowerCase)
    }

    Seq(sc)
  }
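
A brief sketch of how these completors are usually consumed, assuming the jline 0.9.x API that Spark CLIs of that era bundle: each keyword becomes a TAB-completion candidate at the prompt.

    import jline.{ConsoleReader, SimpleCompletor}

    // Wire keyword candidates into a jline console; this is illustration only,
    // not the project's HBaseSQLCliDriver code.
    val reader = new ConsoleReader()
    val keywords = Array("SELECT", "select", "CREATE", "create", "INSERT", "insert")
    reader.addCompletor(new SimpleCompletor(keywords))
    // Typing "sel" then TAB at the prompt completes to "select".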

Issue while running Spark SQL on HBase

I am working on Spark SQL on HBase.
I am using the Hortonworks 2.3 VM, which ships Spark 1.3.1, so I downloaded Spark 1.4.0 separately.

HBase version: 1.1.0.2.3.0.0-2130

I installed Spark SQL on HBase based on the instructions at:
https://github.com/Huawei-Spark/Spark-SQL-on-HBase

[root@sandbox bin]# ./hbase-sql
15/10/07 06:15:47 INFO spark.SparkContext: Running Spark version 1.4.0
15/10/07 06:15:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/10/07 06:15:48 INFO spark.SecurityManager: Changing view acls to: root
15/10/07 06:15:48 INFO spark.SecurityManager: Changing modify acls to: root
15/10/07 06:15:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/10/07 06:15:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/07 06:15:49 INFO Remoting: Starting remoting
15/10/07 06:15:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:49136]
15/10/07 06:15:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 49136.
15/10/07 06:15:50 INFO spark.SparkEnv: Registering MapOutputTracker
15/10/07 06:15:50 INFO spark.SparkEnv: Registering BlockManagerMaster
15/10/07 06:15:50 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-3a74e54e-caba-40c5-90a9-918be1e9ad99/blockmgr-2e0275db-ea0e-4c36-a586-ff866f575271
15/10/07 06:15:50 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/10/07 06:15:50 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-3a74e54e-caba-40c5-90a9-918be1e9ad99/httpd-b9ae55de-0e91-4488-9127-20ec61d563eb
15/10/07 06:15:50 INFO spark.HttpServer: Starting HTTP Server
15/10/07 06:15:50 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/10/07 06:15:50 INFO server.AbstractConnector: Started [email protected]:36460
15/10/07 06:15:50 INFO util.Utils: Successfully started service 'HTTP file server' on port 36460.
15/10/07 06:15:50 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/10/07 06:15:50 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/10/07 06:15:50 INFO server.AbstractConnector: Started [email protected]:4040
15/10/07 06:15:50 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/10/07 06:15:50 INFO ui.SparkUI: Started SparkUI at http://10.0.2.15:4040
15/10/07 06:15:50 INFO spark.SparkContext: Added JAR file:/home/rk/spark-hbase/spark-hbase/target/spark-sql-on-hbase-1.0.0.jar at http://10.0.2.15:36460/jars/spark-sql-on-hbase-1.0.0.jar with timestamp 1444198550958
15/10/07 06:15:51 INFO executor.Executor: Starting executor ID driver on host localhost
15/10/07 06:15:51 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 48672.
15/10/07 06:15:51 INFO netty.NettyBlockTransferService: Server created on 48672
15/10/07 06:15:51 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/10/07 06:15:51 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:48672 with 265.4 MB RAM, BlockManagerId(driver, localhost, 48672)
15/10/07 06:15:51 INFO storage.BlockManagerMaster: Registered BlockManager
Welcome to hbaseql CLI
astro> select * from emp;
15/10/07 06:16:12 INFO hbase.HBaseSQLCliDriver: Processing select * from emp
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:host.name=sandbox.hortonworks.com
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_79
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/rk/spark/spark-1.4.0-bin-hadoop2.4/conf/:/home/rk/spark/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar:/home/rk/spark/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/home/rk/spark/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar:/home/rk/spark/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/usr/hdp/2.3.0.0-2130/hadoop/conf/
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.16.2.el6.x86_64
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:user.name=root
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/rk/spark-hbase/spark-hbase/bin
15/10/07 06:16:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x977faf, quorum=localhost:2181, baseZNode=/hbase
15/10/07 06:16:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x977faf connecting to ZooKeeper ensemble=localhost:2181
15/10/07 06:16:13 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/10/07 06:16:13 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/10/07 06:16:13 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15040d0f571001a, negotiated timeout = 40000
15/10/07 06:16:13 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
15/10/07 06:16:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x977faf, quorum=localhost:2181, baseZNode=/hbase
15/10/07 06:16:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x977faf connecting to ZooKeeper ensemble=localhost:2181
15/10/07 06:16:14 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/10/07 06:16:14 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/10/07 06:16:14 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15040d0f571001b, negotiated timeout = 40000

My system doesn't respond

Configuration parameter

The configuration parameter "HBASE_CLSSPATH" in hbase-env.sh should be "HBASE_CLASSPATH", right?

Missing class diagram in SparkSQLOnHBase_v2.2.docx

I was reading SparkSQLOnHBase_v2.2.docx in the doc folder. I think it is well documented and has helped me a lot in understanding the implementation. However, the class diagram in chapter 17.6 is incomplete: there are only some yellow and white blocks with no class names in them (as shown below). Is there any way for me to see the full class extension relationships? I hope you can fix it. Thanks again for your great work.
(screenshot omitted)

Error on executing 'Select * from tablename'

I am getting an index-out-of-bounds error when I execute 'select * from table'.
Please find the details below:

Hbase Table:
describe 'sales'
Table sales is ENABLED
sales
COLUMN FAMILIES DESCRIPTION
{NAME => 'sales_des', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.1240 seconds

scan 'sales'
ROW COLUMN+CELL
0 column=sales_des:product, timestamp=1444305686288, value=pr0
0 column=sales_des:quantity, timestamp=1444311988162, value=0
0 column=sales_des:region, timestamp=1444305702221, value=reg0
0 column=sales_des:sales, timestamp=1444312378336, value=0
0 column=sales_des:tranid, timestamp=1444302264948, value=0
1 row(s) in 0.4380 seconds

Hbase Spark Sql :
CREATE TABLE sales(tranid INTEGER, product STRING, region STRING, sales INTEGER, quantity INTEGER, PRIMARY KEY (tranid)) MAPPED BY (sales, COLS=[product=sales_des.product, region=sales_des.region, sales=sales_des.sales, quantity=sales_des.quantity]);
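
A likely cause of the failure shown below (this is an assumption based on the mapping above, not something verified against your cluster): the rows were written through the HBase shell, so each cell, e.g. value=0, is stored as a one-byte ASCII string, while the CREATE TABLE maps those columns as INTEGER, which the connector's default binary codec decodes as a fixed 4-byte value. Reading 4 bytes from a 1-byte cell produces exactly an ArrayIndexOutOfBoundsException: 1. A minimal sketch of the size mismatch, using only the plain HBase Bytes utility:

import org.apache.hadoop.hbase.util.Bytes;

public class EncodingMismatch {
    public static void main(String[] args) {
        // What the HBase shell stores for value=0: the ASCII string "0" -> one byte
        byte[] shellBytes = Bytes.toBytes("0");
        // What a binary-encoded INTEGER column expects: a fixed-width 4-byte value
        byte[] binaryBytes = Bytes.toBytes(0);
        System.out.println(shellBytes.length + " byte(s) vs " + binaryBytes.length + " bytes");
        // Decoding the one-byte string as a 4-byte int reads past index 0,
        // which matches the ArrayIndexOutOfBoundsException: 1 in the stack trace below.
    }
}

If that is the cause, writing the data through the connector (INSERT/LOAD) so both sides use the same encoding, or, if the build supports it, mapping the columns with a string encoding, should make the scan succeed.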

Error :
select * from sales;
15/10/08 15:15:35 INFO hbase.HBaseSQLCliDriver: Processing select * from sales
15/10/08 15:15:35 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sandbox.hortonworks.com:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x5a713416, quorum=sandbox.hortonworks.com:2181, baseZNode=/hbase-unsecure
15/10/08 15:15:35 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x5a713416 connecting to ZooKeeper ensemble=sandbox.hortonworks.com:2181
15/10/08 15:15:35 INFO zookeeper.ClientCnxn: Opening socket connection to server sandbox.hortonworks.com/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)
15/10/08 15:15:35 INFO zookeeper.ClientCnxn: Socket connection established to sandbox.hortonworks.com/10.0.2.15:2181, initiating session
15/10/08 15:15:35 INFO zookeeper.ClientCnxn: Session establishment complete on server sandbox.hortonworks.com/10.0.2.15:2181, sessionid = 0x15046c4e1230031, negotiated timeout = 40000
15/10/08 15:15:35 INFO zookeeper.ZooKeeper: Session: 0x15046c4e1230031 closed
15/10/08 15:15:35 INFO zookeeper.ClientCnxn: EventThread shut down
15/10/08 15:15:35 INFO hbase.HBaseRelation: Number of HBase regions for table sales: 1
15/10/08 15:15:35 INFO spark.SparkContext: Starting job: main at NativeMethodAccessorImpl.java:-2
15/10/08 15:15:35 INFO scheduler.DAGScheduler: Got job 6 (main at NativeMethodAccessorImpl.java:-2) with 1 output partitions (allowLocal=false)
15/10/08 15:15:35 INFO scheduler.DAGScheduler: Final stage: ResultStage 6(main at NativeMethodAccessorImpl.java:-2)
15/10/08 15:15:35 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/10/08 15:15:35 INFO scheduler.DAGScheduler: Missing parents: List()
15/10/08 15:15:35 INFO scheduler.DAGScheduler: Submitting ResultStage 6 (MapPartitionsRDD[13] at main at NativeMethodAccessorImpl.java:-2), which has no missing parents
15/10/08 15:15:35 INFO storage.MemoryStore: ensureFreeSpace(18176) called with curMem=2931, maxMem=278302556
15/10/08 15:15:35 INFO storage.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 17.8 KB, free 265.4 MB)
15/10/08 15:15:35 INFO storage.MemoryStore: ensureFreeSpace(16520) called with curMem=21107, maxMem=278302556
15/10/08 15:15:36 INFO storage.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 16.1 KB, free 265.4 MB)
15/10/08 15:15:36 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on localhost:60580 (size: 16.1 KB, free: 265.4 MB)
15/10/08 15:15:36 INFO spark.SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:874
15/10/08 15:15:36 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 6 (MapPartitionsRDD[13] at main at NativeMethodAccessorImpl.java:-2)
15/10/08 15:15:36 INFO scheduler.TaskSchedulerImpl: Adding task set 6.0 with 1 tasks
15/10/08 15:15:36 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 6.0 (TID 6, localhost, ANY, 1702 bytes)
15/10/08 15:15:36 INFO executor.Executor: Running task 0.0 in stage 6.0 (TID 6)
15/10/08 15:15:36 INFO hbase.HBasePartition: None
15/10/08 15:15:36 ERROR executor.Executor: Exception in task 0.0 in stage 6.0 (TID 6)
java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$$anonfun$toInt$1.apply$mcVI$sp(bytesUtils.scala:156)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$.toInt(bytesUtils.scala:155)
at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:97)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:979)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:972)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.hbase.HBaseRelation.buildRow(HBaseRelation.scala:971)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:188)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:170)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/10/08 15:15:36 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 6.0 (TID 6, localhost): java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$$anonfun$toInt$1.apply$mcVI$sp(bytesUtils.scala:156)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$.toInt(bytesUtils.scala:155)
at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:97)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:979)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:972)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.hbase.HBaseRelation.buildRow(HBaseRelation.scala:971)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:188)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:170)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

15/10/08 15:15:36 ERROR scheduler.TaskSetManager: Task 0 in stage 6.0 failed 1 times; aborting job
15/10/08 15:15:36 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool
15/10/08 15:15:36 INFO scheduler.TaskSchedulerImpl: Cancelling stage 6
15/10/08 15:15:36 INFO scheduler.DAGScheduler: ResultStage 6 (main at NativeMethodAccessorImpl.java:-2) failed in 0.233 s
15/10/08 15:15:36 INFO scheduler.DAGScheduler: Job 6 failed: main at NativeMethodAccessorImpl.java:-2, took 0.279367 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6.0 (TID 6, localhost): java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$$anonfun$toInt$1.apply$mcVI$sp(bytesUtils.scala:156)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.sql.hbase.util.BinaryBytesUtils$.toInt(bytesUtils.scala:155)
at org.apache.spark.sql.hbase.util.DataTypeUtils$.setRowColumnFromHBaseRawType(DataTypeUtils.scala:97)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:979)
at org.apache.spark.sql.hbase.HBaseRelation$$anonfun$buildRow$1.apply(HBaseRelation.scala:972)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.hbase.HBaseRelation.buildRow(HBaseRelation.scala:971)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anonfun$3.apply(HBaseSQLReaderRDD.scala:72)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:188)
at org.apache.spark.sql.hbase.HBaseSQLReaderRDD$$anon$1.next(HBaseSQLReaderRDD.scala:170)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
astro> exit;
15/10/08 15:39:40 INFO spark.SparkContext: Invoking stop() from shutdown hook
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
15/10/08 15:39:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
15/10/08 15:39:40 INFO ui.SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
15/10/08 15:39:40 INFO scheduler.DAGScheduler: Stopping DAGScheduler
15/10/08 15:39:40 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/10/08 15:39:40 INFO util.Utils: path = /tmp/spark-5e84c9ec-e1b7-4f12-a466-f035c0ca6e7b/blockmgr-1e69a927-6ecd-473f-8897-b5bfa0f4ffe3, already present as root for deletion.
15/10/08 15:39:40 INFO storage.MemoryStore: MemoryStore cleared
15/10/08 15:39:40 INFO storage.BlockManager: BlockManager stopped
15/10/08 15:39:40 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
15/10/08 15:39:40 INFO spark.SparkContext: Successfully stopped SparkContext
15/10/08 15:39:40 INFO util.Utils: Shutdown hook called
15/10/08 15:39:40 INFO util.Utils: Deleting directory /tmp/spark-5e84c9ec-e1b7-4f12-a466-f035c0ca6e7b
15/10/08 15:39:40 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!

Add SparkHbase as a package in SparkSql

Currently I have set up HbaseSparkSql in one folder and Spark in a separate folder. It works fine if I go into HbaseSparkSql/bin and execute commands there.
How can I add the SparkHbase package to Spark SQL so that I can use it from Spark's own bin directory?
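
One approach that may work here (a sketch, not something documented by this project; the path below is a placeholder) is to keep the stock launchers under Spark/bin and pass the Astro assembly jar on the command line, so its classes are visible to both the driver and the executors:

./bin/spark-shell --jars /path/to/spark-sql-on-hbase-1.0.0.jar

The same --jars flag works with spark-submit and spark-sql; the jar is then shipped with the application instead of having to live inside the Spark installation itself.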

The LOAD function cannot load data from file

Hello, I have a question about the LOAD function. I want to load data that is stored as a TEXT file under an HDFS path, so I did the following:
astro> select * from sql_zsy05; //the table before my data is loaded
OK
+----+----+----+------+
|key1|key2|key3| value|
+----+----+----+------+
| 9| 9| 7|hidfla|
| 9| 9| 8|hidfla|
| 9| 9| 9|hidfla|
+----+----+----+------+

Time taken: 6.559 seconds
astro> LOAD DATA INPATH "hdfs://10.67.180.114:9000/TEST" OVERWRITE INTO TABLE sql_zsy05 FIELDS TERMINATED BY '\t';
OK
Time taken: 7.796 seconds
astro> select * from sql_zsy05; // after the load reports success, the table is unchanged
OK
+----+----+----+------+
|key1|key2|key3| value|
+----+----+----+------+
| 9| 9| 7|hidfla|
| 9| 9| 8|hidfla|
| 9| 9| 9|hidfla|
+----+----+----+------+

So, what should I do? Did I use the method incorrectly? Thank you!

Detailed Documentation

Can I get detailed documentation of the different SQL statements and their usage?
For example:
Does it support views?
Does it support indexes?
Does it support stored procedures?
If yes, can I get the syntax? I am looking for a usage document, if any, for testing/evaluating sparksqlonhbase.

HBasePartition not found in yarn cluster

When submitting to the YARN cluster from Spark in yarn-client mode, I have tried various ways to ship spark-sql-on-hbase-1.0.0.jar to the executors. I have confirmed that the jar file is already present on the NodeManager machines, but I still get the following error:

15/07/30 19:05:05 INFO DAGScheduler: Job 3 failed: main at NativeMethodAccessorImpl.java:-2, took 0.088777 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 12, szwg-scloud-2015q2-c32-su04.szwg01.baidu.com): java.lang.ClassNotFoundException: org.apache.spark.sql.hbase.HBasePartition
        at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:66)
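
In yarn-client mode it is not enough for the jar to exist on the NodeManager's local disk; it has to be on the executors' classpath. Two common ways to do that (a hedged sketch; paths and class names are placeholders): let Spark ship and register the jar with --jars, or point spark.executor.extraClassPath at the copy that already exists on every node:

./bin/spark-submit --master yarn-client \
  --jars /path/to/spark-sql-on-hbase-1.0.0.jar \
  --conf spark.executor.extraClassPath=/path/to/spark-sql-on-hbase-1.0.0.jar \
  --class your.main.Class /path/to/your-app.jar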

Exception in thread "main" java.lang.Exception: The logical table: <name> already exists

Hi

We are using the spark-1.5-alpha branch to test spark-sql-on-hbase with HBase.
Every time we run the program after the first successful execution, we get the error below.

Exception in thread "main" java.lang.Exception: The logical table: ips12 already exists
at org.apache.spark.sql.hbase.HBaseCatalog.createTable(HBaseCatalog.scala:178)
at org.apache.spark.sql.hbase.HBaseSource.createRelation(HBaseRelation.scala:79)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
at Main.main(Main.java:47)

What does this mean? Do we have to provide a new tableName alias every time we execute our program?

The program is given below; any advice is much appreciated.

public static void main(String[] args) throws JobExecutionException {
JavaSparkContext jsc = new JavaSparkContext("local[2]", "HbaseTest");
HBaseSQLContext hbaseCtx = new HBaseSQLContext(jsc);
Map<String, String> options = new HashMap<>();
options.put("namespace", "");
options.put("tableName", "ips12");
options.put("hbaseTableName", "hbase_table");
options.put("colsSeq", "id,Name,port,longitude,latitude,devicetype,community,performance,fault,status,ip,location,Sample");
options.put("keyCols", "id,string");
options.put("encodingFormat", "utf-8");
options.put("nonKeyCols", "Name,string,dcf,Name;port,string,dcf,port;longitude,string,dcf,longitude;latitude,string,dcf,latitude;devicetype,string,dcf,deviceType;community,string,dcf,community;performance,string,dcf,performance;fault,string,dcf,fault;status,string,dcf,status;ip,string,dcf,ip;location,string,dcf,location;Sample,string,dcf,Sample");
hbaseCtx.read().format("org.apache.spark.sql.hbase.HBaseSource").options(options).load();
hbaseCtx.sql("select * from ips12").orderBy(new Column("Name").desc()).show();
}
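
The logical table definition is persisted in Astro's metadata catalog, so the second run fails when load() tries to register ips12 again; you should not need a new alias each time. A minimal workaround sketch (an assumption on my part, not the project's documented pattern): register the mapping only once and, on later runs, catch the failure and query the already-registered table.

import java.util.Map;
import org.apache.spark.sql.hbase.HBaseSQLContext;

public class LoadOrReuse {
    // Reuses the hbaseCtx and options built in the program above.
    static void loadOrReuse(HBaseSQLContext hbaseCtx, Map<String, String> options) {
        try {
            // First run: registers the logical table ips12 in the catalog.
            hbaseCtx.read().format("org.apache.spark.sql.hbase.HBaseSource")
                    .options(options)
                    .load();
        } catch (Exception e) {
            // Later runs: the mapping is already persisted, so simply reuse it.
            System.out.println("Logical table already registered: " + e.getMessage());
        }
        hbaseCtx.sql("select * from ips12").show();
    }
}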

Test Fails

HBaseTpcStringFormatMiniTestSuite fails 4 test cases (#7, #8, #21, #22) on the master branch.

Do you guys have any plans to move this to HBase API 1.0.x?

Hi Folks,

I am curious whether you have any plans to migrate from HBase 0.98.x to HBase 1.0.x. Since that upgrade brings breaking API changes, I would like to know if any effort has already been made in this direction.

Regards,
Atul.

Problem in reading integer value

I write an int value (21 in this case) into HBase through the HBase Java API as follows:

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "table_name");
Put put = new Put(Bytes.toBytes("1"));
put.addImmutable(Bytes.toBytes("columnFamily"), Bytes.toBytes("int_column"), Bytes.toBytes(21));
table.put(put);
table.flushCommits();
table.close();

Then I read it through the connector as follows:

hbaseCtx.read().format("org.apache.spark.sql.hbase.HBaseSource")
                .option("namespace", "")
                .option("tableName", "table_name")
                .option("hbaseTableName", "table_name")
                .option("encodingFormat", "")
                .option("colsSeq", "row_key,int_column")
                .option("keyCols", "row_key,string")
                .option("nonKeyCols", "int_column,int,columnFamily,int_column")
                .load();

DataFrame df_table = hbaseCtx.table("table_name");

But I can't figure out why, when I print it with df_table.show(), I don't get 21 but instead get the value -2147483627.
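
The value is not random: -2147483627 is Integer.MIN_VALUE + 21. With the connector's default binary encoding, integers appear to be stored with the most significant (sign) bit flipped so that raw byte comparison matches numeric order; decoding the plain big-endian bytes produced by Bytes.toBytes(21) therefore flips that bit and yields 0x80000015. A minimal sketch of the arithmetic (it only demonstrates the bit flip, it is not the connector's actual codec class):

import org.apache.hadoop.hbase.util.Bytes;

public class SignBitDemo {
    public static void main(String[] args) {
        byte[] raw = Bytes.toBytes(21);        // plain big-endian int: 00 00 00 15
        raw[0] ^= (byte) 0x80;                 // what a sort-order-preserving int codec effectively does
        System.out.println(Bytes.toInt(raw));  // prints -2147483627 == Integer.MIN_VALUE + 21
    }
}

If that is what is happening, writing the value through the connector (so both sides use the same codec), flipping the sign bit yourself when writing through the raw HBase API, or mapping the column with a string encoding should give back 21.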

1

// Reason for the changes: interface differences between HBase 0.98 and 1.0.0, as follows:
//===========================
// First: community 0.98 code:
// https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java
// In 0.98, InternalScanner declares: boolean next(List<Cell> result, int limit) throws IOException;
// In 0.98, RegionScanner has no int getBatch(), and declares: boolean nextRaw(List<Cell> result, int limit) throws IOException;
//===========================

//===========================
// Third: community branch-1 code (master is the same):
// https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java
// In branch-1, InternalScanner declares: boolean next(List<Cell> result, ScannerContext scannerContext) throws IOException;
// In branch-1, RegionScanner adds int getBatch() and declares: boolean nextRaw(List<Cell> result, ScannerContext scannerContext) throws IOException;
//=============================

package org.apache.spark.sql.hbase

import org.apache.hadoop.hbase._
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.coprocessor._
import org.apache.hadoop.hbase.regionserver._
import org.apache.hadoop.hbase.util.Bytes
import org.apache.log4j.Logger
import org.apache.spark._
import org.apache.spark.executor.TaskMetrics
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate
import org.apache.spark.sql.hbase.util.DataTypeUtils
import org.apache.spark.sql.types._
import org.apache.spark.sql.{Row, SQLContext}

/**
 * HBaseCoprocessorSQLReaderRDD:
 */
class HBaseCoprocessorSQLReaderRDD(var relation: HBaseRelation,
                                   val codegenEnabled: Boolean,
                                   var finalOutput: Seq[Attribute],
                                   var otherFilters: Option[Expression],
                                   @transient sqlContext: SQLContext)
  extends RDD[Row](sqlContext.sparkContext, Nil) with Logging {

  @transient var scanner: RegionScanner = _

  private def createIterator(context: TaskContext): Iterator[Row] = {
    val otherFilter: (Row) => Boolean = {
      if (otherFilters.isDefined) {
        if (codegenEnabled) {
          GeneratePredicate.generate(otherFilters.get, finalOutput)
        } else {
          InterpretedPredicate.create(otherFilters.get, finalOutput)
        }
      } else null
    }

    val projections = finalOutput.zipWithIndex
    var finished: Boolean = false
    var gotNext: Boolean = false
    val results: java.util.ArrayList[Cell] = new java.util.ArrayList[Cell]()
    val row = new GenericMutableRow(finalOutput.size)

    val iterator = new Iterator[Row] {
      override def hasNext: Boolean = {
        if (!finished) {
          if (!gotNext) {
            results.clear()
            scanner.nextRaw(results)
            finished = results.isEmpty
            gotNext = true
          }
        }
        if (finished) {
          close()
        }
        !finished
      }

      override def next(): Row = {
        if (hasNext) {
          gotNext = false
          relation.buildRowInCoprocessor(projections, results, row)
        } else {
          null
        }
      }

      def close() = {
        try {
          scanner.close()
          relation.closeHTable()
        } catch {
          case e: Exception => logWarning("Exception in scanner.close", e)
        }
      }
    }

    if (otherFilter == null) {
      new InterruptibleIterator(context, iterator)
    } else {
      new InterruptibleIterator(context, iterator.filter(otherFilter))
    }
  }

  override def getPartitions: Array[Partition] = {
    Array()
  }

  override def compute(split: Partition, context: TaskContext): Iterator[Row] = {
    scanner = split.asInstanceOf[HBasePartition].newScanner
    createIterator(context)
  }
}

abstract class BaseRegionScanner extends RegionScanner {
  // HBase 1.x added int getBatch() to RegionScanner. Astro does not use it,
  // but it must be implemented here so that the concrete scanners below compile.
  override def getBatch = 0

  override def isFilterDone = false

  // The second parameter changed from limit: Int to scannerContext: ScannerContext.
  // next is inherited from HBase's RegionScanner, which extends InternalScanner:
  // 0.98:     boolean next(List<Cell> result, int limit) throws IOException;
  // branch-1: boolean next(List<Cell> result, ScannerContext scannerContext) throws IOException;
  override def next(result: java.util.List[Cell], scannerContext: ScannerContext) = next(result)

  override def reseek(row: Array[Byte]) = throw new DoNotRetryIOException("Unsupported")

  override def getMvccReadPoint = Long.MaxValue

  override def nextRaw(result: java.util.List[Cell]) = next(result)

  // Same change in RegionScanner:
  // 0.98:     boolean nextRaw(List<Cell> result, int limit) throws IOException;
  // branch-1: boolean nextRaw(List<Cell> result, ScannerContext scannerContext) throws IOException;
  override def nextRaw(result: java.util.List[Cell], scannerContext: ScannerContext) = next(result, scannerContext)
}

class SparkSqlRegionObserver extends BaseRegionObserver {
  lazy val logger = Logger.getLogger(getClass.getName)
  lazy val EmptyArray = Array[Byte]()

  override def postScannerOpen(e: ObserverContext[RegionCoprocessorEnvironment],
                               scan: Scan,
                               s: RegionScanner) = {
    val serializedPartitionIndex = scan.getAttribute(CoprocessorConstants.COINDEX)
    if (serializedPartitionIndex == null) {
      logger.debug("Work without coprocessor")
      super.postScannerOpen(e, scan, s)
    } else {
      logger.debug("Work with coprocessor")
      val partitionIndex: Int = Bytes.toInt(serializedPartitionIndex)
      val serializedOutputDataType = scan.getAttribute(CoprocessorConstants.COTYPE)
      val outputDataType: Seq[DataType] =
        HBaseSerializer.deserialize(serializedOutputDataType).asInstanceOf[Seq[DataType]]

      val serializedRDD = scan.getAttribute(CoprocessorConstants.COKEY)
      val subPlanRDD: RDD[Row] = HBaseSerializer.deserialize(serializedRDD).asInstanceOf[RDD[Row]]

      val taskParaInfo = scan.getAttribute(CoprocessorConstants.COTASK)
      val (stageId, partitionId, taskAttemptId, attemptNumber) =
        HBaseSerializer.deserialize(taskParaInfo).asInstanceOf[(Int, Int, Long, Int)]
      val taskContext = new TaskContextImpl(
        stageId, partitionId, taskAttemptId, attemptNumber, null, false, new TaskMetrics)

      val regionInfo = s.getRegionInfo
      val startKey = if (regionInfo.getStartKey.isEmpty) None else Some(regionInfo.getStartKey)
      val endKey = if (regionInfo.getEndKey.isEmpty) None else Some(regionInfo.getEndKey)

      val result = subPlanRDD.compute(
        new HBasePartition(partitionIndex, partitionIndex, startKey, endKey, newScanner = s),
        taskContext)

      new BaseRegionScanner() {
        override def getRegionInfo: HRegionInfo = regionInfo

        override def getMaxResultSize: Long = s.getMaxResultSize

        override def close(): Unit = s.close()

        override def next(results: java.util.List[Cell]): Boolean = {
          val hasMore: Boolean = result.hasNext
          if (hasMore) {
            val nextRow = result.next()
            val numOfCells = outputDataType.length
            for (i <- 0 until numOfCells) {
              val data = nextRow(i)
              val dataType = outputDataType(i)
              val dataOfBytes: HBaseRawType = {
                if (data == null) null else DataTypeUtils.dataToBytes(data, dataType)
              }
              results.add(new KeyValue(EmptyArray, EmptyArray, EmptyArray, dataOfBytes))
            }
          }
          hasMore
        }
      }
    }
  }
}

Spark-SQL-on-HBase-hbase_branch_1.1BUILD ERROR

➜ Spark-SQL-on-HBase-hbase_branch_1.1 mvn -DskipTests clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Spark Project HBase 1.0.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ spark-sql-on-hbase ---
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ spark-sql-on-hbase ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ spark-sql-on-hbase ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/liuluheng/Downloads/Spark-SQL-on-HBase-hbase_branch_1.1/conf
[INFO] Copying 3 resources
[INFO]
[INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ spark-sql-on-hbase ---
[WARNING] Expected all dependencies to require Scala version: 2.10.4
[WARNING] com.twitter:chill_2.10:0.5.0 requires scala version: 2.10.4
[WARNING] org.spark-project.akka:akka-remote_2.10:2.3.4-spark requires scala version: 2.10.4
[WARNING] org.spark-project.akka:akka-actor_2.10:2.3.4-spark requires scala version: 2.10.4
[WARNING] org.spark-project.akka:akka-slf4j_2.10:2.3.4-spark requires scala version: 2.10.4
[WARNING] org.apache.spark:spark-core_2.10:1.4.0 requires scala version: 2.10.4
[WARNING] org.json4s:json4s-jackson_2.10:3.2.10 requires scala version: 2.10.0
[WARNING] Multiple versions of scala libraries detected!
[INFO] Using incremental compilation
[INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
[INFO] Compiling 34 Scala sources and 2 Java sources to /home/liuluheng/Downloads/Spark-SQL-on-HBase-hbase_branch_1.1/target/scala-2.10/classes...
[WARNING] /home/liuluheng/Downloads/Spark-SQL-on-HBase-hbase_branch_1.1/src/main/scala/org/apache/spark/sql/hbase/CheckDirEndPointImpl.scala:49: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[WARNING] case e: RegionCoprocessorEnvironment => e
[WARNING] ^
[ERROR] /home/liuluheng/Downloads/Spark-SQL-on-HBase-hbase_branch_1.1/src/main/scala/org/apache/spark/sql/hbase/SparkSqlRegionObserver.scala:121: not found: type ScannerContext
[ERROR] override def next(result: java.util.List[Cell], scannerContext: ScannerContext)= next(result)// limit: Int=>scannerContext: ScannerContext
[ERROR] ^
[ERROR] /home/liuluheng/Downloads/Spark-SQL-on-HBase-hbase_branch_1.1/src/main/scala/org/apache/spark/sql/hbase/SparkSqlRegionObserver.scala:129: not found: type ScannerContext
[ERROR] override def nextRaw(result: java.util.List[Cell], scannerContext: ScannerContext) = next(result, scannerContext) //limit: Int=>scannerContext: ScannerContext
[ERROR] ^
[WARNING] one warning found
[ERROR] two errors found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.357s
[INFO] Finished at: Mon Nov 02 18:24:50 CST 2015
[INFO] Final Memory: 28M/420M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (scala-compile-first) on project spark-sql-on-hbase: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile failed. CompileFailed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
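
The missing ScannerContext type is what you get when the compile classpath resolves to HBase 0.98.x jars, since ScannerContext only exists from HBase 1.1 onwards. A quick, hedged first check is to confirm which HBase artifacts the branch's pom actually pulls in before changing any code:

mvn dependency:tree -Dincludes=org.apache.hbase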
