
hadoop-mini-clusters's Introduction

hadoop-mini-clusters

hadoop-mini-clusters provides an easy way to test Hadoop projects directly in your IDE, without the need for a full blown development cluster or container orchestration. It allows the user to debug with the full power of the IDE. It provides a consistent API around the existing Mini Clusters across the ecosystem, eliminating the tedious task of learning the nuances of each project's approach.


Modules:

The project structure changed with 0.1.0. Each mini cluster now resides in a module of its own. See the module names below.

Modules Included:

  • hadoop-mini-clusters-hdfs - Mini HDFS Cluster
  • hadoop-mini-clusters-yarn - Mini YARN Cluster (no MR)
  • hadoop-mini-clusters-mapreduce - Mini MapReduce Cluster
  • hadoop-mini-clusters-hbase - Mini HBase Cluster
  • hadoop-mini-clusters-zookeeper - Curator based Local Cluster
  • hadoop-mini-clusters-hiveserver2 - Local HiveServer2 instance
  • hadoop-mini-clusters-hivemetastore - Derby backed HiveMetaStore
  • hadoop-mini-clusters-storm - Storm LocalCluster
  • hadoop-mini-clusters-kafka - Local Kafka Broker
  • hadoop-mini-clusters-oozie - Local Oozie Server - Thanks again Vladimir
  • hadoop-mini-clusters-mongodb - I know... not Hadoop
  • hadoop-mini-clusters-activemq - Thanks Vladimir Zlatkin!
  • hadoop-mini-clusters-hyperscaledb - For testing various databases
  • hadoop-mini-clusters-knox - Local Knox Gateway
  • hadoop-mini-clusters-kdc - Local Key Distribution Center (KDC)

Tests:

Tests are included to show how to configure and use each of the mini clusters. See the *IntegrationTest classes.

Using:

  • Maven Central - latest release
<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters</artifactId>
    <version>0.1.16</version>
</dependency>

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-common</artifactId>
    <version>0.1.16</version>
</dependency>

Profile Support:

Multiple versions of HDP are available. The current list is:

  • HDP 2.6.5.0 (default)
  • HDP 2.6.3.0
  • HDP 2.6.2.0
  • HDP 2.6.1.0
  • HDP 2.6.0.3
  • HDP 2.5.3.0
  • HDP 2.5.0.0
  • HDP 2.4.2.0
  • HDP 2.4.0.0
  • HDP 2.3.4.0
  • HDP 2.3.2.0
  • HDP 2.3.0.0

To use a different profile, add the profile name to your Maven build:

mvn test -P2.3.0.0

Note that backwards compatibility is not guaranteed.

Examples:

HDFS Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-hdfs</artifactId>
    <version>0.1.16</version>
</dependency>
HdfsLocalCluster hdfsLocalCluster = new HdfsLocalCluster.Builder()
    .setHdfsNamenodePort(12345)
    .setHdfsNamenodeHttpPort(12341)
    .setHdfsTempDir("embedded_hdfs")
    .setHdfsNumDatanodes(1)
    .setHdfsEnablePermissions(false)
    .setHdfsFormat(true)
    .setHdfsEnableRunningUserAsProxyUser(true)
    .setHdfsConfig(new Configuration())
    .build();
                
hdfsLocalCluster.start();
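
Once started, the embedded FileSystem can be exercised directly. Below is a minimal sketch of a write/read round trip, assuming the getHdfsFileSystemHandle() accessor (also used in the Oozie example further down) and a stop() counterpart to start():

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Write a file into the mini cluster and read it back
FileSystem hdfsFsHandle = hdfsLocalCluster.getHdfsFileSystemHandle();
Path path = new Path("/tmp/test_file.txt");
try (FSDataOutputStream out = hdfsFsHandle.create(path)) {
    out.writeUTF("hello hdfs");
}
try (FSDataInputStream in = hdfsFsHandle.open(path)) {
    System.out.println(in.readUTF());
}

hdfsLocalCluster.stop();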

YARN Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-yarn</artifactId>
    <version>0.1.16</version>
</dependency>
YarnLocalCluster yarnLocalCluster = new YarnLocalCluster.Builder()
    .setNumNodeManagers(1)
    .setNumLocalDirs(1)
    .setNumLogDirs(1)
    .setResourceManagerAddress("localhost:37001")
    .setResourceManagerHostname("localhost")
    .setResourceManagerSchedulerAddress("localhost:37002")
    .setResourceManagerResourceTrackerAddress("localhost:37003")
    .setResourceManagerWebappAddress("localhost:37004")
    .setUseInJvmContainerExecutor(false)
    .setConfig(new Configuration())
    .build();
   
yarnLocalCluster.start();
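
Since this module provides no MapReduce framework, a quick smoke test is to point a YarnClient at the ResourceManager and list the NodeManagers. A sketch, assuming a getConfig() accessor that returns the Configuration with the addresses above applied:

import org.apache.hadoop.yarn.client.api.YarnClient;

// List the NodeManagers registered with the mini cluster's ResourceManager
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(yarnLocalCluster.getConfig());
yarnClient.start();
System.out.println("NodeManagers: " + yarnClient.getNodeReports().size());
yarnClient.stop();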

MapReduce Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-mapreduce</artifactId>
    <version>0.1.16</version>
</dependency>
MRLocalCluster mrLocalCluster = new MRLocalCluster.Builder()
    .setNumNodeManagers(1)
    .setJobHistoryAddress("localhost:37005")
    .setResourceManagerAddress("localhost:37001")
    .setResourceManagerHostname("localhost")
    .setResourceManagerSchedulerAddress("localhost:37002")
    .setResourceManagerResourceTrackerAddress("localhost:37003")
    .setResourceManagerWebappAddress("localhost:37004")
    .setUseInJvmContainerExecutor(false)
    .setConfig(new Configuration())
    .build();

mrLocalCluster.start();
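
Jobs are then submitted with the standard MapReduce API, using the cluster's Configuration so the client finds the ResourceManager and JobHistory addresses above. A sketch, assuming a getConfig() accessor and a hypothetical MyTest driver class; the module's *IntegrationTest shows the complete flow:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Identity map-only job: copies the input through the mini cluster
Job job = Job.getInstance(mrLocalCluster.getConfig(), "smoke-test");
job.setJarByClass(MyTest.class); // MyTest is a hypothetical driver class
job.setNumReduceTasks(0);
FileInputFormat.addInputPath(job, new Path("file:///tmp/mr_input"));
FileOutputFormat.setOutputPath(job, new Path("file:///tmp/mr_output"));
boolean success = job.waitForCompletion(true);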

HBase Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-hbase</artifactId>
    <version>0.1.16</version>
</dependency>
HbaseLocalCluster hbaseLocalCluster = new HbaseLocalCluster.Builder()
    .setHbaseMasterPort(25111)
    .setHbaseMasterInfoPort(-1)
    .setNumRegionServers(1)
    .setHbaseRootDir("embedded_hbase")
    .setZookeeperPort(12345)
    .setZookeeperConnectionString("localhost:12345")
    .setZookeeperZnodeParent("/hbase-unsecure")
    .setHbaseWalReplicationEnabled(false)
    .setHbaseConfiguration(new Configuration())
    .activeRestGateway()
        .setHbaseRestHost("localhost")
        .setHbaseRestPort(28000)
        .setHbaseRestReadOnly(false)
        .setHbaseRestThreadMax(100)
        .setHbaseRestThreadMin(2)
        .build()
    .build();

hbaseLocalCluster.start();
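
Clients connect through Zookeeper as usual, with the coordinates taken from the builder values above. A table-creation sketch, assuming the HBase 1.x client API shipped with the HDP profiles:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Create a table against the mini cluster
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "localhost");
conf.set("hbase.zookeeper.property.clientPort", "12345");
conf.set("zookeeper.znode.parent", "/hbase-unsecure");
try (Connection connection = ConnectionFactory.createConnection(conf);
     Admin admin = connection.getAdmin()) {
    HTableDescriptor descriptor = new HTableDescriptor(TableName.valueOf("test_table"));
    descriptor.addFamily(new HColumnDescriptor("cf1"));
    admin.createTable(descriptor);
}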

Zookeeper Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-zookeeper</artifactId>
    <version>0.1.16</version>
</dependency>
ZookeeperLocalCluster zookeeperLocalCluster = new ZookeeperLocalCluster.Builder()
    .setPort(12345)
    .setTempDir("embedded_zookeeper")
    .setZookeeperConnectionString("localhost:12345")
    .setMaxClientCnxns(60)
    .setElectionPort(20001)
    .setQuorumPort(20002)
    .setDeleteDataDirectoryOnClose(false)
    .setServerId(1)
    .setTickTime(2000)
    .build();

zookeeperLocalCluster.start();
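
Since the module is Curator based, a CuratorFramework client is the natural way to talk to it. A sketch, assuming a getZookeeperConnectionString() accessor matching the builder setter above:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryOneTime;

// Create a znode, read it back, close the client
CuratorFramework client = CuratorFrameworkFactory.newClient(
    zookeeperLocalCluster.getZookeeperConnectionString(), new RetryOneTime(1000));
client.start();
client.create().forPath("/test_znode", "hello".getBytes());
System.out.println(new String(client.getData().forPath("/test_znode")));
client.close();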

HiveServer2 Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-hiveserver2</artifactId>
    <version>0.1.16</version>
</dependency>
HiveLocalServer2 hiveLocalServer2 = new HiveLocalServer2.Builder()
    .setHiveServer2Hostname("localhost")
    .setHiveServer2Port(12348)
    .setHiveMetastoreHostname("localhost")
    .setHiveMetastorePort(12347)
    .setHiveMetastoreDerbyDbDir("metastore_db")
    .setHiveScratchDir("hive_scratch_dir")
    .setHiveWarehouseDir("warehouse_dir")
    .setHiveConf(new HiveConf())
    .setZookeeperConnectionString("localhost:12345")
    .build();

hiveLocalServer2.start();
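
The embedded HiveServer2 speaks the ordinary Hive JDBC protocol, so the connection string follows directly from the hostname and port set above. A minimal JDBC sketch:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Issue DDL over JDBC against the embedded HiveServer2
Class.forName("org.apache.hive.jdbc.HiveDriver");
try (Connection connection = DriverManager.getConnection(
         "jdbc:hive2://localhost:12348/default", "user", "");
     Statement statement = connection.createStatement()) {
    statement.execute("CREATE TABLE IF NOT EXISTS test_table (id INT, msg STRING)");
}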

HiveMetastore Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-hivemetastore</artifactId>
    <version>0.1.16</version>
</dependency>
HiveLocalMetaStore hiveLocalMetaStore = new HiveLocalMetaStore.Builder()
    .setHiveMetastoreHostname("localhost")
    .setHiveMetastorePort(12347)
    .setHiveMetastoreDerbyDbDir("metastore_db")
    .setHiveScratchDir("hive_scratch_dir")
    .setHiveWarehouseDir("warehouse_dir")
    .setHiveConf(new HiveConf())
    .build();

hiveLocalMetaStore.start();
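
The metastore is reachable over Thrift on the port configured above, so it can be driven with HiveMetaStoreClient. A sketch:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

// Point a Thrift client at the embedded metastore and list its databases
HiveConf hiveConf = new HiveConf();
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://localhost:12347");
HiveMetaStoreClient metaStoreClient = new HiveMetaStoreClient(hiveConf);
System.out.println(metaStoreClient.getAllDatabases());
metaStoreClient.close();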

Storm Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-storm</artifactId>
    <version>0.1.16</version>
</dependency>
StormLocalCluster stormLocalCluster = new StormLocalCluster.Builder()
    .setZookeeperHost("localhost")
    .setZookeeperPort(12345)
    .setEnableDebug(true)
    .setNumWorkers(1)
    .setStormConfig(new Config())
    .build();

stormLocalCluster.start();
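
Topologies are then submitted to the wrapped LocalCluster. A sketch, in which submitTopology() and getStormConf() are assumptions about the wrapper's API; the storm module's *IntegrationTest shows the exact calls (note that pre-1.x HDP profiles use the backtype.storm package names instead of org.apache.storm):

import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.TopologyBuilder;

// Submit a one-spout topology to the local cluster
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("test_spout", new TestWordSpout());
stormLocalCluster.submitTopology("test_topology",
    stormLocalCluster.getStormConf(), builder.createTopology());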

Kafka Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-kafka</artifactId>
    <version>0.1.16</version>
</dependency>
KafkaLocalBroker kafkaLocalBroker = new KafkaLocalBroker.Builder()
    .setKafkaHostname("localhost")
    .setKafkaPort(11111)
    .setKafkaBrokerId(0)
    .setKafkaProperties(new Properties())
    .setKafkaTempDir("embedded_kafka")
    .setZookeeperConnectionString("localhost:12345")
    .build();

kafkaLocalBroker.start();
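
Producers and consumers connect with the standard Kafka clients, using the hostname and port set above as the bootstrap server. A producer sketch (this relies on automatic topic creation, which the broker enables by default):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Send a single message to the embedded broker
Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "localhost:11111");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
    producer.send(new ProducerRecord<>("test_topic", "key", "value"));
}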

Oozie Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-oozie</artifactId>
    <version>0.1.16</version>
</dependency>
OozieLocalServer oozieLocalServer = new OozieLocalServer.Builder()
    .setOozieTestDir("embedded_oozie")
    .setOozieHomeDir("oozie_home")
    .setOozieUsername(System.getProperty("user.name"))
    .setOozieGroupname("testgroup")
    .setOozieYarnResourceManagerAddress("localhost")
    .setOozieHdfsDefaultFs("hdfs://localhost:8020/")
    .setOozieConf(new Configuration())
    .setOozieHdfsShareLibDir("/tmp/oozie_share_lib")
    .setOozieShareLibCreate(Boolean.TRUE)
    .setOozieLocalShareLibCacheDir("share_lib_cache")
    .setOoziePurgeLocalShareLibCache(Boolean.FALSE)
    .setOozieShareLibFrameworks(
        Lists.newArrayList(Framework.MAPREDUCE_STREAMING, Framework.OOZIE))
    .build();

OozieShareLibUtil oozieShareLibUtil = new OozieShareLibUtil(
    oozieLocalServer.getOozieHdfsShareLibDir(),
    oozieLocalServer.getOozieShareLibCreate(), 
    oozieLocalServer.getOozieLocalShareLibCacheDir(),
    oozieLocalServer.getOoziePurgeLocalShareLibCache(), 
    hdfsLocalCluster.getHdfsFileSystemHandle(),
    oozieLocalServer.getOozieShareLibFrameworks());
oozieShareLibUtil.createShareLib();

oozieLocalServer.start();
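
With the server up, workflows are submitted through the regular Oozie client API. A sketch, assuming a getOozieClient() accessor on OozieLocalServer and a workflow application already staged in HDFS:

import java.util.Properties;
import org.apache.oozie.client.OozieClient;

// Submit a previously staged workflow application
OozieClient oozieClient = oozieLocalServer.getOozieClient();
Properties oozieConf = oozieClient.createConfiguration();
oozieConf.setProperty(OozieClient.APP_PATH, "hdfs://localhost:8020/tmp/test_app");
String jobId = oozieClient.submit(oozieConf);
System.out.println("Submitted workflow: " + jobId);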

MongoDB Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-mongodb</artifactId>
    <version>0.1.16</version>
</dependency>
MongodbLocalServer mongodbLocalServer = new MongodbLocalServer.Builder()
    .setIp("127.0.0.1")
    .setPort(11112)
    .build();

mongodbLocalServer.start();
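
Any MongoDB Java driver can then connect to the IP and port above. A sketch with the 3.x driver:

import com.mongodb.MongoClient;
import org.bson.Document;

// Insert a document into the embedded server and read it back
MongoClient mongoClient = new MongoClient("127.0.0.1", 11112);
mongoClient.getDatabase("test_db")
    .getCollection("test_collection")
    .insertOne(new Document("key", "value"));
System.out.println(mongoClient.getDatabase("test_db")
    .getCollection("test_collection").find().first());
mongoClient.close();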

ActiveMQ Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-activemq</artifactId>
    <version>0.1.16</version>
</dependency>
ActivemqLocalBroker amq = new ActivemqLocalBroker.Builder()
    .setHostName("localhost")
    .setPort(11113)
    .setQueueName("defaultQueue")
    .setStoreDir("activemq-data")
    .setUriPrefix("vm://")
    .setUriPostfix("?create=false")
    .build();

amq.start();
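
The broker is reachable over the in-VM transport assembled from the URI prefix/postfix and hostname above. A JMS sketch that sends one message to the default queue:

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// Produce a message over the vm:// transport
ActiveMQConnectionFactory factory =
    new ActiveMQConnectionFactory("vm://localhost?create=false");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination queue = session.createQueue("defaultQueue");
session.createProducer(queue).send(session.createTextMessage("hello"));
connection.close();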

HyperSQL DB Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-hyperscaledb</artifactId>
    <version>0.1.16</version>
</dependency>
HsqldbLocalServer hsqldbLocalServer = new HsqldbLocalServer.Builder()
    .setHsqldbHostName("127.0.0.1")
    .setHsqldbPort("44111")
    .setHsqldbTempDir("embedded_hsqldb")
    .setHsqldbDatabaseName("testdb")
    .setHsqldbCompatibilityMode("mysql")
    .setHsqldbJdbcDriver("org.hsqldb.jdbc.JDBCDriver")
    .setHsqldbJdbcConnectionStringPrefix("jdbc:hsqldb:hsql://")
    .build();

hsqldbLocalServer.start();
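
The server is then reachable over plain JDBC; the URL is assembled from the prefix, host, port, and database name configured above (SA with an empty password is the HSQLDB default account):

import java.sql.Connection;
import java.sql.DriverManager;

// Create a table over JDBC against the embedded server
Class.forName("org.hsqldb.jdbc.JDBCDriver");
try (Connection connection = DriverManager.getConnection(
        "jdbc:hsqldb:hsql://127.0.0.1:44111/testdb", "SA", "")) {
    connection.createStatement()
        .execute("CREATE TABLE test_table (id INT, msg VARCHAR(255))");
}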

Knox Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-knox</artifactId>
    <version>0.1.16</version>
</dependency>
KnoxLocalCluster knoxCluster = new KnoxLocalCluster.Builder()
    .setPort(8888)
    .setPath("gateway")
    .setHomeDir("embedded_knox")
    .setCluster("mycluster")
    .setTopology(XMLDoc.newDocument(true)
        .addRoot("topology")
            .addTag("gateway")
                .addTag("provider")
                    .addTag("role").addText("authentication")
                    .addTag("enabled").addText("false")
                    .gotoParent()
                .addTag("provider")
                    .addTag("role").addText("identity-assertion")
                    .addTag("enabled").addText("false")
                    .gotoParent()
                .gotoParent()
            .addTag("service")
                .addTag("role").addText("NAMENODE")
                .addTag("url").addText("hdfs://localhost:8020")
                .gotoParent()
            .addTag("service")
                .addTag("role").addText("WEBHDFS")
                .addTag("url").addText("http://localhost:50070/webhdfs")
        .gotoRoot().toString())
    .build();

knoxCluster.start();
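
Requests are then proxied at https://<host>:<port>/<path>/<cluster>/...; with the values above, WebHDFS sits behind https://localhost:8888/gateway/mycluster/webhdfs/v1/. A sketch of a gateway call (Knox serves a self-signed certificate, so a real test also has to relax TLS certificate and hostname verification):

import java.net.HttpURLConnection;
import java.net.URL;

// List the HDFS root through the gateway
URL url = new URL("https://localhost:8888/gateway/mycluster/webhdfs/v1/?op=LISTSTATUS");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
System.out.println("HTTP " + connection.getResponseCode());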

KDC Example

<dependency>
    <groupId>com.github.sakserv</groupId>
    <artifactId>hadoop-mini-clusters-kdc</artifactId>
    <version>0.1.16</version>
</dependency>
KdcLocalCluster kdcLocalCluster = new KdcLocalCluster.Builder()
        .setPort(34340)
        .setHost("127.0.0.1")
        .setBaseDir("embedded_kdc")
        .setOrgDomain("ORG")
        .setOrgName("ACME")
        .setPrincipals("hdfs,hbase,yarn,oozie,oozie_user,zookeeper,storm,mapreduce,HTTP".split(","))
        .setKrbInstance("127.0.0.1")
        .setInstance("DefaultKrbServer")
        .setTransport("TCP")
        .setMaxTicketLifetime(86400000)
        .setMaxRenewableLifetime(604800000)
        .setDebug(false)
        .build();
kdcLocalCluster.start();
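
Kerberized tests then log in from the keytabs the KDC generates. A sketch, in which getKrbPrincipalWithRealm() and getKeytabForPrincipal() are assumptions about the cluster's accessors; the integration tests referenced below show the exact API:

import org.apache.hadoop.security.UserGroupInformation;

// Log the hdfs principal in from its generated keytab
String principal = kdcLocalCluster.getKrbPrincipalWithRealm("hdfs");
String keytab = kdcLocalCluster.getKeytabForPrincipal("hdfs");
UserGroupInformation.loginUserFromKeytab(principal, keytab);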

To see how to integrate the KDC with HDFS, Zookeeper, or HBase, look at the tests under hadoop-mini-clusters-kdc/src/test/java/com/github/sakserv/minicluster/impl

Modifying Properties

To change the defaults used to construct the mini clusters, modify src/main/resources/default.properties as needed.

IntelliJ Testing

If you want to run the full test suite from IntelliJ, make sure Fork mode is set to method (Run -> Edit Configurations -> Fork mode).

InJvmContainerExecutor

YarnLocalCluster now supports Oleg Z's InJvmContainerExecutor. See Oleg Z's GitHub for more details.

hadoop-mini-clusters's People

Contributors

explicite, jetoile, jlleitschuh, llbg, sakserv, skumpf, timvw


hadoop-mini-clusters's Issues

java.lang.NoSuchMethodException

I'm getting the following exception while running hadoop-mini-clusters (HDFS, YARN, Zookeeper, HS2 and Hive MetaStore) together with Spark 2.2.0, the non-Hortonworks flavor.

java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions
at java.lang.Class.getMethod(Class.java:1786)
at org.apache.spark.sql.hive.client.Shim.findMethod(HiveShim.scala:158)
at org.apache.spark.sql.hive.client.Shim_v1_2.loadDynamicPartitionsMethod$lzycompute(HiveShim.scala:796)
at org.apache.spark.sql.hive.client.Shim_v1_2.loadDynamicPartitionsMethod(HiveShim.scala:795)
at org.apache.spark.sql.hive.client.Shim_v1_2.loadDynamicPartitions(HiveShim.scala:831)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(HiveClientImpl.scala:693)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply(HiveClientImpl.scala:691)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadDynamicPartitions$1.apply(HiveClientImpl.scala:691)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:279)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:226)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:225)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:268)
at org.apache.spark.sql.hive.client.HiveClientImpl.loadDynamicPartitions(HiveClientImpl.scala:691)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(HiveExternalCatalog.scala:823)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply(HiveExternalCatalog.scala:811)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadDynamicPartitions$1.apply(HiveExternalCatalog.scala:811)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at org.apache.spark.sql.hive.HiveExternalCatalog.loadDynamicPartitions(HiveExternalCatalog.scala:811)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:319)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:221)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:413)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:263)
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:243)
at HiveTest.testSpark(HiveTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

This in all likelihood is being caused by Spark, but I was hoping there's an easy fix. Might it be caused by a different version of hive-exec being pulled in?

ERROR ShareLibService:517 - org.apache.oozie.service.ServiceException

Hi Shane,

First of all, I wanted to say thanks a lot for sharing and working on this project. This is the project we have needed for so long.

So, I am trying to use the Oozie module for our unit testing and came across the following issue (I am using version 0.1.11):

2017-06-24 14:15:25 ERROR ShareLibService:517 - org.apache.oozie.service.ServiceException: E0104: Could not fully initialize service [org.apache.oozie.service.ShareLibService], Not able to cache sharelib. An Admin needs to install the sharelib with oozie-setup.sh and issue the 'oozie admin' CLI command to update the sharelib
org.apache.oozie.service.ServiceException: E0104: Could not fully initialize service [org.apache.oozie.service.ShareLibService], Not able to cache sharelib. An Admin needs to install the sharelib with oozie-setup.sh and issue the 'oozie admin' CLI command to update the sharelib
at org.apache.oozie.service.ShareLibService.init(ShareLibService.java:132)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.local.LocalOozie.start(LocalOozie.java:64)
at com.github.sakserv.minicluster.impl.OozieLocalServer.start(OozieLocalServer.java:255)
at uk.co.nokia.ana.deployer.JobDeployerUnitTest.setUp(JobDeployerUnitTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost/tmp/share_lib, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:372)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:570)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
at org.apache.oozie.service.ShareLibService.getLatestLibPath(ShareLibService.java:657)
at org.apache.oozie.service.ShareLibService.updateShareLib(ShareLibService.java:525)
at org.apache.oozie.service.ShareLibService.init(ShareLibService.java:122)
... 23 more

Settings used are -

oozie.test.dir=embedded_oozie
oozie.home.dir=oozie
oozie.username=oozie
oozie.groupname=oozie
oozie.hdfs.share.lib.dir=/tmp/share_lib
oozie.share.lib.create=true
oozie.local.share.lib.cache.dir=./share_lib_cache
oozie.purge.local.share.lib.cache=true

And the code to initialize the Oozie server is:

oozieLocalServer = new OozieLocalServer.Builder()
    .setOozieTestDir(propertyParser.getProperty(ConfigVars.OOZIE_TEST_DIR_KEY))
    .setOozieHomeDir(propertyParser.getProperty(ConfigVars.OOZIE_HOME_DIR_KEY))
    .setOozieUsername(System.getProperty("user.name"))
    .setOozieGroupname(propertyParser.getProperty(ConfigVars.OOZIE_GROUPNAME_KEY))
    .setOozieYarnResourceManagerAddress(propertyParser.getProperty(
        ConfigVars.YARN_RESOURCE_MANAGER_ADDRESS_KEY))
    .setOozieHdfsDefaultFs(hdfsLocalCluster.getHdfsConfig().get("fs.defaultFS"))
    .setOozieConf(hdfsLocalCluster.getHdfsConfig())
    .setOozieHdfsShareLibDir(propertyParser.getProperty(ConfigVars.OOZIE_HDFS_SHARE_LIB_DIR_KEY))
    .setOozieShareLibCreate(Boolean.parseBoolean(
        propertyParser.getProperty(ConfigVars.OOZIE_SHARE_LIB_CREATE_KEY)))
    .setOozieLocalShareLibCacheDir(propertyParser.getProperty(
        ConfigVars.OOZIE_LOCAL_SHARE_LIB_CACHE_DIR_KEY))
    .setOoziePurgeLocalShareLibCache(Boolean.parseBoolean(propertyParser.getProperty(
        ConfigVars.OOZIE_PURGE_LOCAL_SHARE_LIB_CACHE_KEY)))
    .build();
oozieLocalServer.start();

Unable to create table with HDP 3.0 version

Unable to create table with HDP 3.0 version, getting the below error:

java.lang.NoClassDefFoundError: org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge
at com.github.sakserv.minicluster.impl.HiveLocalMetaStore$StartHiveLocalMetaStore.run(HiveLocalMetaStore.java:167)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)

Can you please update the mini-cluster Hive dependency jar to the latest 3.0 version? Thank you

Knox issues after upgrading to HDP 2.6.0.3

After upgrading packages to HDP 2.6.0.3, the following exception occurs with the Knox WebHDFS test, which appears to be related to conflicting asm packages. All other modules are in good shape.

2017-04-30 07:45:17 WARN  QueuedThreadPool:610 - 
java.lang.IncompatibleClassChangeError: class org.eclipse.jetty.annotations.AnnotationParser$MyClassVisitor has interface org.objectweb.asm.ClassVisitor as super class
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.eclipse.jetty.annotations.AnnotationParser.scanClass(AnnotationParser.java:974)
	at org.eclipse.jetty.annotations.AnnotationParser.parseJarEntry(AnnotationParser.java:956)
	at org.eclipse.jetty.annotations.AnnotationParser.parseJar(AnnotationParser.java:909)
	at org.eclipse.jetty.annotations.AnnotationParser.parse(AnnotationParser.java:831)
	at org.eclipse.jetty.annotations.AnnotationConfiguration$ParserTask.call(AnnotationConfiguration.java:164)
	at org.eclipse.jetty.annotations.AnnotationConfiguration$1.run(AnnotationConfiguration.java:549)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
	at java.lang.Thread.run(Thread.java:745)
2017-04-30 07:45:17 WARN  QueuedThreadPool:610 - 
java.lang.IncompatibleClassChangeError: org/eclipse/jetty/annotations/AnnotationParser$MyClassVisitor
	at org.eclipse.jetty.annotations.AnnotationParser.scanClass(AnnotationParser.java:974)
	at org.eclipse.jetty.annotations.AnnotationParser.parseJarEntry(AnnotationParser.java:956)
	at org.eclipse.jetty.annotations.AnnotationParser.parseJar(AnnotationParser.java:909)
	at org.eclipse.jetty.annotations.AnnotationParser.parse(AnnotationParser.java:831)
	at org.eclipse.jetty.annotations.AnnotationConfiguration$ParserTask.call(AnnotationConfiguration.java:164)
	at org.eclipse.jetty.annotations.AnnotationConfiguration$1.run(AnnotationConfiguration.java:549)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
	at java.lang.Thread.run(Thread.java:745)
(the same stack trace is logged three more times)
2017-04-30 07:45:17 WARN  QueuedThreadPool:617 - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$3@34ab7db7 in qtp733357076{STARTED,8<=8<=254,i=3,q=0}
(the same warning is logged four more times)

HBase REST info server is not stopped

I'm using the HBase local cluster (same code as the examples). When I try to stop the HBase cluster and then start it again, I get an exception saying java.net.BindException: Port in use: localhost:8085. So it looks like the InfoServer started in HbaseRestLocalCluster is not stopped when HbaseRestLocalCluster.stop() is called.

Additional note: HbaseLocalCluster.getHbaseRestLocalCluster() is public even though HbaseRestLocalCluster is package private.

Error when using Kafka and Zookeeper

Hello,

I have setup some unit test using the mini-clusters including Zookeeper and Kafka.
The process of launching the clusters, creating a topic and partition, and sending a message completes without error. The instantiation of consumers does not seem to raise any problem either.
But when I try to poll messages from the Kafka partition, I get a first stack trace that causes the broker to shut down ...

2016-08-02 05:38:14 ERROR KafkaApis:103 - [KafkaApi-0] error when handling request null
java.lang.ClassCastException: org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata cannot be cast to org.apache.kafka.common.requests.JoinGroupRequest$GroupProtocol
	at kafka.server.KafkaApis$$anonfun$37.apply(KafkaApis.scala:788)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at kafka.server.KafkaApis.handleJoinGroupRequest(KafkaApis.scala:788)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
	at java.lang.Thread.run(Thread.java:745)

... and a second one showing a message that could not be sent because the broker is not up:

2016-08-02 05:38:15 INFO TestSendingReceiving:75 - /home/sam/IdeaProjects/affinytix-stream-kafka Unexpected error in join group response: The server experienced an unexpected error when processing the request org.apache.kafka.common.KafkaException: Unexpected error in join group response: The server experienced an unexpected error when processing the request at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:376) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:324) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644) at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167) at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133) at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311) at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) at com.affinytix.stream.kafka.consumer.partition.BasicPartitionConsumer.consumer(BasicPartitionConsumer.java:32) at com.affinytix.stream.kafka.consumer.receiver.MaxConsumerReceiver.process(MaxConsumerReceiver.java:52) at TestSendingReceiving$$anonfun$2.apply$mcV$sp(TestSendingReceiving.scala:118) at TestSendingReceiving$$anonfun$2.apply(TestSendingReceiving.scala:82) at TestSendingReceiving$$anonfun$2.apply(TestSendingReceiving.scala:82) at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22) at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85) at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104) at org.scalatest.Transformer.apply(Transformer.scala:22) at org.scalatest.Transformer.apply(Transformer.scala:20) at org.scalatest.FlatSpecLike$$anon$1.apply(FlatSpecLike.scala:1647) at org.scalatest.Suite$class.withFixture(Suite.scala:1122) at TestSendingReceiving.withFixture(TestSendingReceiving.scala:59) at org.scalatest.FlatSpecLike$class.invokeWithFixture$1(FlatSpecLike.scala:1644) at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656) at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656) at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306) at org.scalatest.FlatSpecLike$class.runTest(FlatSpecLike.scala:1656) at 
TestSendingReceiving.org$scalatest$BeforeAndAfter$$super$runTest(TestSendingReceiving.scala:17) at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200) at TestSendingReceiving.runTest(TestSendingReceiving.scala:17) at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714) at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714) at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413) at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401) at scala.collection.immutable.List.foreach(List.scala:318) at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401) at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:390) at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:427) at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401) at scala.collection.immutable.List.foreach(List.scala:318) at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401) at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396) at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483) at org.scalatest.FlatSpecLike$class.runTests(FlatSpecLike.scala:1714) at org.scalatest.FlatSpec.runTests(FlatSpec.scala:1683) at org.scalatest.Suite$class.run(Suite.scala:1424) at org.scalatest.FlatSpec.org$scalatest$FlatSpecLike$$super$run(FlatSpec.scala:1683) at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760) at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760) at org.scalatest.SuperEngine.runImpl(Engine.scala:545) at org.scalatest.FlatSpecLike$class.run(FlatSpecLike.scala:1760) at TestSendingReceiving.org$scalatest$BeforeAndAfter$$super$run(TestSendingReceiving.scala:17) at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241) at TestSendingReceiving.run(TestSendingReceiving.scala:17) at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55) at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563) at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557) at scala.collection.immutable.List.foreach(List.scala:318) at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557) at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044) at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043) at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722) at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043) at org.scalatest.tools.Runner$.run(Runner.scala:883) at org.scalatest.tools.Runner.run(Runner.scala) at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:138) at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:28) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

Any idea to help?

Many thanks

The whole test run output is here:

/usr/lib/jvm/java-8-oracle/bin/java -Didea.launcher.port=7532 -Didea.launcher.bin.path=/opt/idea-IC-162.1121.32/bin -Dfile.encoding=UTF-8 -classpath /home/sam/.IdeaIC2016.2/config/plugins/Scala/lib/scala-plugin-runners.jar:/usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-8-oracle/jre/lib/deploy.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/cldrdata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jaccess.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jfxrt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/nashorn.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-8-oracle/jre/lib/javaws.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfxswt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-8-oracle/jre/lib/management-agent.jar:/usr/lib/jvm/java-8-oracle/jre/lib/plugin.jar:/usr/lib/jvm/java-8-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar:/home/sam/IdeaProjects/affinytix-stream-kafka/target/scala-2.10/test-classes:/home/sam/IdeaProjects/affinytix-stream-kafka/target/scala-2.10/classes:/home/sam/.sbt/boot/scala-2.10.6/lib/scala-library.jar:/home/sam/.ivy2/cache/xml-apis/xml-apis/jars/xml-apis-1.3.04.jar:/home/sam/.ivy2/cache/xmlenc/xmlenc/jars/xmlenc-0.52.jar:/home/sam/.ivy2/cache/xerces/xercesImpl/jars/xercesImpl-2.9.1.jar:/home/sam/.ivy2/cache/org.sonatype.sisu.inject/cglib/jars/cglib-2.2.1-v20090111.jar:/home/sam/.ivy2/cache/org.fusesource.leveldbjni/leveldbjni-all/bundles/leveldbjni-all-1.8.jar:/home/sam/.ivy2/cache/org.codehaus.jettison/jettison/bundles/jettison-1.1.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-xc/jars/jackson-xc-1.9.13.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-jaxrs/jars/jackson-jaxrs-1.9.13.jar:/home/sam/.ivy2/cache/org.apache.zookeeper/zookeeper/jars/zookeeper-3.4.6.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.httpcomponents/httpcore/jars/httpcore-4.2.4.jar:/home/sam/.ivy2/cache/org.apache.httpcomponents/httpclient/jars/httpclient-4.2.5.jar:/home/sam/.ivy2/cache/org.apache.htrace/htrace-core/jars/htrace-core-3.1.0-incubating.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-server-common/jars/hadoop-yarn-server-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-registry/jars/hadoop-yarn-registry-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-common/jars/hadoop-yarn-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-client/jars/hadoop-yarn-client-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-api/jars/hadoop-yarn-api-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-shuffle/jars/hadoop-mapreduce-client-shuffle-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-jobclient/jars/hadoop-mapreduce-client-jobclient-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-core/jars/hadoop-mapreduce-client-core-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-common/jars/hadoop-mapreduce-client-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-app/jars/hadoop-mapreduc
e-client-app-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/jars/hadoop-hdfs-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-common/jars/hadoop-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-client/jars/hadoop-client-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-auth/jars/hadoop-auth-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-annotations/jars/hadoop-annotations-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.directory.server/apacheds-kerberos-codec/bundles/apacheds-kerberos-codec-2.0.0-M15.jar:/home/sam/.ivy2/cache/org.apache.directory.server/apacheds-i18n/bundles/apacheds-i18n-2.0.0-M15.jar:/home/sam/.ivy2/cache/org.apache.directory.api/api-util/bundles/api-util-1.0.0-M20.jar:/home/sam/.ivy2/cache/org.apache.directory.api/api-asn1-api/bundles/api-asn1-api-1.0.0-M20.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-recipes/bundles/curator-recipes-2.7.1.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-framework/bundles/curator-framework-2.7.1.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-client/bundles/curator-client-2.7.1.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-math3/jars/commons-math3-3.1.1.jar:/home/sam/.ivy2/cache/log4j/log4j/bundles/log4j-1.2.17.jar:/home/sam/.ivy2/cache/junit/junit/jars/junit-3.8.1.jar:/home/sam/.ivy2/cache/javax.xml.stream/stax-api/jars/stax-api-1.0-2.jar:/home/sam/.ivy2/cache/javax.xml.bind/jaxb-api/jars/jaxb-api-2.2.2.jar:/home/sam/.ivy2/cache/javax.servlet.jsp/jsp-api/jars/jsp-api-2.1.jar:/home/sam/.ivy2/cache/javax.servlet/servlet-api/jars/servlet-api-2.5.jar:/home/sam/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:/home/sam/.ivy2/cache/javax.activation/activation/jars/activation-1.1.jar:/home/sam/.ivy2/cache/io.netty/netty-all/jars/netty-all-4.0.23.Final.jar:/home/sam/.ivy2/cache/io.netty/netty/bundles/netty-3.7.0.Final.jar:/home/sam/.ivy2/cache/commons-net/commons-net/jars/commons-net-3.1.jar:/home/sam/.ivy2/cache/commons-logging/commons-logging/jars/commons-logging-1.1.3.jar:/home/sam/.ivy2/cache/commons-io/commons-io/jars/commons-io-2.4.jar:/home/sam/.ivy2/cache/commons-digester/commons-digester/jars/commons-digester-1.8.jar:/home/sam/.ivy2/cache/commons-configuration/commons-configuration/jars/commons-configuration-1.6.jar:/home/sam/.ivy2/cache/commons-collections/commons-collections/jars/commons-collections-3.2.2.jar:/home/sam/.ivy2/cache/commons-beanutils/commons-beanutils-core/jars/commons-beanutils-core-1.8.0.jar:/home/sam/.ivy2/cache/commons-beanutils/commons-beanutils/jars/commons-beanutils-1.7.0.jar:/home/sam/.ivy2/cache/com.sun.xml.bind/jaxb-impl/jars/jaxb-impl-2.2.3-1.jar:/home/sam/.ivy2/cache/com.sun.jersey.contribs/jersey-guice/jars/jersey-guice-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-server/bundles/jersey-server-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-json/bundles/jersey-json-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-core/bundles/jersey-core-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-client/bundles/jersey-client-1.9.jar:/home/sam/.ivy2/cache/com.squareup.okio/okio/jars/okio-1.4.0.jar:/home/sam/.ivy2/cache/com.squareup.okhttp/okhttp/jars/okhttp-2.4.0.jar:/home/sam/.ivy2/cache/com.microsoft.windowsazure.storage/microsoft-windowsazure-storage-sdk/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:/home/sam/.ivy2/cache/com.jamesmurty.utils/java-xmlbuilder/jars/java-xmlbuilder-0.4.jar:/home/sam/.ivy2/cache/com.google.protobuf/protobuf-java
/bundles/protobuf-java-2.5.0.jar:/home/sam/.ivy2/cache/com.google.inject/guice/jars/guice-3.0.jar:/home/sam/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.2.4.jar:/home/sam/.ivy2/cache/com.google.code.findbugs/jsr305/jars/jsr305-3.0.0.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-common/jars/hadoop-mini-clusters-common-0.1.7.jar:/home/sam/.ivy2/cache/com.fasterxml.jackson.core/jackson-core/jars/jackson-core-2.2.3.jar:/home/sam/.ivy2/cache/asm/asm/jars/asm-3.1.jar:/home/sam/.ivy2/cache/aopalliance/aopalliance/jars/aopalliance-1.0.jar:/home/sam/.ivy2/cache/com.affinytix.model/affinytix-model-msg_2.10/jars/affinytix-model-msg_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.google.guava/guava/bundles/guava-19.0.jar:/home/sam/.ivy2/cache/net.jpountz.lz4/lz4/jars/lz4-1.2.0.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-0.9.0.1.jar:/home/sam/.sbt/boot/scala-2.10.6/lib/scala-reflect.jar:/home/sam/.ivy2/cache/org.scalatest/scalatest_2.10/bundles/scalatest_2.10-2.2.6.jar:/home/sam/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.1.7.jar:/home/sam/.ivy2/cache/com.typesafe/config/bundles/config-1.3.0.jar:/home/sam/.ivy2/cache/com.affinytix.exception/affinytix-exception_2.10/jars/affinytix-exception_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.affinytix.util/affinytix-util_2.10/jars/affinytix-util_2.10-1.1.0.jar:/home/sam/.ivy2/cache/com.affinytix.stream/affinytix-stream-serializer_2.10/jars/affinytix-stream-serializer_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.github.stephenc.findbugs/findbugs-annotations/jars/findbugs-annotations-1.3.9-1.jar:/home/sam/.ivy2/cache/com.thoughtworks.paranamer/paranamer/bundles/paranamer-2.7.jar:/home/sam/.ivy2/cache/commons-cli/commons-cli/jars/commons-cli-1.2.jar:/home/sam/.ivy2/cache/commons-codec/commons-codec/jars/commons-codec-1.9.jar:/home/sam/.ivy2/cache/commons-collections/commons-collections/jars/commons-collections-3.2.1.jar:/home/sam/.ivy2/cache/commons-httpclient/commons-httpclient/jars/commons-httpclient-3.1.jar:/home/sam/.ivy2/cache/commons-lang/commons-lang/jars/commons-lang-2.6.jar:/home/sam/.ivy2/cache/commons-logging/commons-logging/jars/commons-logging-1.1.1.jar:/home/sam/.ivy2/cache/io.netty/netty/bundles/netty-3.5.13.Final.jar:/home/sam/.ivy2/cache/joda-time/joda-time/jars/joda-time-2.7.jar:/home/sam/.ivy2/cache/net.sf.jopt-simple/jopt-simple/jars/jopt-simple-4.7.jar:/home/sam/.ivy2/cache/org.apache.avro/avro/jars/avro-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-compiler/bundles/avro-compiler-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-ipc/jars/avro-ipc-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-mapred/jars/avro-mapred-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-tools/jars/avro-tools-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-avro/jars/trevni-avro-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-avro/jars/trevni-avro-1.8.1-tests.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-core/jars/trevni-core-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-core/jars/trevni-core-1.8.1-tests.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-compress/jars/commons-compress-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.velocity/velocity/jars/velocity-1.7.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-core-asl/jars/jackson-core-asl-1.9.13.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-mapper-asl/jars/jackson-mapper-asl-1.9.13.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/jetty/jars/jetty-6.1.26.jar:/home/sam/.ivy2/cache/org.mortbay.jetty
/jetty-util/jars/jetty-util-6.1.26.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/servlet-api/jars/servlet-api-2.5-20081211.jar:/home/sam/.ivy2/cache/org.tukaani/xz/jars/xz-1.5.jar:/home/sam/.ivy2/cache/com.101tec/zkclient/jars/zkclient-0.7.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-kafka/jars/hadoop-mini-clusters-kafka-0.1.7.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-zookeeper/jars/hadoop-mini-clusters-zookeeper-0.1.7.jar:/home/sam/.ivy2/cache/com.yammer.metrics/metrics-core/jars/metrics-core-2.2.0.jar:/home/sam/.ivy2/cache/net.sf.jopt-simple/jopt-simple/jars/jopt-simple-4.9.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-math/jars/commons-math-2.2.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-test/jars/curator-test-2.5.0.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-0.9.0.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka_2.10/jars/kafka_2.10-0.9.0.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.javassist/javassist/bundles/javassist-3.18.1-GA.jar:/home/sam/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.6.jar:/home/sam/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.2.jar:/opt/idea-IC-162.1121.32/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner -s TestSendingReceiving -testName "A receiver should be able to consume on on topic with one partition" -showProgressMessages true -C org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestReporter
Testing started at 5:38 AM ...
2016-08-02 05:38:05 INFO ZookeeperLocalCluster:174 - ZOOKEEPER: Starting Zookeeper on port: 23456
2016-08-02 05:38:06 INFO ZooKeeperServerMain:95 - Starting server
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:zookeeper.version=3.4.6-258--1, built on 04/25/2016 05:22 GMT
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:host.name=sam-dell
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.version=1.8.0_101
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.vendor=Oracle Corporation
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.home=/usr/lib/jvm/java-8-oracle/jre
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.class.path=(same classpath as the launch command above)
nject/guice/jars/guice-3.0.jar:/home/sam/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.2.4.jar:/home/sam/.ivy2/cache/com.google.code.findbugs/jsr305/jars/jsr305-3.0.0.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-common/jars/hadoop-mini-clusters-common-0.1.7.jar:/home/sam/.ivy2/cache/com.fasterxml.jackson.core/jackson-core/jars/jackson-core-2.2.3.jar:/home/sam/.ivy2/cache/asm/asm/jars/asm-3.1.jar:/home/sam/.ivy2/cache/aopalliance/aopalliance/jars/aopalliance-1.0.jar:/home/sam/.ivy2/cache/com.affinytix.model/affinytix-model-msg_2.10/jars/affinytix-model-msg_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.google.guava/guava/bundles/guava-19.0.jar:/home/sam/.ivy2/cache/net.jpountz.lz4/lz4/jars/lz4-1.2.0.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-0.9.0.1.jar:/home/sam/.sbt/boot/scala-2.10.6/lib/scala-reflect.jar:/home/sam/.ivy2/cache/org.scalatest/scalatest_2.10/bundles/scalatest_2.10-2.2.6.jar:/home/sam/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.1.7.jar:/home/sam/.ivy2/cache/com.typesafe/config/bundles/config-1.3.0.jar:/home/sam/.ivy2/cache/com.affinytix.exception/affinytix-exception_2.10/jars/affinytix-exception_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.affinytix.util/affinytix-util_2.10/jars/affinytix-util_2.10-1.1.0.jar:/home/sam/.ivy2/cache/com.affinytix.stream/affinytix-stream-serializer_2.10/jars/affinytix-stream-serializer_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.github.stephenc.findbugs/findbugs-annotations/jars/findbugs-annotations-1.3.9-1.jar:/home/sam/.ivy2/cache/com.thoughtworks.paranamer/paranamer/bundles/paranamer-2.7.jar:/home/sam/.ivy2/cache/commons-cli/commons-cli/jars/commons-cli-1.2.jar:/home/sam/.ivy2/cache/commons-codec/commons-codec/jars/commons-codec-1.9.jar:/home/sam/.ivy2/cache/commons-collections/commons-collections/jars/commons-collections-3.2.1.jar:/home/sam/.ivy2/cache/commons-httpclient/commons-httpclient/jars/commons-httpclient-3.1.jar:/home/sam/.ivy2/cache/commons-lang/commons-lang/jars/commons-lang-2.6.jar:/home/sam/.ivy2/cache/commons-logging/commons-logging/jars/commons-logging-1.1.1.jar:/home/sam/.ivy2/cache/io.netty/netty/bundles/netty-3.5.13.Final.jar:/home/sam/.ivy2/cache/joda-time/joda-time/jars/joda-time-2.7.jar:/home/sam/.ivy2/cache/net.sf.jopt-simple/jopt-simple/jars/jopt-simple-4.7.jar:/home/sam/.ivy2/cache/org.apache.avro/avro/jars/avro-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-compiler/bundles/avro-compiler-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-ipc/jars/avro-ipc-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-mapred/jars/avro-mapred-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-tools/jars/avro-tools-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-avro/jars/trevni-avro-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-avro/jars/trevni-avro-1.8.1-tests.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-core/jars/trevni-core-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-core/jars/trevni-core-1.8.1-tests.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-compress/jars/commons-compress-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.velocity/velocity/jars/velocity-1.7.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-core-asl/jars/jackson-core-asl-1.9.13.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-mapper-asl/jars/jackson-mapper-asl-1.9.13.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/jetty/jars/jetty-6.1.26.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/jetty-util/jars/jetty-util-6.1.26.jar:/home/sam/.ivy2/cache/org.mo
rtbay.jetty/servlet-api/jars/servlet-api-2.5-20081211.jar:/home/sam/.ivy2/cache/org.tukaani/xz/jars/xz-1.5.jar:/home/sam/.ivy2/cache/com.101tec/zkclient/jars/zkclient-0.7.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-kafka/jars/hadoop-mini-clusters-kafka-0.1.7.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-zookeeper/jars/hadoop-mini-clusters-zookeeper-0.1.7.jar:/home/sam/.ivy2/cache/com.yammer.metrics/metrics-core/jars/metrics-core-2.2.0.jar:/home/sam/.ivy2/cache/net.sf.jopt-simple/jopt-simple/jars/jopt-simple-4.9.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-math/jars/commons-math-2.2.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-test/jars/curator-test-2.5.0.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-0.9.0.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka_2.10/jars/kafka_2.10-0.9.0.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.javassist/javassist/bundles/javassist-3.18.1-GA.jar:/home/sam/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.6.jar:/home/sam/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.2.jar:/opt/idea-IC-162.1121.32/lib/idea_rt.jar
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.library.path=/opt/idea-IC-162.1121.32/bin::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.io.tmpdir=/tmp
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:java.compiler=<NA>
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:os.name=Linux
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:os.arch=amd64
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:os.version=4.4.0-31-generic
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:user.name=sam
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:user.home=/home/sam
2016-08-02 05:38:06 INFO ZooKeeperServer:100 - Server environment:user.dir=/home/sam/IdeaProjects/affinytix-stream-kafka
2016-08-02 05:38:06 INFO ZooKeeperServer:755 - tickTime set to 2000
2016-08-02 05:38:06 INFO ZooKeeperServer:764 - minSessionTimeout set to -1
2016-08-02 05:38:06 INFO ZooKeeperServer:773 - maxSessionTimeout set to -1
2016-08-02 05:38:06 INFO NIOServerCnxnFactory:94 - binding to port 0.0.0.0/0.0.0.0:23456
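At this point the embedded ZooKeeper from hadoop-mini-clusters-zookeeper is up on port 23456. A minimal sketch of how such an instance is built with that module's builder (the port matches the log above; the temp dir name is an assumption):

ZookeeperLocalCluster zookeeperLocalCluster = new ZookeeperLocalCluster.Builder()
    .setPort(23456)
    .setTempDir("embedded_zookeeper")                 // assumed temp dir name
    .setZookeeperConnectionString("localhost:23456")
    .build();

zookeeperLocalCluster.start();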
2016-08-02 05:38:07 INFO KafkaLocalBroker:158 - KAFKA: Starting Kafka on port: 11111
2016-08-02 05:38:07 INFO KafkaConfig:165 - KafkaConfig values:
advertised.host.name = localhost
metric.reporters = []
quota.producer.default = 9223372036854775807
offsets.topic.num.partitions = 50
log.flush.interval.messages = 9223372036854775807
auto.create.topics.enable = true
controller.socket.timeout.ms = 30000
log.flush.interval.ms = null
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
replica.socket.receive.buffer.bytes = 65536
min.insync.replicas = 1
replica.fetch.wait.max.ms = 500
num.recovery.threads.per.data.dir = 1
ssl.keystore.type = JKS
default.replication.factor = 1
ssl.truststore.password = null
log.preallocate = false
sasl.kerberos.principal.to.local.rules = [DEFAULT]
fetch.purgatory.purge.interval.requests = 1000
ssl.endpoint.identification.algorithm = null
replica.socket.timeout.ms = 30000
message.max.bytes = 1000012
num.io.threads = 8
offsets.commit.required.acks = -1
log.flush.offset.checkpoint.interval.ms = 60000
delete.topic.enable = false
quota.window.size.seconds = 1
ssl.truststore.type = JKS
offsets.commit.timeout.ms = 5000
quota.window.num = 11
zookeeper.connect = localhost:23456
authorizer.class.name =
num.replica.fetchers = 1
log.retention.ms = null
log.roll.jitter.hours = 0
log.cleaner.enable = true
offsets.load.buffer.size = 5242880
log.cleaner.delete.retention.ms = 86400000
ssl.client.auth = none
controlled.shutdown.max.retries = 3
queued.max.requests = 500
offsets.topic.replication.factor = 3
log.cleaner.threads = 1
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
socket.request.max.bytes = 104857600
ssl.trustmanager.algorithm = PKIX
zookeeper.session.timeout.ms = 6000
log.retention.bytes = -1
sasl.kerberos.min.time.before.relogin = 60000
zookeeper.set.acl = false
connections.max.idle.ms = 600000
offsets.retention.minutes = 1440
replica.fetch.backoff.ms = 1000
inter.broker.protocol.version = 0.9.0.X
log.retention.hours = 168
num.partitions = 1
broker.id.generation.enable = true
listeners = null
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
log.roll.ms = null
log.flush.scheduler.interval.ms = 9223372036854775807
ssl.cipher.suites = null
log.index.size.max.bytes = 10485760
ssl.keymanager.algorithm = SunX509
security.inter.broker.protocol = PLAINTEXT
replica.fetch.max.bytes = 1048576
advertised.port = null
log.cleaner.dedupe.buffer.size = 134217728
replica.high.watermark.checkpoint.interval.ms = 5000
log.cleaner.io.buffer.size = 524288
sasl.kerberos.ticket.renew.window.factor = 0.8
zookeeper.connection.timeout.ms = null
controlled.shutdown.retry.backoff.ms = 5000
log.roll.hours = 168
log.cleanup.policy = delete
host.name =
log.roll.jitter.ms = null
max.connections.per.ip = 2147483647
offsets.topic.segment.bytes = 104857600
background.threads = 10
quota.consumer.default = 9223372036854775807
request.timeout.ms = 30000
log.index.interval.bytes = 4096
log.dir = embedded_kafka
log.segment.bytes = 1073741824
log.cleaner.backoff.ms = 15000
offset.metadata.max.bytes = 4096
ssl.truststore.location = null
group.max.session.timeout.ms = 30000
ssl.keystore.password = null
zookeeper.sync.time.ms = 2000
port = 11111
log.retention.minutes = null
log.segment.delete.delay.ms = 60000
log.dirs = null
controlled.shutdown.enable = true
compression.type = producer
max.connections.per.ip.overrides =
sasl.kerberos.kinit.cmd = /usr/bin/kinit
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
auto.leader.rebalance.enable = true
leader.imbalance.check.interval.seconds = 300
log.cleaner.min.cleanable.ratio = 0.5
replica.lag.time.max.ms = 10000
num.network.threads = 3
ssl.key.password = null
reserved.broker.max.id = 1000
metrics.num.samples = 2
socket.send.buffer.bytes = 102400
ssl.protocol = TLS
socket.receive.buffer.bytes = 102400
ssl.keystore.location = null
replica.fetch.min.bytes = 1
unclean.leader.election.enable = true
group.min.session.timeout.ms = 6000
log.cleaner.io.buffer.load.factor = 0.9
offsets.retention.check.interval.ms = 600000
producer.purgatory.purge.interval.requests = 1000
metrics.sample.window.ms = 30000
broker.id = 0
offsets.topic.compression.codec = 0
log.retention.check.interval.ms = 300000
advertised.listeners = null
leader.imbalance.per.broker.percentage = 10

2016-08-02 05:38:07 INFO KafkaServer:68 - starting
2016-08-02 05:38:07 INFO KafkaServer:68 - Connecting to zookeeper on localhost:23456
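The KafkaConfig dump above (broker.id = 0, port = 11111, log.dir = embedded_kafka, zookeeper.connect = localhost:23456) maps onto the hadoop-mini-clusters-kafka builder. A minimal sketch with the same values; passing an empty Properties is an assumption for a broker that needs no extra overrides:

import java.util.Properties;

KafkaLocalBroker kafkaLocalBroker = new KafkaLocalBroker.Builder()
    .setKafkaHostname("localhost")
    .setKafkaPort(11111)
    .setKafkaBrokerId(0)
    .setKafkaProperties(new Properties())             // no extra broker overrides
    .setKafkaTempDir("embedded_kafka")
    .setZookeeperConnectionString("localhost:23456")
    .build();

kafkaLocalBroker.start();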
2016-08-02 05:38:07 INFO ZkEventThread:64 - Starting ZkClient event thread.
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:zookeeper.version=3.4.6-258--1, built on 04/25/2016 05:22 GMT
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:host.name=sam-dell
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.version=1.8.0_101
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.vendor=Oracle Corporation
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.class.path=... (identical to the server classpath logged above; elided)
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.library.path=/opt/idea-IC-162.1121.32/bin::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.io.tmpdir=/tmp
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:java.compiler=<NA>
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:os.name=Linux
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:os.arch=amd64
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:os.version=4.4.0-31-generic
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:user.name=sam
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:user.home=/home/sam
2016-08-02 05:38:07 INFO ZooKeeper:100 - Client environment:user.dir=/home/sam/IdeaProjects/affinytix-stream-kafka
2016-08-02 05:38:07 INFO ZooKeeper:438 - Initiating client connection, connectString=localhost:23456 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@710c2b53
2016-08-02 05:38:07 INFO ZkClient:934 - Waiting for keeper state SyncConnected
2016-08-02 05:38:07 INFO ClientCnxn:1019 - Opening socket connection to server localhost/127.0.0.1:23456. Will not attempt to authenticate using SASL (unknown error)
2016-08-02 05:38:07 INFO ClientCnxn:864 - Socket connection established to localhost/127.0.0.1:23456, initiating session
2016-08-02 05:38:07 INFO NIOServerCnxnFactory:197 - Accepted socket connection from /127.0.0.1:37628
2016-08-02 05:38:07 INFO ZooKeeperServer:868 - Client attempting to establish new session at /127.0.0.1:37628
2016-08-02 05:38:07 INFO FileTxnLog:199 - Creating new log file: log.1
2016-08-02 05:38:07 INFO ClientCnxn:1279 - Session establishment complete on server localhost/127.0.0.1:23456, sessionid = 0x156491d83840000, negotiated timeout = 6000
2016-08-02 05:38:07 INFO ZooKeeperServer:617 - Established session 0x156491d83840000 with negotiated timeout 6000 for client /127.0.0.1:37628
2016-08-02 05:38:07 INFO ZkClient:711 - zookeeper state changed (SyncConnected)
2016-08-02 05:38:07 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x5 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
2016-08-02 05:38:07 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xb zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
2016-08-02 05:38:08 INFO LogManager:68 - Log directory '/home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka' not found, creating it.
2016-08-02 05:38:08 INFO LogManager:68 - Loading logs.
2016-08-02 05:38:08 INFO LogManager:68 - Logs loading complete.
2016-08-02 05:38:08 INFO LogManager:68 - Starting log cleanup with a period of 300000 ms.
2016-08-02 05:38:08 INFO LogManager:68 - Starting log flusher with a default period of 9223372036854775807 ms.
2016-08-02 05:38:08 INFO LogCleaner:68 - Starting the log cleaner
2016-08-02 05:38:08 INFO LogCleaner:68 - [kafka-log-cleaner-thread-0], Starting
2016-08-02 05:38:08 WARN BrokerMetadataCheckpoint:83 - No meta.properties file under dir /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/meta.properties
2016-08-02 05:38:08 INFO Acceptor:68 - Awaiting socket connections on 0.0.0.0:11111.
2016-08-02 05:38:08 INFO SocketServer:68 - [Socket Server on Broker 0], Started 1 acceptor threads
2016-08-02 05:38:08 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Starting
2016-08-02 05:38:08 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Starting
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Controller starting up
2016-08-02 05:38:08 INFO ZKCheckedEphemeral:68 - Creating /controller (is it secure? false)
2016-08-02 05:38:08 INFO ZKCheckedEphemeral:68 - Result of znode creation is: OK
2016-08-02 05:38:08 INFO ZookeeperLeaderElector:68 - 0 successfully elected as leader
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Broker 0 starting become controller state transition
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:setData cxid:0x21 zxid:0x12 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Controller 0 incremented epoch to 1
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Partitions undergoing preferred replica election:
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Partitions that completed preferred replica election:
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Resuming preferred replica election for partitions:
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Partitions being reassigned: Map()
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Partitions already reassigned: List()
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Resuming reassignment of partitions: Map()
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: List of topics to be deleted:
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: List of topics ineligible for deletion:
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Currently active brokers in the cluster: Set()
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Currently shutting brokers in the cluster: Set()
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Current list of topics in the cluster: Set()
2016-08-02 05:38:08 INFO ReplicaStateMachine:68 - [Replica state machine on controller 0]: Started replica state machine with initial state -> Map()
2016-08-02 05:38:08 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Started partition state machine with initial state -> Map()
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Broker 0 is ready to serve as the new controller with epoch 1
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Starting preferred replica leader election for partitions
2016-08-02 05:38:08 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:delete cxid:0x30 zxid:0x14 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: starting the partition rebalance scheduler
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: Controller startup complete
2016-08-02 05:38:08 INFO GroupCoordinator:68 - [GroupCoordinator 0]: Starting up.
2016-08-02 05:38:08 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Starting
2016-08-02 05:38:08 INFO GroupCoordinator:68 - [GroupCoordinator 0]: Startup complete.
2016-08-02 05:38:08 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 45 milliseconds.
2016-08-02 05:38:08 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Starting
2016-08-02 05:38:08 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Produce], Starting
2016-08-02 05:38:08 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Fetch], Starting
2016-08-02 05:38:08 INFO Mx4jLoader$:68 - Will not load MX4J, mx4j-tools.jar is not in the classpath
2016-08-02 05:38:08 INFO ZKCheckedEphemeral:68 - Creating /brokers/ids/0 (is it secure? false)
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x37 zxid:0x15 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x38 zxid:0x16 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
2016-08-02 05:38:08 INFO ZookeeperLeaderElector$LeaderChangeListener:68 - New leader is 0
2016-08-02 05:38:08 INFO ZKCheckedEphemeral:68 - Result of znode creation is: OK
2016-08-02 05:38:08 INFO ZkUtils:68 - Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(localhost,11111,PLAINTEXT)
2016-08-02 05:38:08 INFO ReplicaStateMachine$BrokerChangeListener:68 - [BrokerChangeListener on Controller 0]: Broker change listener fired for path /brokers/ids with children 0
2016-08-02 05:38:08 INFO AppInfoParser:82 - Kafka version : 0.9.0.1
2016-08-02 05:38:08 INFO AppInfoParser:83 - Kafka commitId : 23c69d62a0cabf06
2016-08-02 05:38:08 INFO KafkaServer:68 - [Kafka Server 0], started
2016-08-02 05:38:08 INFO ReplicaStateMachine$BrokerChangeListener:68 - [BrokerChangeListener on Controller 0]: Newly added brokers: 0, deleted brokers: , all live brokers: 0
2016-08-02 05:38:08 INFO RequestSendThread:68 - [kafka-mini-cluster:Controller-0-to-broker-0-send-thread], Starting
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: New broker startup callback for 0
2016-08-02 05:38:08 INFO RequestSendThread:68 - [kafka-mini-cluster:Controller-0-to-broker-0-send-thread], Controller 0 connected to Node(0, localhost, 11111) for sending state change requests
2016-08-02 05:38:08 INFO ConfigLoader:99 - Load conf at : kafka-producer-test-string.json
2016-08-02 05:38:08 INFO ConfigLoader:100 - Config(SimpleConfigObject({...JVM/OS system properties elided; the java.class.path entry repeats the classpath logged above...,"kafka-producers":[{"acks":"all","batch_size":16384,"bootstrap_servers":["localhost:11111"],"buffer_memory":33554432,"client_id":"testing-client","compression_type":"none","key_serializer":"org.apache.kafka.common.serialization.StringSerializer","linger_ms":1,"name":"testing","retries":0,"topic":"test","value_serializer":"org.apache.kafka.common.serialization.StringSerializer"}],...,"sun":{"java":{"command":"com.intellij.rt.execution.application.AppMain org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner -s TestSendingReceiving -testName A receiver should be able to consume on on topic with one partition -showProgressMessages true -C org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestReporter","launcher":"SUN_STANDARD"},...},"user":{"country":"US","dir":"/home/sam/IdeaProjects/affinytix-stream-kafka","home":"/home/sam","language":"en","name":"sam","timezone":"Asia/Jerusalem"}}))
2016-08-02 05:38:08 INFO ProducerConfig:165 - ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:11111]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id = testing-client
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 1

2016-08-02 05:38:08 INFO AppInfoParser:82 - Kafka version : 0.9.0.1
2016-08-02 05:38:08 INFO AppInfoParser:83 - Kafka commitId : 23c69d62a0cabf06
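The ProducerConfig above is just a plain Kafka 0.9 producer pointed at the embedded broker. A self-contained sketch with the values copied from the logged config (the message payload is invented for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:11111");   // embedded broker started above
props.put("client.id", "testing-client");
props.put("acks", "all");
props.put("retries", "0");
props.put("batch.size", "16384");
props.put("linger.ms", "1");
props.put("buffer.memory", "33554432");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
// auto.create.topics.enable=true on the broker, so the first send creates "test"
producer.send(new ProducerRecord<>("test", "hello"));
producer.close();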
2016-08-02 05:38:08 INFO TestSendingReceiving:100 - Before sending the string
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:setData cxid:0x40 zxid:0x18 txntype:-1 reqpath:n/a Error Path:/config/topics/test Error:KeeperErrorCode = NoNode for /config/topics/test
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x41 zxid:0x19 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
2016-08-02 05:38:08 INFO AdminUtils$:68 - Topic creation {"version":1,"partitions":{"0":[0]}}
2016-08-02 05:38:08 INFO KafkaApis:68 - [KafkaApi-0] Auto creation of topic test with 1 partitions and replication factor 1 is successful!
2016-08-02 05:38:08 INFO PartitionStateMachine$TopicChangeListener:68 - [TopicChangeListener on Controller 0]: New topics: [Set(test)], deleted topics: [Set()], new partition replica assignment [Map([test,0] -> List(0))]
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: New topic creation callback for [test,0]
2016-08-02 05:38:08 INFO KafkaController:68 - [Controller 0]: New partition creation callback for [test,0]
2016-08-02 05:38:08 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Invoking state change to NewPartition for partitions [test,0]
2016-08-02 05:38:08 INFO ReplicaStateMachine:68 - [Replica state machine on controller 0]: Invoking state change to NewReplica for replicas [Topic=test,Partition=0,Replica=0]
2016-08-02 05:38:08 WARN NetworkClient:582 - Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE}
2016-08-02 05:38:08 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions [test,0]
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x49 zxid:0x1c txntype:-1 reqpath:n/a Error Path:/brokers/topics/test/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test/partitions/0
2016-08-02 05:38:08 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x4a zxid:0x1d txntype:-1 reqpath:n/a Error Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test/partitions
2016-08-02 05:38:08 INFO ReplicaStateMachine:68 - [Replica state machine on controller 0]: Invoking state change to OnlineReplica for replicas [Topic=test,Partition=0,Replica=0]
2016-08-02 05:38:08 WARN NetworkClient:582 - Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE}
2016-08-02 05:38:08 INFO ReplicaFetcherManager:68 - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [test,0]
2016-08-02 05:38:08 INFO Log:68 - Completed load of log test-0 with log end offset 0
2016-08-02 05:38:08 INFO LogManager:68 - Created log for partition [test,0] in /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka with properties {compression.type -> producer, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2016-08-02 05:38:08 INFO Partition:68 - Partition [test,0] on broker 0: No checkpointed highwatermark is found for partition [test,0]
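The MetaDataLoginCbk/offset lines that follow are the test's receive loop. Functionally, the same consumption can be reproduced with the 0.9 consumer API; this is a hypothetical sketch, since the test's consumer setup is not shown in the log (group.id and the earliest offset reset are assumptions):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:11111");
props.put("group.id", "test-group");                 // hypothetical; not shown in the log
props.put("auto.offset.reset", "earliest");          // read messages produced before the group joined
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("test"));
ConsumerRecords<String, String> records = consumer.poll(1000);
for (ConsumerRecord<String, String> record : records)
    System.out.println("current offset: " + record.offset());
consumer.close();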
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='0', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 0
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='1', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 1
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='2', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 2
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='3', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 3
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='4', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 4
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='5', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 5
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='6', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 6
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='7', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 7
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='8', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 8
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='9', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 9
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='10', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 10
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='11', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 11
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='12', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 12
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='13', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 13
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='14', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 14
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='15', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 15
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='16', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 16
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='17', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 17
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='18', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 18
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='19', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 19
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='20', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 20
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='21', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 21
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='22', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 22
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='23', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 23
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='24', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 24
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='25', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 25
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='26', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 26
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='27', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 27
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='28', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 28
2016-08-02 05:38:09 INFO MetaDataLoginCbk:19 - MetaDataLoginCbk{offset='29', partition='0', topicPartition='test'}
2016-08-02 05:38:09 INFO TestSendingReceiving:91 - receive the call
2016-08-02 05:38:09 INFO TestSendingReceiving:93 - current offset: 29
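Each MetaDataLoginCbk / "current offset" pair above is the producer's send callback acknowledging one of the 30 test messages (offsets 0 through 29) on partition 0 of test. TestSendingReceiving and MetaDataLoginCbk are the reporter's own classes, so the following is only a hypothetical reconstruction of the producing side against the 0.9 producer API:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerCallbackSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Matches the bootstrap servers seen in the consumer config dump below.
        props.put("bootstrap.servers", "localhost:11111");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 30; i++) {
                producer.send(new ProducerRecord<>("test", "message-" + i),
                        (metadata, exception) -> {
                            // The logged MetaDataLoginCbk lines presumably come from a
                            // callback like this, printing the acked record metadata.
                            if (exception == null) {
                                System.out.printf("offset=%d partition=%d topic=%s%n",
                                        metadata.offset(), metadata.partition(), metadata.topic());
                            }
                        });
            }
            producer.flush();
        }
    }
}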
2016-08-02 05:38:11 INFO TestSendingReceiving:104 - After sending the string
2016-08-02 05:38:11 INFO ConfigLoader:99 - Load conf at : kafka-consumer-1.json
2016-08-02 05:38:11 INFO ConfigLoader:100 - Config(SimpleConfigObject({"awt":{"toolkit":"sun.awt.X11.XToolkit"},"file":{"encoding":{"pkg":"sun.io"},"separator":"/"},"idea":{"launcher":{"bin":{"path":"/opt/idea-IC-162.1121.32/bin"},"port":"7532"}},"java":{"awt":{"graphicsenv":"sun.awt.X11GraphicsEnvironment","printerjob":"sun.print.PSPrinterJob"},"class":{"path":"/home/sam/.IdeaIC2016.2/config/plugins/Scala/lib/scala-plugin-runners.jar:/usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-8-oracle/jre/lib/deploy.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/cldrdata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jaccess.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jfxrt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/nashorn.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-8-oracle/jre/lib/javaws.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfxswt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-8-oracle/jre/lib/management-agent.jar:/usr/lib/jvm/java-8-oracle/jre/lib/plugin.jar:/usr/lib/jvm/java-8-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar:/home/sam/IdeaProjects/affinytix-stream-kafka/target/scala-2.10/test-classes:/home/sam/IdeaProjects/affinytix-stream-kafka/target/scala-2.10/classes:/home/sam/.sbt/boot/scala-2.10.6/lib/scala-library.jar:/home/sam/.ivy2/cache/xml-apis/xml-apis/jars/xml-apis-1.3.04.jar:/home/sam/.ivy2/cache/xmlenc/xmlenc/jars/xmlenc-0.52.jar:/home/sam/.ivy2/cache/xerces/xercesImpl/jars/xercesImpl-2.9.1.jar:/home/sam/.ivy2/cache/org.sonatype.sisu.inject/cglib/jars/cglib-2.2.1-v20090111.jar:/home/sam/.ivy2/cache/org.fusesource.leveldbjni/leveldbjni-all/bundles/leveldbjni-all-1.8.jar:/home/sam/.ivy2/cache/org.codehaus.jettison/jettison/bundles/jettison-1.1.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-xc/jars/jackson-xc-1.9.13.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-jaxrs/jars/jackson-jaxrs-1.9.13.jar:/home/sam/.ivy2/cache/org.apache.zookeeper/zookeeper/jars/zookeeper-3.4.6.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.httpcomponents/httpcore/jars/httpcore-4.2.4.jar:/home/sam/.ivy2/cache/org.apache.httpcomponents/httpclient/jars/httpclient-4.2.5.jar:/home/sam/.ivy2/cache/org.apache.htrace/htrace-core/jars/htrace-core-3.1.0-incubating.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-server-common/jars/hadoop-yarn-server-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-registry/jars/hadoop-yarn-registry-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-common/jars/hadoop-yarn-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-client/jars/hadoop-yarn-client-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-yarn-api/jars/hadoop-yarn-api-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-shuffle/jars/hadoop-mapreduce-client-shuffle-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-jobclient/jars/hadoop-mapreduce-client-jobclient-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-core/jars/hadoop-mapreduce-client-core-2.7.1.2.4.2.0-258.jar:/
home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-common/jars/hadoop-mapreduce-client-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-app/jars/hadoop-mapreduce-client-app-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/jars/hadoop-hdfs-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-common/jars/hadoop-common-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-client/jars/hadoop-client-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-auth/jars/hadoop-auth-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.hadoop/hadoop-annotations/jars/hadoop-annotations-2.7.1.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.directory.server/apacheds-kerberos-codec/bundles/apacheds-kerberos-codec-2.0.0-M15.jar:/home/sam/.ivy2/cache/org.apache.directory.server/apacheds-i18n/bundles/apacheds-i18n-2.0.0-M15.jar:/home/sam/.ivy2/cache/org.apache.directory.api/api-util/bundles/api-util-1.0.0-M20.jar:/home/sam/.ivy2/cache/org.apache.directory.api/api-asn1-api/bundles/api-asn1-api-1.0.0-M20.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-recipes/bundles/curator-recipes-2.7.1.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-framework/bundles/curator-framework-2.7.1.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-client/bundles/curator-client-2.7.1.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-math3/jars/commons-math3-3.1.1.jar:/home/sam/.ivy2/cache/log4j/log4j/bundles/log4j-1.2.17.jar:/home/sam/.ivy2/cache/junit/junit/jars/junit-3.8.1.jar:/home/sam/.ivy2/cache/javax.xml.stream/stax-api/jars/stax-api-1.0-2.jar:/home/sam/.ivy2/cache/javax.xml.bind/jaxb-api/jars/jaxb-api-2.2.2.jar:/home/sam/.ivy2/cache/javax.servlet.jsp/jsp-api/jars/jsp-api-2.1.jar:/home/sam/.ivy2/cache/javax.servlet/servlet-api/jars/servlet-api-2.5.jar:/home/sam/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:/home/sam/.ivy2/cache/javax.activation/activation/jars/activation-1.1.jar:/home/sam/.ivy2/cache/io.netty/netty-all/jars/netty-all-4.0.23.Final.jar:/home/sam/.ivy2/cache/io.netty/netty/bundles/netty-3.7.0.Final.jar:/home/sam/.ivy2/cache/commons-net/commons-net/jars/commons-net-3.1.jar:/home/sam/.ivy2/cache/commons-logging/commons-logging/jars/commons-logging-1.1.3.jar:/home/sam/.ivy2/cache/commons-io/commons-io/jars/commons-io-2.4.jar:/home/sam/.ivy2/cache/commons-digester/commons-digester/jars/commons-digester-1.8.jar:/home/sam/.ivy2/cache/commons-configuration/commons-configuration/jars/commons-configuration-1.6.jar:/home/sam/.ivy2/cache/commons-collections/commons-collections/jars/commons-collections-3.2.2.jar:/home/sam/.ivy2/cache/commons-beanutils/commons-beanutils-core/jars/commons-beanutils-core-1.8.0.jar:/home/sam/.ivy2/cache/commons-beanutils/commons-beanutils/jars/commons-beanutils-1.7.0.jar:/home/sam/.ivy2/cache/com.sun.xml.bind/jaxb-impl/jars/jaxb-impl-2.2.3-1.jar:/home/sam/.ivy2/cache/com.sun.jersey.contribs/jersey-guice/jars/jersey-guice-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-server/bundles/jersey-server-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-json/bundles/jersey-json-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-core/bundles/jersey-core-1.9.jar:/home/sam/.ivy2/cache/com.sun.jersey/jersey-client/bundles/jersey-client-1.9.jar:/home/sam/.ivy2/cache/com.squareup.okio/okio/jars/okio-1.4.0.jar:/home/sam/.ivy2/cache/com.squareup.okhttp/okhttp/jars/okhttp-2.4.0.jar:/home/sam/.ivy2/cache/com.microsoft.windowsazure.storage/microsoft-wi
ndowsazure-storage-sdk/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:/home/sam/.ivy2/cache/com.jamesmurty.utils/java-xmlbuilder/jars/java-xmlbuilder-0.4.jar:/home/sam/.ivy2/cache/com.google.protobuf/protobuf-java/bundles/protobuf-java-2.5.0.jar:/home/sam/.ivy2/cache/com.google.inject/guice/jars/guice-3.0.jar:/home/sam/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.2.4.jar:/home/sam/.ivy2/cache/com.google.code.findbugs/jsr305/jars/jsr305-3.0.0.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-common/jars/hadoop-mini-clusters-common-0.1.7.jar:/home/sam/.ivy2/cache/com.fasterxml.jackson.core/jackson-core/jars/jackson-core-2.2.3.jar:/home/sam/.ivy2/cache/asm/asm/jars/asm-3.1.jar:/home/sam/.ivy2/cache/aopalliance/aopalliance/jars/aopalliance-1.0.jar:/home/sam/.ivy2/cache/com.affinytix.model/affinytix-model-msg_2.10/jars/affinytix-model-msg_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.google.guava/guava/bundles/guava-19.0.jar:/home/sam/.ivy2/cache/net.jpountz.lz4/lz4/jars/lz4-1.2.0.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-0.9.0.1.jar:/home/sam/.sbt/boot/scala-2.10.6/lib/scala-reflect.jar:/home/sam/.ivy2/cache/org.scalatest/scalatest_2.10/bundles/scalatest_2.10-2.2.6.jar:/home/sam/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.1.7.jar:/home/sam/.ivy2/cache/com.typesafe/config/bundles/config-1.3.0.jar:/home/sam/.ivy2/cache/com.affinytix.exception/affinytix-exception_2.10/jars/affinytix-exception_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.affinytix.util/affinytix-util_2.10/jars/affinytix-util_2.10-1.1.0.jar:/home/sam/.ivy2/cache/com.affinytix.stream/affinytix-stream-serializer_2.10/jars/affinytix-stream-serializer_2.10-1.0.0.jar:/home/sam/.ivy2/cache/com.github.stephenc.findbugs/findbugs-annotations/jars/findbugs-annotations-1.3.9-1.jar:/home/sam/.ivy2/cache/com.thoughtworks.paranamer/paranamer/bundles/paranamer-2.7.jar:/home/sam/.ivy2/cache/commons-cli/commons-cli/jars/commons-cli-1.2.jar:/home/sam/.ivy2/cache/commons-codec/commons-codec/jars/commons-codec-1.9.jar:/home/sam/.ivy2/cache/commons-collections/commons-collections/jars/commons-collections-3.2.1.jar:/home/sam/.ivy2/cache/commons-httpclient/commons-httpclient/jars/commons-httpclient-3.1.jar:/home/sam/.ivy2/cache/commons-lang/commons-lang/jars/commons-lang-2.6.jar:/home/sam/.ivy2/cache/commons-logging/commons-logging/jars/commons-logging-1.1.1.jar:/home/sam/.ivy2/cache/io.netty/netty/bundles/netty-3.5.13.Final.jar:/home/sam/.ivy2/cache/joda-time/joda-time/jars/joda-time-2.7.jar:/home/sam/.ivy2/cache/net.sf.jopt-simple/jopt-simple/jars/jopt-simple-4.7.jar:/home/sam/.ivy2/cache/org.apache.avro/avro/jars/avro-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-compiler/bundles/avro-compiler-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-ipc/jars/avro-ipc-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-mapred/jars/avro-mapred-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/avro-tools/jars/avro-tools-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-avro/jars/trevni-avro-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-avro/jars/trevni-avro-1.8.1-tests.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-core/jars/trevni-core-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.avro/trevni-core/jars/trevni-core-1.8.1-tests.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-compress/jars/commons-compress-1.8.1.jar:/home/sam/.ivy2/cache/org.apache.velocity/velocity/jars/velocity-1.7.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-core-asl/jars/jackson-core-asl
-1.9.13.jar:/home/sam/.ivy2/cache/org.codehaus.jackson/jackson-mapper-asl/jars/jackson-mapper-asl-1.9.13.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/jetty/jars/jetty-6.1.26.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/jetty-util/jars/jetty-util-6.1.26.jar:/home/sam/.ivy2/cache/org.mortbay.jetty/servlet-api/jars/servlet-api-2.5-20081211.jar:/home/sam/.ivy2/cache/org.tukaani/xz/jars/xz-1.5.jar:/home/sam/.ivy2/cache/com.101tec/zkclient/jars/zkclient-0.7.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-kafka/jars/hadoop-mini-clusters-kafka-0.1.7.jar:/home/sam/.ivy2/cache/com.github.sakserv/hadoop-mini-clusters-zookeeper/jars/hadoop-mini-clusters-zookeeper-0.1.7.jar:/home/sam/.ivy2/cache/com.yammer.metrics/metrics-core/jars/metrics-core-2.2.0.jar:/home/sam/.ivy2/cache/net.sf.jopt-simple/jopt-simple/jars/jopt-simple-4.9.jar:/home/sam/.ivy2/cache/org.apache.commons/commons-math/jars/commons-math-2.2.jar:/home/sam/.ivy2/cache/org.apache.curator/curator-test/jars/curator-test-2.5.0.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-0.9.0.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.apache.kafka/kafka_2.10/jars/kafka_2.10-0.9.0.2.4.2.0-258.jar:/home/sam/.ivy2/cache/org.javassist/javassist/bundles/javassist-3.18.1-GA.jar:/home/sam/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.6.jar:/home/sam/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.2.jar:/opt/idea-IC-162.1121.32/lib/idea_rt.jar","version":"52.0"},"endorsed":{"dirs":"/usr/lib/jvm/java-8-oracle/jre/lib/endorsed"},"ext":{"dirs":"/usr/lib/jvm/java-8-oracle/jre/lib/ext:/usr/java/packages/lib/ext"},"home":"/usr/lib/jvm/java-8-oracle/jre","io":{"tmpdir":"/tmp"},"library":{"path":"/opt/idea-IC-162.1121.32/bin::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib"},"runtime":{"name":"Java(TM) SE Runtime Environment","version":"1.8.0_101-b13"},"specification":{"name":"Java Platform API Specification","vendor":"Oracle Corporation","version":"1.8"},"vendor":{"url":{"bug":"http://bugreport.sun.com/bugreport/"}},"version":"1.8.0_101","vm":{"info":"mixed mode","name":"Java HotSpot(TM) 64-Bit Server VM","specification":{"name":"Java Virtual Machine Specification","vendor":"Oracle Corporation","version":"1.8"},"vendor":"Oracle Corporation","version":"25.101-b13"}},"kafka-consumers":[{"auto_commit_interval_ms":"10","bootstrap_servers":["localhost:11111","localhost:11112","localhost:11113"],"client_id":"server.reco.top","enable_auto_commit":"true","group_id":"my test group","key_deserializer":"org.apache.kafka.common.serialization.StringDeserializer","name":"my consumer testing configuration","session_timeout_ms":"30000","topic":["test"],"value_deserializer":"org.apache.kafka.common.serialization.StringDeserializer"}],"line":{"separator":"\n"},"os":{"arch":"amd64","name":"Linux","version":"4.4.0-31-generic"},"path":{"separator":":"},"sun":{"arch":{"data":{"model":"64"}},"boot":{"class":{"path":"/usr/lib/jvm/java-8-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-8-oracle/jre/classes"},"library":{"path":"/usr/lib/jvm/java-8-oracle/jre/lib/amd64"}},"cpu":{"endian":"little","isalist":""},"desktop":"gnome","io":{"unicode":{"encoding":"UnicodeLittle"}},"java":{"command":"com.intellij.rt.execution.application.AppMain 
org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner -s TestSendingReceiving -testName A receiver should be able to consume on on topic with one partition -showProgressMessages true -C org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestReporter","launcher":"SUN_STANDARD"},"jnu":{"encoding":"UTF-8"},"management":{"compiler":"HotSpot 64-Bit Tiered Compilers"},"os":{"patch":{"level":"unknown"}}},"user":{"country":"US","dir":"/home/sam/IdeaProjects/affinytix-stream-kafka","home":"/home/sam","language":"en","name":"sam","timezone":"Asia/Jerusalem"}}))
2016-08-02 05:38:11 INFO ConsumerConfig:165 - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = my test group
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:11111, localhost:11112, localhost:11113]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = true
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
session.timeout.ms = 30000
metrics.num.samples = 2
client.id = server.reco.top
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 10
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1
send.buffer.bytes = 131072
auto.offset.reset = latest

2016-08-02 05:38:11 INFO AppInfoParser:82 - Kafka version : 0.9.0.1
2016-08-02 05:38:11 INFO AppInfoParser:83 - Kafka commitId : 23c69d62a0cabf06
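The ConsumerConfig dump reflects the kafka-consumer-1.json loaded above: bootstrap servers localhost:11111-11113, group id "my test group", client id server.reco.top, topic test, auto-commit every 10 ms, and String deserializers for both key and value. Note auto.offset.reset = latest: a consumer that joins after the 30 messages were produced will not see offsets 0-29 unless it seeks back or the group already has committed offsets. A minimal consumer sketch with the same settings (property values copied from the dump; everything else is an assumption):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values below mirror the ConsumerConfig dump in the log.
        props.put("bootstrap.servers", "localhost:11111,localhost:11112,localhost:11113");
        props.put("group.id", "my test group");
        props.put("client.id", "server.reco.top");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "10");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // latest means "start at the end of the log" for a group with no
        // committed offsets -- a common reason a test consumer sees nothing.
        props.put("auto.offset.reset", "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}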
2016-08-02 05:38:11 INFO MaxConsumerReceiver:42 - Created consumer: [topic:test, partition:0]
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:setData cxid:0x54 zxid:0x21 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x55 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
2016-08-02 05:38:11 INFO AdminUtils$:68 - Topic creation {"version":1,"partitions":{"45":[0],"34":[0],"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"40":[0],"15":[0],"11":[0],"9":[0],"44":[0],"33":[0],"22":[0],"26":[0],"37":[0],"13":[0],"46":[0],"24":[0],"35":[0],"16":[0],"5":[0],"10":[0],"48":[0],"21":[0],"43":[0],"32":[0],"49":[0],"6":[0],"36":[0],"1":[0],"39":[0],"17":[0],"25":[0],"14":[0],"47":[0],"31":[0],"42":[0],"0":[0],"20":[0],"27":[0],"2":[0],"38":[0],"18":[0],"30":[0],"7":[0],"29":[0],"41":[0],"3":[0],"28":[0]}}
2016-08-02 05:38:11 INFO KafkaApis:68 - [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful!
2016-08-02 05:38:11 INFO PartitionStateMachine$TopicChangeListener:68 - [TopicChangeListener on Controller 0]: New topics: [Set(__consumer_offsets)], deleted topics: [Set()], new partition replica assignment [Map([__consumer_offsets,19] -> List(0), [__consumer_offsets,30] -> List(0), [__consumer_offsets,47] -> List(0), [__consumer_offsets,29] -> List(0), [__consumer_offsets,41] -> List(0), [__consumer_offsets,39] -> List(0), [__consumer_offsets,10] -> List(0), [__consumer_offsets,17] -> List(0), [__consumer_offsets,14] -> List(0), [__consumer_offsets,40] -> List(0), [__consumer_offsets,18] -> List(0), [__consumer_offsets,26] -> List(0), [__consumer_offsets,0] -> List(0), [__consumer_offsets,24] -> List(0), [__consumer_offsets,33] -> List(0), [__consumer_offsets,20] -> List(0), [__consumer_offsets,21] -> List(0), [__consumer_offsets,3] -> List(0), [__consumer_offsets,5] -> List(0), [__consumer_offsets,22] -> List(0), [__consumer_offsets,12] -> List(0), [__consumer_offsets,8] -> List(0), [__consumer_offsets,23] -> List(0), [__consumer_offsets,15] -> List(0), [__consumer_offsets,48] -> List(0), [__consumer_offsets,11] -> List(0), [__consumer_offsets,13] -> List(0), [__consumer_offsets,49] -> List(0), [__consumer_offsets,6] -> List(0), [__consumer_offsets,28] -> List(0), [__consumer_offsets,4] -> List(0), [__consumer_offsets,37] -> List(0), [__consumer_offsets,31] -> List(0), [__consumer_offsets,44] -> List(0), [__consumer_offsets,42] -> List(0), [__consumer_offsets,34] -> List(0), [__consumer_offsets,46] -> List(0), [__consumer_offsets,25] -> List(0), [__consumer_offsets,45] -> List(0), [__consumer_offsets,27] -> List(0), [__consumer_offsets,32] -> List(0), [__consumer_offsets,43] -> List(0), [__consumer_offsets,36] -> List(0), [__consumer_offsets,35] -> List(0), [__consumer_offsets,7] -> List(0), [__consumer_offsets,9] -> List(0), [__consumer_offsets,38] -> List(0), [__consumer_offsets,1] -> List(0), [__consumer_offsets,16] -> List(0), [__consumer_offsets,2] -> List(0))]
2016-08-02 05:38:11 INFO KafkaController:68 - [Controller 0]: New topic creation callback for [__consumer_offsets,32],[__consumer_offsets,16],[__consumer_offsets,49],[__consumer_offsets,44],[__consumer_offsets,28],[__consumer_offsets,17],[__consumer_offsets,23],[__consumer_offsets,7],[__consumer_offsets,4],[__consumer_offsets,29],[__consumer_offsets,35],[__consumer_offsets,3],[__consumer_offsets,24],[__consumer_offsets,41],[__consumer_offsets,0],[__consumer_offsets,38],[__consumer_offsets,13],[__consumer_offsets,8],[__consumer_offsets,5],[__consumer_offsets,39],[__consumer_offsets,36],[__consumer_offsets,40],[__consumer_offsets,45],[__consumer_offsets,15],[__consumer_offsets,33],[__consumer_offsets,37],[__consumer_offsets,21],[__consumer_offsets,6],[__consumer_offsets,11],[__consumer_offsets,20],[__consumer_offsets,47],[__consumer_offsets,2],[__consumer_offsets,27],[__consumer_offsets,34],[__consumer_offsets,9],[__consumer_offsets,22],[__consumer_offsets,42],[__consumer_offsets,14],[__consumer_offsets,25],[__consumer_offsets,10],[__consumer_offsets,48],[__consumer_offsets,31],[__consumer_offsets,18],[__consumer_offsets,19],[__consumer_offsets,12],[__consumer_offsets,46],[__consumer_offsets,43],[__consumer_offsets,1],[__consumer_offsets,26],[__consumer_offsets,30]
2016-08-02 05:38:11 INFO KafkaController:68 - [Controller 0]: New partition creation callback for [__consumer_offsets,32],[__consumer_offsets,16],[__consumer_offsets,49],[__consumer_offsets,44],[__consumer_offsets,28],[__consumer_offsets,17],[__consumer_offsets,23],[__consumer_offsets,7],[__consumer_offsets,4],[__consumer_offsets,29],[__consumer_offsets,35],[__consumer_offsets,3],[__consumer_offsets,24],[__consumer_offsets,41],[__consumer_offsets,0],[__consumer_offsets,38],[__consumer_offsets,13],[__consumer_offsets,8],[__consumer_offsets,5],[__consumer_offsets,39],[__consumer_offsets,36],[__consumer_offsets,40],[__consumer_offsets,45],[__consumer_offsets,15],[__consumer_offsets,33],[__consumer_offsets,37],[__consumer_offsets,21],[__consumer_offsets,6],[__consumer_offsets,11],[__consumer_offsets,20],[__consumer_offsets,47],[__consumer_offsets,2],[__consumer_offsets,27],[__consumer_offsets,34],[__consumer_offsets,9],[__consumer_offsets,22],[__consumer_offsets,42],[__consumer_offsets,14],[__consumer_offsets,25],[__consumer_offsets,10],[__consumer_offsets,48],[__consumer_offsets,31],[__consumer_offsets,18],[__consumer_offsets,19],[__consumer_offsets,12],[__consumer_offsets,46],[__consumer_offsets,43],[__consumer_offsets,1],[__consumer_offsets,26],[__consumer_offsets,30]
2016-08-02 05:38:11 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Invoking state change to NewPartition for partitions [__consumer_offsets,32],[__consumer_offsets,16],[__consumer_offsets,49],[__consumer_offsets,44],[__consumer_offsets,28],[__consumer_offsets,17],[__consumer_offsets,23],[__consumer_offsets,7],[__consumer_offsets,4],[__consumer_offsets,29],[__consumer_offsets,35],[__consumer_offsets,3],[__consumer_offsets,24],[__consumer_offsets,41],[__consumer_offsets,0],[__consumer_offsets,38],[__consumer_offsets,13],[__consumer_offsets,8],[__consumer_offsets,5],[__consumer_offsets,39],[__consumer_offsets,36],[__consumer_offsets,40],[__consumer_offsets,45],[__consumer_offsets,15],[__consumer_offsets,33],[__consumer_offsets,37],[__consumer_offsets,21],[__consumer_offsets,6],[__consumer_offsets,11],[__consumer_offsets,20],[__consumer_offsets,47],[__consumer_offsets,2],[__consumer_offsets,27],[__consumer_offsets,34],[__consumer_offsets,9],[__consumer_offsets,22],[__consumer_offsets,42],[__consumer_offsets,14],[__consumer_offsets,25],[__consumer_offsets,10],[__consumer_offsets,48],[__consumer_offsets,31],[__consumer_offsets,18],[__consumer_offsets,19],[__consumer_offsets,12],[__consumer_offsets,46],[__consumer_offsets,43],[__consumer_offsets,1],[__consumer_offsets,26],[__consumer_offsets,30]
2016-08-02 05:38:11 INFO ReplicaStateMachine:68 - [Replica state machine on controller 0]: Invoking state change to NewReplica for replicas [Topic=__consumer_offsets,Partition=25,Replica=0],[Topic=__consumer_offsets,Partition=12,Replica=0],[Topic=__consumer_offsets,Partition=31,Replica=0],[Topic=__consumer_offsets,Partition=40,Replica=0],[Topic=__consumer_offsets,Partition=35,Replica=0],[Topic=__consumer_offsets,Partition=9,Replica=0],[Topic=__consumer_offsets,Partition=43,Replica=0],[Topic=__consumer_offsets,Partition=2,Replica=0],[Topic=__consumer_offsets,Partition=11,Replica=0],[Topic=__consumer_offsets,Partition=29,Replica=0],[Topic=__consumer_offsets,Partition=30,Replica=0],[Topic=__consumer_offsets,Partition=4,Replica=0],[Topic=__consumer_offsets,Partition=42,Replica=0],[Topic=__consumer_offsets,Partition=26,Replica=0],[Topic=__consumer_offsets,Partition=34,Replica=0],[Topic=__consumer_offsets,Partition=17,Replica=0],[Topic=__consumer_offsets,Partition=37,Replica=0],[Topic=__consumer_offsets,Partition=27,Replica=0],[Topic=__consumer_offsets,Partition=10,Replica=0],[Topic=__consumer_offsets,Partition=41,Replica=0],[Topic=__consumer_offsets,Partition=20,Replica=0],[Topic=__consumer_offsets,Partition=28,Replica=0],[Topic=__consumer_offsets,Partition=46,Replica=0],[Topic=__consumer_offsets,Partition=39,Replica=0],[Topic=__consumer_offsets,Partition=47,Replica=0],[Topic=__consumer_offsets,Partition=49,Replica=0],[Topic=__consumer_offsets,Partition=22,Replica=0],[Topic=__consumer_offsets,Partition=1,Replica=0],[Topic=__consumer_offsets,Partition=24,Replica=0],[Topic=__consumer_offsets,Partition=6,Replica=0],[Topic=__consumer_offsets,Partition=36,Replica=0],[Topic=__consumer_offsets,Partition=8,Replica=0],[Topic=__consumer_offsets,Partition=38,Replica=0],[Topic=__consumer_offsets,Partition=16,Replica=0],[Topic=__consumer_offsets,Partition=21,Replica=0],[Topic=__consumer_offsets,Partition=18,Replica=0],[Topic=__consumer_offsets,Partition=0,Replica=0],[Topic=__consumer_offsets,Partition=48,Replica=0],[Topic=__consumer_offsets,Partition=5,Replica=0],[Topic=__consumer_offsets,Partition=13,Replica=0],[Topic=__consumer_offsets,Partition=3,Replica=0],[Topic=__consumer_offsets,Partition=44,Replica=0],[Topic=__consumer_offsets,Partition=15,Replica=0],[Topic=__consumer_offsets,Partition=7,Replica=0],[Topic=__consumer_offsets,Partition=19,Replica=0],[Topic=__consumer_offsets,Partition=33,Replica=0],[Topic=__consumer_offsets,Partition=45,Replica=0],[Topic=__consumer_offsets,Partition=23,Replica=0],[Topic=__consumer_offsets,Partition=32,Replica=0],[Topic=__consumer_offsets,Partition=14,Replica=0]
2016-08-02 05:38:11 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions [__consumer_offsets,32],[__consumer_offsets,16],[__consumer_offsets,49],[__consumer_offsets,44],[__consumer_offsets,28],[__consumer_offsets,17],[__consumer_offsets,23],[__consumer_offsets,7],[__consumer_offsets,4],[__consumer_offsets,29],[__consumer_offsets,35],[__consumer_offsets,3],[__consumer_offsets,24],[__consumer_offsets,41],[__consumer_offsets,0],[__consumer_offsets,38],[__consumer_offsets,13],[__consumer_offsets,8],[__consumer_offsets,5],[__consumer_offsets,39],[__consumer_offsets,36],[__consumer_offsets,40],[__consumer_offsets,45],[__consumer_offsets,15],[__consumer_offsets,33],[__consumer_offsets,37],[__consumer_offsets,21],[__consumer_offsets,6],[__consumer_offsets,11],[__consumer_offsets,20],[__consumer_offsets,47],[__consumer_offsets,2],[__consumer_offsets,27],[__consumer_offsets,34],[__consumer_offsets,9],[__consumer_offsets,22],[__consumer_offsets,42],[__consumer_offsets,14],[__consumer_offsets,25],[__consumer_offsets,10],[__consumer_offsets,48],[__consumer_offsets,31],[__consumer_offsets,18],[__consumer_offsets,19],[__consumer_offsets,12],[__consumer_offsets,46],[__consumer_offsets,43],[__consumer_offsets,1],[__consumer_offsets,26],[__consumer_offsets,30]
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x92 zxid:0x25 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/32 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/32
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x93 zxid:0x26 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x98 zxid:0x2a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/16 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/16
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x9c zxid:0x2d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/49 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/49
2016-08-02 05:38:11 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x9f zxid:0x30 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/44 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/44
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xa3 zxid:0x33 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/28 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/28
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xa7 zxid:0x36 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/17 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/17
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xaa zxid:0x39 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/23 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/23
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xad zxid:0x3c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/7 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/7
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xb2 zxid:0x3f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/4 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/4
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xb5 zxid:0x42 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/29 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/29
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xb8 zxid:0x45 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/35 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/35
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xbd zxid:0x48 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/3 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/3
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xc0 zxid:0x4b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/24 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/24
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xc5 zxid:0x4e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/41 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/41
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xc8 zxid:0x51 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/0
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xcb zxid:0x54 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/38 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/38
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xd0 zxid:0x57 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/13 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/13
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xd3 zxid:0x5a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/8 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/8
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xd6 zxid:0x5d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/5 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/5
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xdb zxid:0x60 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/39 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/39
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xde zxid:0x63 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/36 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/36
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xe2 zxid:0x66 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/40 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/40
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xe6 zxid:0x69 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/45 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/45
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xea zxid:0x6c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/15 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/15
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xee zxid:0x6f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/33 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/33
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xf1 zxid:0x72 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/37 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/37
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xf6 zxid:0x75 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/21 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/21
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xf9 zxid:0x78 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/6 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/6
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0xfc zxid:0x7b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/11 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/11
2016-08-02 05:38:12 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x100 zxid:0x7e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/20 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/20
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x104 zxid:0x81 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/47 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/47
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x107 zxid:0x84 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/2 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/2
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x10c zxid:0x87 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/27 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/27
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x10f zxid:0x8a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/34 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/34
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x114 zxid:0x8d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/9 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/9
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x117 zxid:0x90 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/22 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/22
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x11a zxid:0x93 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/42 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/42
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x11f zxid:0x96 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/14 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/14
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x122 zxid:0x99 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/25 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/25
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x125 zxid:0x9c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/10 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/10
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x12a zxid:0x9f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/48 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/48
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x12d zxid:0xa2 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/31 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/31
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x131 zxid:0xa5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/18 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/18
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x135 zxid:0xa8 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/19 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/19
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x138 zxid:0xab txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/12 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/12
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x13d zxid:0xae txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/46 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/46
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x140 zxid:0xb1 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/43 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/43
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x143 zxid:0xb4 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/1 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/1
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x147 zxid:0xb7 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/26 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/26
2016-08-02 05:38:13 INFO PrepRequestProcessor:645 - Got user-level KeeperException when processing sessionid:0x156491d83840000 type:create cxid:0x14b zxid:0xba txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/30 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/30
2016-08-02 05:38:13 INFO ReplicaStateMachine:68 - [Replica state machine on controller 0]: Invoking state change to OnlineReplica for replicas [Topic=__consumer_offsets,Partition=25,Replica=0],[Topic=__consumer_offsets,Partition=12,Replica=0],[Topic=__consumer_offsets,Partition=31,Replica=0],[Topic=__consumer_offsets,Partition=40,Replica=0],[Topic=__consumer_offsets,Partition=35,Replica=0],[Topic=__consumer_offsets,Partition=9,Replica=0],[Topic=__consumer_offsets,Partition=43,Replica=0],[Topic=__consumer_offsets,Partition=2,Replica=0],[Topic=__consumer_offsets,Partition=11,Replica=0],[Topic=__consumer_offsets,Partition=29,Replica=0],[Topic=__consumer_offsets,Partition=30,Replica=0],[Topic=__consumer_offsets,Partition=4,Replica=0],[Topic=__consumer_offsets,Partition=42,Replica=0],[Topic=__consumer_offsets,Partition=26,Replica=0],[Topic=__consumer_offsets,Partition=34,Replica=0],[Topic=__consumer_offsets,Partition=17,Replica=0],[Topic=__consumer_offsets,Partition=37,Replica=0],[Topic=__consumer_offsets,Partition=27,Replica=0],[Topic=__consumer_offsets,Partition=10,Replica=0],[Topic=__consumer_offsets,Partition=41,Replica=0],[Topic=__consumer_offsets,Partition=20,Replica=0],[Topic=__consumer_offsets,Partition=28,Replica=0],[Topic=__consumer_offsets,Partition=46,Replica=0],[Topic=__consumer_offsets,Partition=39,Replica=0],[Topic=__consumer_offsets,Partition=47,Replica=0],[Topic=__consumer_offsets,Partition=49,Replica=0],[Topic=__consumer_offsets,Partition=22,Replica=0],[Topic=__consumer_offsets,Partition=1,Replica=0],[Topic=__consumer_offsets,Partition=24,Replica=0],[Topic=__consumer_offsets,Partition=6,Replica=0],[Topic=__consumer_offsets,Partition=36,Replica=0],[Topic=__consumer_offsets,Partition=8,Replica=0],[Topic=__consumer_offsets,Partition=38,Replica=0],[Topic=__consumer_offsets,Partition=16,Replica=0],[Topic=__consumer_offsets,Partition=21,Replica=0],[Topic=__consumer_offsets,Partition=18,Replica=0],[Topic=__consumer_offsets,Partition=0,Replica=0],[Topic=__consumer_offsets,Partition=48,Replica=0],[Topic=__consumer_offsets,Partition=5,Replica=0],[Topic=__consumer_offsets,Partition=13,Replica=0],[Topic=__consumer_offsets,Partition=3,Replica=0],[Topic=__consumer_offsets,Partition=44,Replica=0],[Topic=__consumer_offsets,Partition=15,Replica=0],[Topic=__consumer_offsets,Partition=7,Replica=0],[Topic=__consumer_offsets,Partition=19,Replica=0],[Topic=__consumer_offsets,Partition=33,Replica=0],[Topic=__consumer_offsets,Partition=45,Replica=0],[Topic=__consumer_offsets,Partition=23,Replica=0],[Topic=__consumer_offsets,Partition=32,Replica=0],[Topic=__consumer_offsets,Partition=14,Replica=0]
2016-08-02 05:38:13 INFO ReplicaFetcherManager:68 - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [__consumer_offsets,32],[__consumer_offsets,16],[__consumer_offsets,49],[__consumer_offsets,44],[__consumer_offsets,28],[__consumer_offsets,17],[__consumer_offsets,23],[__consumer_offsets,7],[__consumer_offsets,4],[__consumer_offsets,29],[__consumer_offsets,35],[__consumer_offsets,3],[__consumer_offsets,24],[__consumer_offsets,41],[__consumer_offsets,0],[__consumer_offsets,38],[__consumer_offsets,13],[__consumer_offsets,8],[__consumer_offsets,5],[__consumer_offsets,39],[__consumer_offsets,36],[__consumer_offsets,40],[__consumer_offsets,45],[__consumer_offsets,15],[__consumer_offsets,33],[__consumer_offsets,37],[__consumer_offsets,21],[__consumer_offsets,6],[__consumer_offsets,11],[__consumer_offsets,20],[__consumer_offsets,47],[__consumer_offsets,2],[__consumer_offsets,27],[__consumer_offsets,34],[__consumer_offsets,9],[__consumer_offsets,22],[__consumer_offsets,42],[__consumer_offsets,14],[__consumer_offsets,25],[__consumer_offsets,10],[__consumer_offsets,48],[__consumer_offsets,31],[__consumer_offsets,18],[__consumer_offsets,19],[__consumer_offsets,12],[__consumer_offsets,46],[__consumer_offsets,43],[__consumer_offsets,1],[__consumer_offsets,26],[__consumer_offsets,30]
2016-08-02 05:38:13 INFO Log:68 - Completed load of log __consumer_offsets-0 with log end offset 0
2016-08-02 05:38:13 INFO LogManager:68 - Created log for partition [__consumer_offsets,0] in /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka with properties {compression.type -> uncompressed, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2016-08-02 05:38:13 INFO Partition:68 - Partition [__consumer_offsets,0] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,0]
2016-08-02 05:38:13 INFO Log:68 - Completed load of log __consumer_offsets-29 with log end offset 0
2016-08-02 05:38:13 INFO LogManager:68 - Created log for partition [__consumer_offsets,29] in /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka with properties {compression.type -> uncompressed, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2016-08-02 05:38:13 INFO Partition:68 - Partition [__consumer_offsets,29] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,29]
[... the same three-line sequence (Completed load of log / Created log for partition ... with identical properties / No checkpointed highwatermark) repeats for the remaining 48 __consumer_offsets partitions, all with log end offset 0, between 05:38:13 and 05:38:14 ...]
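With all 50 partition logs in place, the GroupMetadataManager walks each __consumer_offsets partition to restore committed consumer offsets and group metadata, as the Loading/Finished pairs below show; on a fresh embedded broker every partition is empty, so each load completes in a few milliseconds. These partitions only gain data once a consumer commits offsets under a group.id. A minimal sketch against the embedded broker using the standard Kafka consumer API; the topic name, group id, and port are placeholders:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:11111");   // embedded broker address (placeholder)
props.put("group.id", "embedded-test-group");        // commits land in __consumer_offsets
props.put("enable.auto.commit", "false");            // commit explicitly below
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test-topic"));   // placeholder topic

ConsumerRecords<String, String> records = consumer.poll(1000);
// Synchronously commit the consumed offsets; the write goes to one of the
// __consumer_offsets partitions whose loading is logged above.
consumer.commitSync();
consumer.close();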
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,22]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,22] in 19 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,25]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,25] in 1 milliseconds.
[... the same Loading offsets / Finished loading pair repeats for the next 35 __consumer_offsets partitions, each load finishing in 0-7 milliseconds ...]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,12]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,12] in 0 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,15]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,15] in 0 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,18]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,18] in 1 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,21]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,21] in 1 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,24]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,24] in 0 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,27]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,27] in 1 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,30]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,30] in 1 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,33]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,33] in 0 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,36]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,36] in 1 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,39]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,39] in 5 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,42]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,42] in 0 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,45]
2016-08-02 05:38:14 ERROR KafkaApis:103 - [KafkaApi-0] error when handling request null
java.lang.ClassCastException: org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata cannot be cast to org.apache.kafka.common.requests.JoinGroupRequest$GroupProtocol
at kafka.server.KafkaApis$$anonfun$37.apply(KafkaApis.scala:788)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at kafka.server.KafkaApis.handleJoinGroupRequest(KafkaApis.scala:788)
at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
2016-08-02 05:38:14 INFO KafkaLocalBroker:195 - KAFKA: Stopping Kafka on port: 11111
2016-08-02 05:38:14 INFO KafkaServer:68 - [Kafka Server 0], shutting down
2016-08-02 05:38:14 INFO KafkaServer:68 - [Kafka Server 0], Starting controlled shutdown
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,45] in 1 milliseconds.
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,48]
2016-08-02 05:38:14 INFO GroupMetadataManager:68 - [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,48] in 1 milliseconds.
2016-08-02 05:38:14 INFO KafkaController:68 - [Controller 0]: Shutting down broker 0
2016-08-02 05:38:14 INFO KafkaServer:68 - [Kafka Server 0], Controlled shutdown succeeded
2016-08-02 05:38:14 INFO SocketServer:68 - [Socket Server on Broker 0], Shutting down
2016-08-02 05:38:14 INFO SocketServer:68 - [Socket Server on Broker 0], Shutdown completed
2016-08-02 05:38:14 INFO KafkaRequestHandlerPool:68 - [Kafka Request Handler on Broker 0], shutting down
2016-08-02 05:38:14 INFO KafkaRequestHandlerPool:68 - [Kafka Request Handler on Broker 0], shut down completely
2016-08-02 05:38:14 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Produce], Shutting down
2016-08-02 05:38:14 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Produce], Stopped
2016-08-02 05:38:14 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Produce], Shutdown completed
2016-08-02 05:38:14 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Fetch], Shutting down
2016-08-02 05:38:14 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Fetch], Stopped
2016-08-02 05:38:14 INFO ClientQuotaManager$ThrottledRequestReaper:68 - [ThrottledRequestReaper-Fetch], Shutdown completed
2016-08-02 05:38:14 INFO KafkaApis:68 - [KafkaApi-0] Shutdown complete.
2016-08-02 05:38:14 INFO ReplicaManager:68 - [Replica Manager on Broker 0]: Shutting down
2016-08-02 05:38:14 INFO ReplicaFetcherManager:68 - [ReplicaFetcherManager on broker 0] shutting down
2016-08-02 05:38:14 INFO ReplicaFetcherManager:68 - [ReplicaFetcherManager on broker 0] shutdown completed
2016-08-02 05:38:14 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutting down
2016-08-02 05:38:14 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Stopped
2016-08-02 05:38:14 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutdown completed
2016-08-02 05:38:14 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutting down
2016-08-02 05:38:14 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Stopped
2016-08-02 05:38:14 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutdown completed
2016-08-02 05:38:14 INFO ReplicaManager:68 - [Replica Manager on Broker 0]: Shut down completely
2016-08-02 05:38:14 INFO LogManager:68 - Shutting down.
2016-08-02 05:38:14 INFO LogCleaner:68 - Shutting down the log cleaner.
2016-08-02 05:38:14 INFO LogCleaner:68 - [kafka-log-cleaner-thread-0], Shutting down
2016-08-02 05:38:14 INFO LogCleaner:68 - [kafka-log-cleaner-thread-0], Stopped
2016-08-02 05:38:14 INFO LogCleaner:68 - [kafka-log-cleaner-thread-0], Shutdown completed
2016-08-02 05:38:15 INFO LogManager:68 - Shutdown complete.
2016-08-02 05:38:15 INFO GroupCoordinator:68 - [GroupCoordinator 0]: Shutting down.
2016-08-02 05:38:15 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutting down
2016-08-02 05:38:15 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Stopped
2016-08-02 05:38:15 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutdown completed
2016-08-02 05:38:15 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutting down
2016-08-02 05:38:15 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Stopped
2016-08-02 05:38:15 INFO DelayedOperationPurgatory$ExpiredOperationReaper:68 - [ExpirationReaper-0], Shutdown completed
2016-08-02 05:38:15 INFO GroupCoordinator:68 - [GroupCoordinator 0]: Shutdown complete.
2016-08-02 05:38:15 INFO PartitionStateMachine:68 - [Partition state machine on Controller 0]: Stopped partition state machine
2016-08-02 05:38:15 INFO ReplicaStateMachine:68 - [Replica state machine on controller 0]: Stopped replica state machine
2016-08-02 05:38:15 INFO RequestSendThread:68 - [kafka-mini-cluster:Controller-0-to-broker-0-send-thread], Shutting down
2016-08-02 05:38:15 INFO RequestSendThread:68 - [kafka-mini-cluster:Controller-0-to-broker-0-send-thread], Stopped
2016-08-02 05:38:15 INFO RequestSendThread:68 - [kafka-mini-cluster:Controller-0-to-broker-0-send-thread], Shutdown completed
2016-08-02 05:38:15 INFO KafkaController:68 - [Controller 0]: Broker 0 resigned as the controller
2016-08-02 05:38:15 INFO ZkEventThread:82 - Terminate ZkClient event thread.
2016-08-02 05:38:15 INFO PrepRequestProcessor:494 - Processed session termination for sessionid: 0x156491d83840000
2016-08-02 05:38:15 INFO ZooKeeper:684 - Session: 0x156491d83840000 closed
2016-08-02 05:38:15 INFO ClientCnxn:524 - EventThread shut down
2016-08-02 05:38:15 INFO NIOServerCnxn:1007 - Closed socket connection for client /127.0.0.1:37628 which had sessionid 0x156491d83840000
2016-08-02 05:38:15 INFO KafkaServer:68 - [Kafka Server 0], shut down completed
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-36
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-3
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-41
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-9
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-20
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-11
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-22
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-7
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-5
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-19
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-12
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-21
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-8
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-40
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-25
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-45
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-18
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-42
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-0
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-10
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-47
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-31
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-38
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-29
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-6
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-24
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-16
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-13
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-44
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/test-0
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-14
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-39
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-15
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-27
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-35
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-32
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-43
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-46
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-37
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-49
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-48
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-2
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-1
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-34
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-28
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-17
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-23
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-26
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-4
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-30
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_kafka/__consumer_offsets-33
2016-08-02 05:38:15 INFO ZookeeperLocalCluster:187 - ZOOKEEPER: Stopping Zookeeper on port: 23456
2016-08-02 05:38:15 INFO NIOServerCnxnFactory:224 - NIOServerCnxn factory exited run method
2016-08-02 05:38:15 INFO ZooKeeperServer:441 - shutting down
2016-08-02 05:38:15 INFO SessionTrackerImpl:225 - Shutting down
2016-08-02 05:38:15 INFO PrepRequestProcessor:761 - Shutting down
2016-08-02 05:38:15 INFO SyncRequestProcessor:209 - Shutting down
2016-08-02 05:38:15 INFO PrepRequestProcessor:143 - PrepRequestProcessor exited loop!
2016-08-02 05:38:15 INFO ZooKeeperServer:441 - shutting down
2016-08-02 05:38:15 INFO SessionTrackerImpl:225 - Shutting down
2016-08-02 05:38:15 INFO SyncRequestProcessor:187 - SyncRequestProcessor exited!
2016-08-02 05:38:15 INFO PrepRequestProcessor:761 - Shutting down
2016-08-02 05:38:15 INFO FinalRequestProcessor:415 - shutdown of request processor complete
2016-08-02 05:38:15 INFO SyncRequestProcessor:209 - Shutting down
2016-08-02 05:38:15 INFO FinalRequestProcessor:415 - shutdown of request processor complete
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_zookeeper
2016-08-02 05:38:15 INFO FileUtils:17 - FILEUTILS: Deleting contents of directory: /home/sam/IdeaProjects/affinytix-stream-kafka/embedded_zookeeper/version-2
2016-08-02 05:38:15 INFO TestSendingReceiving:75 - /home/sam/IdeaProjects/affinytix-stream-kafka

Unexpected error in join group response: The server experienced an unexpected error when processing the request
org.apache.kafka.common.KafkaException: Unexpected error in join group response: The server experienced an unexpected error when processing the request
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:376)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:324)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
at com.affinytix.stream.kafka.consumer.partition.BasicPartitionConsumer.consumer(BasicPartitionConsumer.java:32)
at com.affinytix.stream.kafka.consumer.receiver.MaxConsumerReceiver.process(MaxConsumerReceiver.java:52)
at TestSendingReceiving$$anonfun$2.apply$mcV$sp(TestSendingReceiving.scala:118)
at TestSendingReceiving$$anonfun$2.apply(TestSendingReceiving.scala:82)
at TestSendingReceiving$$anonfun$2.apply(TestSendingReceiving.scala:82)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FlatSpecLike$$anon$1.apply(FlatSpecLike.scala:1647)
at org.scalatest.Suite$class.withFixture(Suite.scala:1122)
at TestSendingReceiving.withFixture(TestSendingReceiving.scala:59)
at org.scalatest.FlatSpecLike$class.invokeWithFixture$1(FlatSpecLike.scala:1644)
at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656)
at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FlatSpecLike$class.runTest(FlatSpecLike.scala:1656)
at TestSendingReceiving.org$scalatest$BeforeAndAfter$$super$runTest(TestSendingReceiving.scala:17)
at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
at TestSendingReceiving.runTest(TestSendingReceiving.scala:17)
at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714)
at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:390)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:427)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FlatSpecLike$class.runTests(FlatSpecLike.scala:1714)
at org.scalatest.FlatSpec.runTests(FlatSpec.scala:1683)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FlatSpec.org$scalatest$FlatSpecLike$$super$run(FlatSpec.scala:1683)
at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760)
at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FlatSpecLike$class.run(FlatSpecLike.scala:1760)
at TestSendingReceiving.org$scalatest$BeforeAndAfter$$super$run(TestSendingReceiving.scala:17)
at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
at TestSendingReceiving.run(TestSendingReceiving.scala:17)
at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
at org.scalatest.tools.Runner$.run(Runner.scala:883)
at org.scalatest.tools.Runner.run(Runner.scala)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:138)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

Process finished with exit code 0
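
A hedged reading of the ClassCastException in the broker log above (JoinGroupRequest$ProtocolMetadata vs. JoinGroupRequest$GroupProtocol): two different Kafka versions appear to be mixed on one classpath, so the embedded broker handles the JoinGroup request with classes from the wrong jar. One way to check is to pin kafka-clients to the embedded broker's Kafka version, for example in Maven; the version below is a placeholder, and the diagnosis itself is an assumption:

<dependencyManagement>
    <dependencies>
        <!-- Hypothetical pin: match kafka-clients to the embedded broker's
             Kafka version so JoinGroupRequest classes come from a single jar. -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.9.0.1</version>
        </dependency>
    </dependencies>
</dependencyManagement>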

Spark 2 with metastore

Hello @sakserv,

We are trying to connect Spark 2 with v0.1.15. I get the following exception:
2018-09-24 16:15:51 INFO JDO:87 - Exception thrown Identifier name is unresolved (not a static field)
org.datanucleus.exceptions.NucleusUserException: Identifier name is unresolved (not a static field)
	at org.datanucleus.query.expression.PrimaryExpression.bind(PrimaryExpression.java:182)
	at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:89)
	at org.datanucleus.query.compiler.JavaQueryCompiler.compileFilter(JavaQueryCompiler.java:526)
	at org.datanucleus.query.compiler.JDOQLCompiler.compile(JDOQLCompiler.java:116)
	at org.datanucleus.store.query.AbstractJDOQLQuery.compileGeneric(AbstractJDOQLQuery.java:370)
	at org.datanucleus.store.query.AbstractJDOQLQuery.compileInternal(AbstractJDOQLQuery.java:415)
	at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:238)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1805)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1733)
	at org.datanucleus.store.query.Query.execute(Query.java:1715)
	at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:371)
	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:183)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:137)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:295)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5756)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5751)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5984)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5940)
	at com.github.sakserv.minicluster.impl.HiveLocalMetaStore$StartHiveLocalMetaStore.run(HiveLocalMetaStore.java:167)
	at java.lang.Thread.run(Thread.java:748)
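
One hedged hypothesis, not confirmed in this thread: "Identifier name is unresolved (not a static field)" during MetaStoreDirectSql.ensureDbInit is a common symptom of mixed DataNucleus versions, and Spark 2 distributions bundle their own datanucleus-* jars that can clash with the ones the embedded metastore expects. A Maven sketch of keeping a single set (the Spark artifact and version shown are placeholders for whichever dependency drags DataNucleus in):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.3.1</version>
    <exclusions>
        <!-- Hypothetical: let the metastore's own DataNucleus versions win. -->
        <exclusion>
            <groupId>org.datanucleus</groupId>
            <artifactId>*</artifactId>
        </exclusion>
    </exclusions>
</dependency>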

Unable to reach individual services in dockerized mini-cluster

Hi @sakserv,

First of all, thank you for providing such an intuitive and easy-to-use testing environment for Hadoop clusters. We have been using hadoop-mini-clusters in our CI/CD toolchain for a few months now, and are very happy with the results so far.

The first version of our CI/CD toolchain is based on bash scripts responsible for packaging and instantiating the mini cluster for unit or integration tests, every time our GitLab CI is triggered on a new commit. It works well but takes a significant amount of time (like 10 minutes, mostly due to Maven dependencies and Oozie being downloaded every time). Dockerizing the whole mini cluster would let us spawn an instance much faster.

With this objective in mind, we have been trying to migrate our bash scripts to a single Dockerfile. We are now able to package and run the following ClustersLauncher.java:

package com.XXXXXX;

import com.github.sakserv.minicluster.impl.HdfsLocalCluster;
import com.github.sakserv.minicluster.impl.KnoxLocalCluster;
import com.github.sakserv.minicluster.impl.OozieLocalServer;
import com.github.sakserv.minicluster.impl.YarnLocalCluster;
import com.github.sakserv.minicluster.oozie.sharelib.Framework;
import com.github.sakserv.minicluster.oozie.sharelib.util.OozieShareLibUtil;
import com.google.common.collect.Lists;
import com.mycila.xmltool.XMLDoc;
import org.apache.hadoop.conf.Configuration;

public class ClustersLauncher {
    public static void main(String[] args) throws Exception {
        System.setProperty("hdp.release.version", "2.6.2.0");
        YarnLocalCluster yarnLocalCluster = new YarnLocalCluster.Builder()
                .setNumNodeManagers(1)
                .setNumLocalDirs(1)
                .setNumLogDirs(1)
                .setResourceManagerAddress("localhost:8050")
                .setResourceManagerHostname("localhost")
                .setResourceManagerSchedulerAddress("localhost:8030")
                .setResourceManagerResourceTrackerAddress("localhost:8025")
                .setResourceManagerWebappAddress("localhost:8088")
                .setUseInJvmContainerExecutor(false)
                .setConfig(new Configuration())
                .build();
        yarnLocalCluster.start();

        HdfsLocalCluster hdfsLocalCluster = new HdfsLocalCluster.Builder()
                .setHdfsNamenodePort(8020)
                .setHdfsNamenodeHttpPort(50070)
                .setHdfsTempDir("embedded_hdfs")
                .setHdfsNumDatanodes(1)
                .setHdfsEnablePermissions(false)
                .setHdfsFormat(true)
                .setHdfsEnableRunningUserAsProxyUser(true)
                .setHdfsConfig(new Configuration())
                .build();
        hdfsLocalCluster.start();

        OozieLocalServer oozieLocalServer = new OozieLocalServer.Builder()
                .setOozieTestDir("embedded_oozie")
                .setOozieHomeDir("oozie_home")
                .setOozieUsername(System.getProperty("user.name"))
                .setOozieGroupname("testgroup")
                .setOozieYarnResourceManagerAddress("localhost")
                .setOozieHdfsDefaultFs(hdfsLocalCluster.getHdfsConfig().get("fs.defaultFS"))
                .setOozieConf(hdfsLocalCluster.getHdfsConfig())
                .setOozieHdfsShareLibDir("/user/" + System.getProperty("user.name") + "/share/lib")
                .setOozieShareLibCreate(Boolean.TRUE)
                .setOozieLocalShareLibCacheDir("share_lib_cache")
                .setOoziePurgeLocalShareLibCache(Boolean.FALSE)
                .setOozieShareLibFrameworks(
                        Lists.newArrayList(Framework.MAPREDUCE_STREAMING, Framework.OOZIE, Framework.SPARK))
                .build();
        OozieShareLibUtil oozieShareLibUtil = new OozieShareLibUtil(
                oozieLocalServer.getOozieHdfsShareLibDir(),
                oozieLocalServer.getOozieShareLibCreate(),
                oozieLocalServer.getOozieLocalShareLibCacheDir(),
                oozieLocalServer.getOoziePurgeLocalShareLibCache(),
                hdfsLocalCluster.getHdfsFileSystemHandle(),
                oozieLocalServer.getOozieShareLibFrameworks());
        oozieShareLibUtil.createShareLib();
        oozieLocalServer.start();

        KnoxLocalCluster knoxCluster = new KnoxLocalCluster.Builder()
                .setPort(8443)
                .setPath("gateway")
                .setHomeDir("embedded_knox")
                .setCluster("default")
                .setTopology(XMLDoc.newDocument(true)
                        .addRoot("topology")
                        .addTag("gateway")
                        .addTag("provider")
                        .addTag("role").addText("authentication")
                        .addTag("enabled").addText("false")
                        .gotoParent()
                        .addTag("provider")
                        .addTag("role").addText("identity-assertion")
                        .addTag("enabled").addText("false")
                        .gotoParent()
                        .gotoParent()
                        .addTag("service")
                        .addTag("role").addText("NAMENODE")
                        .addTag("url").addText("hdfs://localhost:8020")
                        .gotoParent()
                        .addTag("service")
                        .addTag("role").addText("WEBHDFS")
                        .addTag("url").addText("http://localhost:50070/webhdfs")
                        .gotoParent()
                        .addTag("service")
                        .addTag("role").addText("OOZIE")
                        .addTag("url").addText("http://localhost:11000/oozie")
                        .gotoRoot().toString())
                .build();
        knoxCluster.start();
    }
}

However, we are unable to reach the individual services from outside the container, for example after:

$ docker build --build-arg NEXUS_USERNAME=... --build-arg NEXUS_PASSWORD=... --build-arg HTTP_PROXY_HOST=... --build-arg HTTP_PROXY_PORT=... -t cicd-tools-mini-cluster .
$ docker run --name mini-cluster --rm -p 8088:8088 -p 11000:11000 -p 50070:50070 -p 8443:8443 cicd-tools-mini-cluster
$ curl --negotiate --include --user : "http://localhost:50070/webhdfs/v1/?op=GETHOMEDIRECTORY"
curl: (52) Empty reply from server

The same request works perfectly well from a shell inside the container, so we are pretty sure the mini cluster runs fine and the individual services are actually started:

$ docker exec -it mini-cluster /bin/bash
root@9a156193169d:/usr/src/app# curl --negotiate --include --user : "http://localhost:50070/webhdfs/v1/?op=GETHOMEDIRECTORY"
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Tue, 25 Sep 2018 13:46:56 GMT
Date: Tue, 25 Sep 2018 13:46:56 GMT
Pragma: no-cache
Content-Type: application/json
X-FRAME-OPTIONS: SAMEORIGIN
Transfer-Encoding: chunked
Server: Jetty(6.1.x)

{"Path":"/user/dr.who"}

Any idea what might be going on? Is it related to the hard-coded localhost? Is there any way we can bind to 0.0.0.0 instead of localhost, so the services listen on all IP addresses and can send responses outside of the container?
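
It may well be related to the hard-coded localhost. A possible direction, as an untested sketch: dfs.namenode.rpc-bind-host and dfs.namenode.http-bind-host are standard HDFS properties for binding a NameNode to all interfaces, and the builder already accepts a Configuration, though whether MiniDFSCluster honors them here is an assumption:

// Hedged sketch: pass bind-host overrides through the Configuration the
// builder already accepts, so the NameNode listens on 0.0.0.0 instead of
// localhost only.
Configuration hdfsConf = new Configuration();
hdfsConf.set("dfs.namenode.rpc-bind-host", "0.0.0.0");
hdfsConf.set("dfs.namenode.http-bind-host", "0.0.0.0");

HdfsLocalCluster hdfsLocalCluster = new HdfsLocalCluster.Builder()
        .setHdfsNamenodePort(8020)
        .setHdfsNamenodeHttpPort(50070)
        .setHdfsTempDir("embedded_hdfs")
        .setHdfsNumDatanodes(1)
        .setHdfsEnablePermissions(false)
        .setHdfsFormat(true)
        .setHdfsEnableRunningUserAsProxyUser(true)
        .setHdfsConfig(hdfsConf) // the bind-host overrides travel with the config
        .build();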

Issue with wordcount MapReduce

First of all, thank you for your work.
I have a problem with the MapReduce wordcount example.
If I use a word other than "This" (for example "ten"), it is not counted correctly.
It seems that the reduce phase does not write the output file to HDFS completely.
Is there something I'm not understanding correctly?
Thank you.

Error Using HdfsLocalCluster

Hi,
I'm using these dependencies:

   "com.github.sakserv" % "hadoop-mini-clusters" % "0.1.8" 
   "com.github.sakserv" % "hadoop-mini-clusters-common" % "0.1.8" 
   "com.github.sakserv" % "hadoop-mini-clusters-hdfs" % "0.1.8"

Running this simple example:

        HdfsLocalCluster hdfsLocalCluster = new HdfsLocalCluster.Builder()
                .setHdfsNamenodePort(12345)
                .setHdfsNamenodeHttpPort(12341)
                .setHdfsTempDir("embedded_hdfs")
                .setHdfsNumDatanodes(1)
                .setHdfsEnablePermissions(false)
                .setHdfsFormat(true)
                .setHdfsEnableRunningUserAsProxyUser(true)
                .setHdfsConfig(new Configuration())
                .build();

        hdfsLocalCluster.start();

I'm getting the following error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/MiniDFSCluster$Builder
at com.github.sakserv.minicluster.impl.HdfsLocalCluster.start(HdfsLocalCluster.java:173)

Regards, Avihay
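
One hedged guess at the NoClassDefFoundError above: org.apache.hadoop.hdfs.MiniDFSCluster ships in Hadoop's own test artifacts rather than in hadoop-mini-clusters itself, so it may simply be missing from the classpath. Something along these lines in sbt could help (an assumption, not verified; the version is a placeholder and should match the Hadoop version of the HDP profile in use):

   // Hypothetical addition: hadoop-minicluster aggregates the Hadoop test jars
   // that contain MiniDFSCluster. The version shown is a placeholder.
   "org.apache.hadoop" % "hadoop-minicluster" % "2.7.1"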

Add support for 2.5.0

These mini-clusters have been a huge help to us @sakserv - thanks for all your work!

Are there any plans to add support for the recently released HDP 2.5.0?

Support for System property of HADOOP_HOME in windows

I tried to run the mini YARN cluster, but while starting it I got the following exception:

java.lang.UnsatisfiedLinkError: 
Can't load library: C:\develop\myfolder\..\.\windows_libs\null\lib\hadoop.dll

even though I have my HADOOP_HOME set up correctly.
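
The "null" segment in the failing path (windows_libs\null\lib) suggests that it is the hdp.release.version system property, not HADOOP_HOME, that is unresolved at startup. A hedged sketch, mirroring the dockerized ClustersLauncher example earlier on this page:

// Hedged sketch: set the HDP release version (pick the one matching your
// windows_libs directory) before starting any mini cluster, as done in the
// ClustersLauncher example above.
System.setProperty("hdp.release.version", "2.6.2.0");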

java.lang.NoSuchMethodError when trying to use hadoop-mini-clusters-yarn for Spark tests

I'm working on testing a Spark job by submitting it to a local YARN implementation. To do that, spark-yarn is required. However, when I try to run yarnLocalCluster.start() after building with it, I get the following error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: method <init>(Ljava/lang/String;)V not found

This looks like some kind of version conflict between YARN components. Are there any workarounds until it's fixed?

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.test.framework</groupId>
    <artifactId>test-sample</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>com.github.sakserv</groupId>
            <artifactId>hadoop-mini-clusters</artifactId>
            <version>0.1.13</version>
        </dependency>
        <dependency>
            <groupId>com.github.sakserv</groupId>
            <artifactId>hadoop-mini-clusters-common</artifactId>
            <version>0.1.13</version>
        </dependency>
        <dependency>
            <groupId>com.github.sakserv</groupId>
            <artifactId>hadoop-mini-clusters-yarn</artifactId>
            <version>0.1.13</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-yarn_2.11</artifactId>
            <version>2.2.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
</project>

TestSample.java:

package org.test.framework;

import org.apache.hadoop.conf.Configuration;

import com.github.sakserv.minicluster.impl.YarnLocalCluster;

public class TestSample {
    public static void main(String[] args) throws Exception {
        YarnLocalCluster yarnLocalCluster = new YarnLocalCluster.Builder()
                .setNumNodeManagers(1)
                .setNumLocalDirs(1)
                .setNumLogDirs(1)
                .setResourceManagerAddress("localhost:8032")
                .setResourceManagerHostname("localhost")
                .setResourceManagerSchedulerAddress("localhost:8030")
                .setResourceManagerResourceTrackerAddress("localhost:8031")
                .setResourceManagerWebappAddress("localhost:8042")
                .setUseInJvmContainerExecutor(false)
                .setConfig(new Configuration())
                .build();
        yarnLocalCluster.start();
    }
}

Full backtrace:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: method <init>(Ljava/lang/String;)V not found
	at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer.<init>(ContainerAllocationExpirer.java:36)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:437)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1006)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:266)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:304)
	at org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:99)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:444)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
	at org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:272)
	at com.github.sakserv.minicluster.impl.YarnLocalCluster.start(YarnLocalCluster.java:231)
	at org.test.framework.TestSample.main(TestSample.java:21)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
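
A hedged workaround sketch, assuming the clash is between the YARN jars pulled in by spark-yarn and those expected by the mini cluster: exclude Hadoop's transitive dependencies from the Spark artifacts so a single YARN version wins. Wildcard exclusions require Maven 3.2.1+; this is an untested sketch, not a confirmed fix:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-yarn_2.11</artifactId>
    <version>2.2.0</version>
    <scope>provided</scope>
    <exclusions>
        <!-- Hypothetical: keep spark-yarn from dragging in a second Hadoop/YARN. -->
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>*</artifactId>
        </exclusion>
    </exclusions>
</dependency>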

oozie shared lib

Hi,
I am trying to do the same thing, where I create a LocalOozie instance to test my Oozie workflows.
However, when I try to run a workflow against it, I get an error in the java action saying "Could not find Oozie Share Lib".
How does your Oozie server create the share lib folder?

thanks
mohnish
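
For reference, the dockerized ClustersLauncher example earlier on this page seeds the share lib explicitly with OozieShareLibUtil before starting the server. A condensed excerpt of that same pattern (the calls below are taken verbatim from that example):

OozieShareLibUtil oozieShareLibUtil = new OozieShareLibUtil(
        oozieLocalServer.getOozieHdfsShareLibDir(),
        oozieLocalServer.getOozieShareLibCreate(),
        oozieLocalServer.getOozieLocalShareLibCacheDir(),
        oozieLocalServer.getOoziePurgeLocalShareLibCache(),
        hdfsLocalCluster.getHdfsFileSystemHandle(),
        oozieLocalServer.getOozieShareLibFrameworks());
// Create the share lib in HDFS, then start the local Oozie server.
oozieShareLibUtil.createShareLib();
oozieLocalServer.start();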

Hbase mini cluster is not supporting multiple client connections

I tried running 2 client connections to an already running HBase mini cluster; however, only the first one works. The second one causes an error on the HBase server, and the server stops responding.

Does the HBase mini cluster allow more than one connection?
I tried setting .setMaxClientCnxns(10) in ZooKeeper, but that also didn't help.

Error message:
up and running...
2017-06-23 07:38:05,809 FATAL RS_CLOSE_REGION-BLRAKU28166400:51269-0 org.apache.hadoop.hbase.regionserver.HRegionServer ABORTING region server blraku28166400.sapient.com,51269,1498199848570: Caught throwable while processing event M_RS_CLOSE_REGION
java.lang.NoSuchMethodError: com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V
at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1526)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1367)
at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:138)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
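
A hedged reading of this NoSuchMethodError: Closeables.closeQuietly(Closeable) was removed from Guava in release 16.0, so a newer Guava on the classpath breaks HBase regardless of how many clients connect. One possible Maven pin, assuming that diagnosis (the exact Guava version HBase needs here is an assumption):

<dependency>
    <!-- Hypothetical pin: an older Guava that still has
         Closeables.closeQuietly(Closeable). -->
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>14.0.1</version>
</dependency>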

support request - dfs mini cluster

We want to connect to the DFS mini cluster as hdfs://dfsminiclusteripaddress:port/ instead of localhost, but most blogs never talk about remote connections from a Hadoop client.
Does this package support remote connectivity from a Hadoop client? If not, could you please explain what the difficulties would be in connecting to this cluster from a remote machine?
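
For what it's worth, a minimal sketch of a remote Hadoop client, assuming the NameNode port from the HDFS example in this README (12345), a reachable host name ("mini-cluster-host" is a placeholder), and that the NameNode is not bound to localhost only (see the dockerized-cluster discussion above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical host; port 12345 matches the setHdfsNamenodePort value
        // used in the HDFS example of this README.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://mini-cluster-host:12345");
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/")));
    }
}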

InJVMContainerExecutor

When I try to use the in-JVM container executor, I get the following error:
java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.v2.app.MRAppMaster

I am running the MiniYARNCluster from within JUnit tests.
It ignores the classpath created by the DefaultContainerExecutor in the shell script.

oozie-mini-cluster fails with "real" workflow. Seems like oozie shared lib misconfigured

Hi, it's a successor of #21.
PR is here:
#29

So, here is the exception; I suppose it's the root cause of the failure:

2017-03-08 17:16:00 INFO  JPAService:520 - JPA configuration: DriverClassName=org.hsqldb.jdbcDriver,Url=jdbc:hsqldb:mem:oozie-db;create=true,Username=sa,Password=***,MaxActive=10,TestOnBorrow=false,TestOnReturn=false,TestWhileIdle=false,
2017-03-08 17:16:00 INFO  CodecFactory:520 - Using gz as output compression codec
2017-03-08 17:16:00 INFO  ActionService:520 - Initialized action types: [hive, map-reduce, :FORK:, :JOIN:, :END:, hive2, ssh, fs, switch, pig, distcp, java, shell, spark, :START:, sqoop, :KILL:, email, sub-workflow]
2017-03-08 17:16:00 ERROR ShareLibService:517 - org.apache.oozie.service.ServiceException: E0104: Could not fully initialize service [org.apache.oozie.service.ShareLibService], Not able to cache sharelib. An Admin needs to install the sharelib with oozie-setup.sh and issue the 'oozie admin' CLI command to update the sharelib
org.apache.oozie.service.ServiceException: E0104: Could not fully initialize service [org.apache.oozie.service.ShareLibService], Not able to cache sharelib. An Admin needs to install the sharelib with oozie-setup.sh and issue the 'oozie admin' CLI command to update the sharelib
	at org.apache.oozie.service.ShareLibService.init(ShareLibService.java:132)
	at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
	at org.apache.oozie.service.Services.setService(Services.java:372)
	at org.apache.oozie.service.Services.loadServices(Services.java:305)
	at org.apache.oozie.service.Services.init(Services.java:213)
	at org.apache.oozie.local.LocalOozie.start(LocalOozie.java:64)
	at com.github.sakserv.minicluster.impl.OozieLocalServer.start(OozieLocalServer.java:255)
	at com.github.sakserv.minicluster.impl.OozieLocalServerIntegrationTest.setUp(OozieLocalServerIntegrationTest.java:103)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost:20112/tmp/share_lib, expected: file:///
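
The "Wrong FS ... expected: file:///" at the bottom suggests the Oozie server was initialized with a Configuration whose fs.defaultFS still points at the local filesystem, while the share lib dir is an HDFS path. A hedged sketch, reusing the wiring from the dockerized ClustersLauncher example earlier on this page:

// Hedged sketch: hand Oozie the HDFS cluster's own Configuration so
// fs.defaultFS matches the hdfs:// share lib location.
OozieLocalServer oozieLocalServer = new OozieLocalServer.Builder()
        .setOozieHdfsDefaultFs(hdfsLocalCluster.getHdfsConfig().get("fs.defaultFS"))
        .setOozieConf(hdfsLocalCluster.getHdfsConfig())
        .setOozieHdfsShareLibDir("/user/" + System.getProperty("user.name") + "/share/lib")
        // ... remaining builder options as in the ClustersLauncher example ...
        .build();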

kafka_2.10 to kafka_2.11

Please upgrade kafka_2.10 to kafka_2.11; otherwise the following exception is thrown:

java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
at kafka.message.MessageSet.<init>(MessageSet.scala:71)
	at kafka.message.ByteBufferMessageSet.<init>(ByteBufferMessageSet.scala:259)
	at kafka.message.MessageSet$.<init>(MessageSet.scala:31)
	at kafka.message.MessageSet$.<clinit>(MessageSet.scala)
	at kafka.server.Defaults$.<init>(KafkaConfig.scala:49)
	at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
	at kafka.log.Defaults$.<init>(LogConfig.scala:35)
	at kafka.log.Defaults$.<clinit>(LogConfig.scala)
	at kafka.log.LogConfig$.<init>(LogConfig.scala:248)
	at kafka.log.LogConfig$.<clinit>(LogConfig.scala)
	at kafka.server.KafkaConfig$.<init>(KafkaConfig.scala:270)
	at kafka.server.KafkaConfig$.<clinit>(KafkaConfig.scala)
	at kafka.server.KafkaConfig.fromProps(KafkaConfig.scala)
	at com.github.sakserv.minicluster.impl.KafkaLocalBroker.configure(KafkaLocalBroker.java:223)
	at com.github.sakserv.minicluster.impl.KafkaLocalBroker.start(KafkaLocalBroker.java:164)
	at com.zqh.spark.connectors.kafka.KafkaTestBase.setUp(KafkaTestBase.scala:39)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 33 more

If I exclude kafka_2.10 and use kafka_2.11, I still get exceptions:

java.lang.NoClassDefFoundError: kafka/utils/Time
	......
	at com.github.sakserv.minicluster.impl.KafkaLocalBroker.start(KafkaLocalBroker.java:194)
	at com.zqh.spark.connectors.kafka.KafkaTestBase.setUp(KafkaTestBase.scala:39)

I'm using 0.1.13

Exception while setting up cluster

I am getting "java.lang.IllegalArgumentException: The value of property bind.address must not be null" while using the HDFS mini cluster. Can anyone please help me resolve this issue?

Code snippet:

static HdfsLocalCluster hdfsLocalCluster = null;

@Before
public void setup() throws Exception {
    System.setProperty("HADOOP_HOME", "D:\\myProject\\hadoopFiles\\hadoop-mini-clusters-master\\" +
            "windows_libs\\2.5.0.0\\");
    Configuration conf = new Configuration();

    hdfsLocalCluster = new HdfsLocalCluster.Builder()
            .setHdfsNamenodePort(12345)
            .setHdfsNamenodeHttpPort(8080)
            .setHdfsTempDir("embedded_hdfs")
            .setHdfsNumDatanodes(1)
            .setHdfsEnablePermissions(false)
            .setHdfsFormat(true)
            .setHdfsEnableRunningUserAsProxyUser(true)
            .setHdfsConfig(conf)
            .build();
    try {
        hdfsLocalCluster.start();
    } catch (Exception ex) {
        System.out.println("Exception:");
        ex.printStackTrace();
        throw new Exception(ex.getMessage());
    }
}

Exception:

java.lang.IllegalArgumentException: The value of property bind.address must not be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1142)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1123)
at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:449)
at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:398)
at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:110)
at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:324)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:155)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:834)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:692)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:898)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:877)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1603)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1151)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1026)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:819)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
at com.github.sakserv.minicluster.impl.HdfsLocalCluster.start(HdfsLocalCluster.java:179)
at com.symantec.adl.sdk.trident.utils.SetupTestClusters.setupHDFSLocalCluster(SetupTestClusters.java:113)
at test.java.TestReadHDFS.setup(TestReadHDFS.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

I wasn't able to run the Oozie mini cluster with a "real" workflow

Root cause: something odd with the share lib.
Here is the code for com.github.sakserv.minicluster.impl.OozieLocalServerIntegrationTest that reproduces the issue.

@Test
public void testSubmitWorkflow() throws Exception {

    LOG.info("OOZIE: Test Submit Workflow Start");

    FileSystem hdfsFs = hdfsLocalCluster.getHdfsFileSystemHandle();
    OozieClient oozie = oozieLocalServer.getOozieClient();

    Path appPath = new Path(hdfsFs.getHomeDirectory(), "testApp");
    hdfsFs.mkdirs(new Path(appPath, "lib"));
    Path workflow = new Path(appPath, "workflow.xml");

    // write workflow.xml
    String wfApp = "<workflow-app name=\"sugar-option-decision\" xmlns=\"uri:oozie:workflow:0.5\">\n" +
            "  <global>\n" +
            "    <job-tracker>${jobTracker}</job-tracker>\n" +
            "    <name-node>${nameNode}</name-node>\n" +
            "  </global>\n" +
            "  <start to=\"first\"/>\n" +
            "  <action name=\"first\">\n" +
            "    <map-reduce> </map-reduce>\n" +
            "    <ok to=\"decision-second-option\"/>\n" +
            "    <error to=\"kill\"/>\n" +
            "  </action>\n" +
            "  <decision name=\"decision-second-option\">\n" +
            "    <switch>\n" +
            "      <case to=\"option\">${doOption}</case>\n" +
            "      <default to=\"second\"/>\n" +
            "    </switch>\n" +
            "  </decision>\n" +
            "  <action name=\"option\">\n" +
            "    <map-reduce> </map-reduce>\n" +
            "    <ok to=\"second\"/>\n" +
            "    <error to=\"kill\"/>\n" +
            "  </action>\n" +
            "  <action name=\"second\">\n" +
            "    <map-reduce> </map-reduce>\n" +
            "    <ok to=\"end\"/>\n" +
            "    <error to=\"kill\"/>\n" +
            "  </action>\n" +
            "  <kill name=\"kill\">\n" +
            "    <message>\n" +
            "      Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]\n" +
            "    </message>\n" +
            "  </kill>\n" +
            "  <end name=\"end\"/>\n" +
            "</workflow-app>";

    Writer writer = new OutputStreamWriter(hdfsFs.create(workflow));
    writer.write(wfApp);
    writer.close();

    // write job.properties
    Properties conf = oozie.createConfiguration();
    conf.setProperty(OozieClient.APP_PATH, workflow.toString());
    conf.setProperty(OozieClient.USER_NAME, UserGroupInformation.getCurrentUser().getUserName());
    conf.setProperty("nameNode", "hdfs://localhost:" + hdfsLocalCluster.getHdfsNamenodePort());
    conf.setProperty("jobTracker", mrLocalCluster.getResourceManagerAddress());
    conf.setProperty("doOption", "true");

    // submit and check
    final String jobId = oozie.run(conf);
    WorkflowJob wf = oozie.getJobInfo(jobId);
    assertNotNull(wf);
    assertEquals(WorkflowJob.Status.RUNNING, wf.getStatus());

    // poll until the workflow leaves the RUNNING state
    while (true) {
        Thread.sleep(1000);
        wf = oozie.getJobInfo(jobId);
        if (wf.getStatus() == WorkflowJob.Status.FAILED
                || wf.getStatus() == WorkflowJob.Status.KILLED
                || wf.getStatus() == WorkflowJob.Status.PREP
                || wf.getStatus() == WorkflowJob.Status.SUCCEEDED) {
            break;
        }
    }

    wf = oozie.getJobInfo(jobId);
    assertEquals(WorkflowJob.Status.SUCCEEDED, wf.getStatus());

    LOG.info("OOZIE: Workflow: {}", wf.toString());
    hdfsFs.close();
}

I would like to try to debug it. Where do the Oozie logs go?

HBase dependencies out of sync

I'm trying to use an HBaseLocalCluster, but I've noticed that the HBase dependencies in your POM are out of sync with the rest of the dependencies.

<hbase.version>0.98.4.2.2.6.9-1-hadoop2</hbase.version>

The HBase dependencies are at version 0.98.4.2.2.6.9-1-hadoop2, which corresponds to HDP version 2.2.6, while the rest of the dependencies correspond to HDP version 2.3.0.

Also, since this is a fat JAR, I can't just override these dependencies in my own POM. Are there any plans to release a skinny JAR? That would also allow me to completely exclude dependencies I'm not using right now (Mongo, MySQL, ActiveMQ, etc.) without having to consume a 100 MB+ fat JAR.

HBase mini cluster fat jar

The HBase mini cluster uses the Shade plugin to prepare its artifact. It packs a humongous number of libraries inside the artifact without actually relocating their packages.
Is this done on purpose?
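
For comparison, package relocation with the Shade plugin usually looks something like this — the pattern shown is illustrative, not taken from this project's POM:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
            <configuration>
                <relocations>
                    <!-- rewrite the bundled Guava packages so they cannot
                         clash with a consumer's own Guava version -->
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <shadedPattern>shaded.com.google.common</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>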

Connecting to Storm MiniCluster via NimbusClient

Is it possible to connect to the mini cluster via NimbusClient?

Config conf = new Config();
List<String> stormNimbusSeeds = new ArrayList<>();
stormNimbusSeeds.add("localhost");
stormNimbusSeeds.add("192.168.26.139");
conf.put(Config.NIMBUS_SEEDS, stormNimbusSeeds);

ZookeeperLocalCluster zookeeperLocalCluster = new ZookeeperLocalCluster.Builder()
        .setPort(12345)
        .setTempDir("embedded_zookeeper")
        .setZookeeperConnectionString("localhost:12345")
        .setMaxClientCnxns(60)
        .setElectionPort(20001)
        .setQuorumPort(20002)
        .setDeleteDataDirectoryOnClose(false)
        .setServerId(1)
        .setTickTime(2000)
        .build();
zookeeperLocalCluster.start();

StormLocalCluster stormLocalCluster = new StormLocalCluster.Builder()
        .setZookeeperHost("localhost")
        .setZookeeperPort(12345L)
        .setEnableDebug(true)
        .setNumWorkers(1)
        .setStormConfig(new Config())
        .build();
stormLocalCluster.start();

NimbusClient nimbusClient = NimbusClient.getConfiguredClient(conf);

09:47:11.280 [main] WARN org.apache.storm.utils.NimbusClient - Ignoring exception while trying to get leader nimbus info from localhost. will retry with a different seed host.
java.lang.RuntimeException: org.apache.storm.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)

WindowsLibsUtils hard-codes relative path to DLLs

The WindowsLibsUtils initialization uses a hard-coded relative path to find the DLLs needed to run Hadoop/HDFS. If the hadoop-mini-clusters dependency is used in a multi-module project, I need to have the DLLs available in each project (or actually, in the parent of each project, which is complex when the module graph is a tree).

Would it be possible to make this an injectable system property the same way HBase uses HADOOP_HOME to find the location for winutils.exe?

Thanks!
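
As a sketch of what I'm asking for — the property name here is hypothetical, and the default path is only an approximation of the current relative-path behavior; hadoop.home.dir is the standard knob Hadoop's Shell class uses to locate winutils.exe:

import java.io.File;

public final class WindowsLibs {

    // Sketch only: let callers override the DLL directory via a system
    // property (name invented for illustration), falling back to a relative
    // path like the one the current code hard-codes.
    public static void init() {
        String libsDir = System.getProperty(
                "hadoop.mini.clusters.windows.libs.dir",
                ".." + File.separator + "windows_libs");
        System.setProperty("hadoop.home.dir", new File(libsDir).getAbsolutePath());
    }
}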

Unable to Upgrade Java to Java 11

We are getting issues like the ones below and are unable to upgrade Hadoop and Java.

tools.jar was removed in JDK 9.

If we try to upgrade the HBase server, we get the error below.

12:59:04 ERROR o.a.hadoop.hbase.MiniHBaseCluster - Error starting cluster
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/conf/ConfigurationObserver
	at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:211)


How can this repo help me?

Sorry, this is really not an issue.
I tried to get your email from your profile, but had no success.

I'm trying to study the Hadoop ecosystem to see how I can use it to develop a system, one that I can debug on my localhost.

After spending a few minutes checking out and cloning the project, I still don't understand how I can use it.

Can you please enlighten me?

KafkaServer API changes

kafka.server.KafkaServer appears to change nearly every release. Use reflection to handle the evolution.
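
A minimal sketch of the idea, assuming only that the class name stays stable; the constructor-matching heuristic is illustrative, since the real signatures vary by Kafka release:

import java.lang.reflect.Constructor;

public final class KafkaServerFactory {

    // Sketch: locate a KafkaServer constructor whose first argument is a
    // KafkaConfig, whatever the rest of the (version-dependent) signature is.
    public static Constructor<?> findServerConstructor() throws ClassNotFoundException {
        Class<?> serverClass = Class.forName("kafka.server.KafkaServer");
        for (Constructor<?> c : serverClass.getDeclaredConstructors()) {
            Class<?>[] params = c.getParameterTypes();
            if (params.length > 0 && "kafka.server.KafkaConfig".equals(params[0].getName())) {
                return c;
            }
        }
        throw new IllegalStateException("No KafkaServer constructor taking a KafkaConfig was found");
    }
}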

junit5

I can't get JUnit 5 working with your lib. Any tips?

PS: awesome work! GJ!
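
In case it helps: the clusters are plain Java objects with start()/stop(), so they should wire into the standard JUnit 5 lifecycle. A minimal sketch, where buildCluster() is a hypothetical helper standing in for the Builder chain shown earlier in this README:

import com.github.sakserv.minicluster.impl.HdfsLocalCluster;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class HdfsLocalClusterJUnit5Test {

    static HdfsLocalCluster hdfsLocalCluster;

    @BeforeAll
    static void startCluster() throws Exception {
        hdfsLocalCluster = buildCluster();
        hdfsLocalCluster.start();
    }

    @AfterAll
    static void stopCluster() throws Exception {
        hdfsLocalCluster.stop();
    }

    @Test
    void filesystemHandleIsAvailable() throws Exception {
        Assertions.assertNotNull(hdfsLocalCluster.getHdfsFileSystemHandle());
    }

    private static HdfsLocalCluster buildCluster() {
        // hypothetical helper: fill in with your HdfsLocalCluster.Builder chain
        throw new UnsupportedOperationException("construct the cluster here");
    }
}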

Wrong FS Error when trying to run Local Oozie Server together with Mini HDFS Cluster

I'm working on a setup for testing Oozie workflows in the IDE. An attempt to start the Oozie Local Server against a Mini HDFS Cluster instance fails with the following error:

2017-08-16 13:20:06 WARN  SchedulerService:523 - Error executing runnable [], Wrong FS: hdfs://localhost:12345/tmp/oozie_share_lib, expected: file:///
java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost:12345/tmp/oozie_share_lib, expected: file:///
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:665)
	at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87)
	at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:440)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1547)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1590)
	at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:676)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1547)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1590)
	at org.apache.oozie.service.ShareLibService.purgeLibs(ShareLibService.java:491)
	at org.apache.oozie.service.ShareLibService.access$000(ShareLibService.java:64)
	at org.apache.oozie.service.ShareLibService$1.run(ShareLibService.java:153)
	at org.apache.oozie.service.SchedulerService$2.run(SchedulerService.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

TestSample.java:

package org.test.framework;
import com.google.common.collect.Lists;
import org.apache.hadoop.conf.Configuration;

import com.github.sakserv.minicluster.impl.HdfsLocalCluster;
import com.github.sakserv.minicluster.impl.OozieLocalServer;
import com.github.sakserv.minicluster.oozie.sharelib.Framework;

public class TestSample {
    public static void main(String[] args) throws Exception {
        HdfsLocalCluster hdfsLocalCluster = new HdfsLocalCluster.Builder()
                .setHdfsNamenodePort(12345)
                .setHdfsNamenodeHttpPort(12341)
                .setHdfsTempDir("embedded_hdfs")
                .setHdfsNumDatanodes(1)
                .setHdfsEnablePermissions(false)
                .setHdfsFormat(true)
                .setHdfsEnableRunningUserAsProxyUser(true)
                .setHdfsConfig(new Configuration())
                .build();

        hdfsLocalCluster.start();

        OozieLocalServer oozieLocalServer = new OozieLocalServer.Builder()
                .setOozieTestDir("embedded_oozie")
                .setOozieHomeDir("oozie_home")
                .setOozieUsername(System.getProperty("user.name"))
                .setOozieGroupname("testgroup")
                .setOozieYarnResourceManagerAddress("localhost")
                .setOozieHdfsDefaultFs("hdfs://localhost:12345")
                .setOozieConf(new Configuration())
                .setOozieHdfsShareLibDir("/tmp/oozie_share_lib")
                .setOozieShareLibCreate(Boolean.TRUE)
                .setOozieLocalShareLibCacheDir("share_lib_cache")
                .setOoziePurgeLocalShareLibCache(Boolean.FALSE)
                .setOozieShareLibFrameworks(
                        Lists.newArrayList(Framework.MAPREDUCE_STREAMING, Framework.OOZIE))
                .build();
        oozieLocalServer.start();
    }
}

I've followed the example instructions in README.MD, just changing the port to match the HDFS cluster's. Not sure if this is related to #30.
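
A guess at a workaround, not verified: the "expected: file:///" part suggests the Configuration Oozie received never learned about the mini HDFS cluster, so setting the standard fs.defaultFS key before passing it in might help:

Configuration oozieConf = new Configuration();
// Point the default filesystem at the mini cluster so the share lib purge
// thread stops resolving hdfs:// paths against the local file:/// filesystem.
oozieConf.set("fs.defaultFS", "hdfs://localhost:12345");
// ...then pass oozieConf to .setOozieConf(...) instead of new Configuration()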

Possible errors in examples

All examples that set the resource manager hostname and address have the two values swapped:

.setResourceManagerAddress("localhost")
.setResourceManagerHostname("localhost:37001")

Should be:

.setResourceManagerAddress("localhost:37001")
.setResourceManagerHostname("localhost")

Oozie sharelib location issue

Hello @sakserv,

Thanks for this powerful tool, which makes it easy to build testing tools around the Hadoop ecosystem.

I'm encountering a problem when launching an Oozie workflow. I think Oozie does not locate the sharelib, which is installed at /user/root/share/lib/oozie/lib_20181217163941/share/lib:

OOZIE: Writing share lib contents to: /user/root/share/lib/oozie/lib_20181217163941/share/lib

Oozie workflow logs:

ACTION[0000000-181217163855317-oozie-root-W@shell-node] Error starting action [shell-node]. ErrorType [FAILED], ErrorCode [It should never happen], Message [File /user/root/share/lib does not exist]
org.apache.oozie.action.ActionExecutorException: File /user/root/share/lib does not exist
	at org.apache.oozie.action.hadoop.JavaActionExecutor.addSystemShareLibForAction(JavaActionExecutor.java:785)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.addAllShareLibs(JavaActionExecutor.java:863)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.setLibFilesArchives(JavaActionExecutor.java:854)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1099)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1379)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:65)
	at org.apache.oozie.command.XCommand.call(XCommand.java:287)
	at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:331)
	at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:260)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:178)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Maybe the share lib should be copied into /user/root/share/lib/lib_20181217163941 instead of /user/root/share/lib/oozie/lib_20181217163941/share/lib?

Regards,
Zied
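
A possible workaround until the layout question is settled (untested; oozie.libpath and oozie.use.system.libpath are standard Oozie job properties, and the path is the one from the log above): bypass system share lib resolution and point the job directly at the directory the mini cluster wrote.

# job.properties (workaround sketch)
oozie.use.system.libpath=false
oozie.libpath=/user/root/share/lib/oozie/lib_20181217163941/share/lib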

NullPointerException in property-parser

I get a NullPointerException when I use propertyParser.getProperty(ConfigVars.HDFS_ENABLE_RUNNING_USER_AS_PROXY_USER).

I'm using property-parser 0.0.3 and hadoop-mini-clusters 0.1.16.
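
For anyone hitting this: an NPE out of getProperty() usually means the key was never loaded, so it's worth checking that a props file defining the key was actually parsed first. A sketch of the usual setup (treat the exact calls as assumptions based on the integration tests):

// Make sure the parser has loaded a props file that defines the key
// before calling getProperty(); otherwise the lookup returns null.
PropertyParser propertyParser = new PropertyParser(ConfigVars.DEFAULT_PROPS_FILE);
propertyParser.parsePropsFile();
String proxyUser = propertyParser.getProperty(ConfigVars.HDFS_ENABLE_RUNNING_USER_AS_PROXY_USER);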

Class not found Exception while running HBASE Mini Cluster (org.apache.hadoop.hbase.backup.impl.BackupException)

I have an existing project running Storm that writes into HBase, so I use the HBase client libs. When I run this I get a ClassNotFoundException.
I am pretty sure it is a class conflict with the HBase client libraries, as I can get this to run cleanly in a project that has no HBase client dependencies.
I use an older version of the HBase libs:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.1.2</version>
</dependency>

I tried multiple versions of the mini cluster to see if it works (0.1.6 to 0.1.14). Would anyone know which versions work well together? Or rather, is there a version compatible with 1.1.2?
Error.log

Problem with org.apache.hive hive-jdbc version

Hi,
I have a problem when I add this dependency to my pom.xml:

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1</version>
</dependency>

It throws this exception: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy

Any idea? Which version do I have to use?
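
For what it's worth, a NoSuchMethodError like this usually means two different Hive versions ended up on the classpath. A quick way to see which Hive artifacts each dependency pulls in:

mvn dependency:tree -Dincludes=org.apache.hive

Aligning the hive-jdbc version with whatever the mini cluster's HDP profile resolves to should remove the conflict.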

Insert with Spark 2 Sql Context on Hadoop Mini clusters

I'm trying to use the Spark SQL context with hadoop-mini-clusters, specifically to run inserts from the Spark SQL context. Could this be made possible?

I have Hive, the Thrift server, and the Spark SQL context working; creating tables and SELECT statements work fine. But when I try to insert test data using the Spark SQL context like this, I get this exception: java.lang.IncompatibleClassChangeError: Implementing class

String warehouseLocation = "file:${system:user.dir}/" + hiveLocalMetaStore.getHiveWarehouseDir();
SparkSession spark = SparkSession
        .builder()
        .appName(PersonControlTest.class.getName())
        .config("spark.sql.warehouse.dir", warehouseLocation)
        .config("hive.metastore.uris", "thrift://localhost:" + hiveLocalMetaStore.getHiveMetastorePort())
        .config("spark.master", "local[2]")
        .enableHiveSupport()
        .getOrCreate();
spark.sql("INSERT INTO TABLE test VALUES ('firstname', 'lastname')");

[main] INFO org.apache.hive.service.AbstractService - Service:SessionManager is stopped.

java.lang.IncompatibleClassChangeError: Implementing class
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:226)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:142)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:310)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)

NPE when using KdcLocalCluster

On version 0.1.14, an NPE occurs when using KdcLocalCluster, in KeyStoreTestUtil at line 51. That code inspects the URL provided by the classloader in order to find the classpath of the local cluster.

The code as written works when the classes are unpacked build output, but not when they are packaged in JAR files, because url.toURI().getPath() returns null in that case. It's not clear why, but string manipulation based on the first occurrence of / would probably work better.

Additionally, the substring manipulation used in this class won't remove the !, and it will only refer to the one JAR file instead of everything in that folder. It's not clear whether or not this is actually desirable; I would need to drill further into where this classpath is used later.
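
A sketch of the string-based handling I have in mind — it assumes the incoming URL is either a plain file: URL or a jar: URL, and the exact integration point in KeyStoreTestUtil is illustrative:

import java.io.File;
import java.net.URL;

public final class ClasspathFromUrl {

    // Turn a classloader resource URL into a filesystem path without relying
    // on url.toURI().getPath(), which returns null for jar: URLs.
    public static File toFile(URL url) {
        String spec = url.toString(); // e.g. jar:file:/x/lib/foo.jar!/some/Resource.class
        if (spec.startsWith("jar:")) {
            spec = spec.substring("jar:".length());
        }
        int bang = spec.indexOf('!');
        if (bang >= 0) {
            spec = spec.substring(0, bang); // strip the intra-jar part, including the '!'
        }
        if (spec.startsWith("file:")) {
            spec = spec.substring("file:".length());
        }
        return new File(spec);
    }
}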
